One of the most useful additions in OVM 2.1 was the objection mechanism. You could raise an objection when starting your main traffic sequence and drop it once the sequence finished, thereby allowing the simulation to stop. Being done with input traffic didn’t mean that nothing else would happen from that point on, as the DUT may have required some additional time to drain any transactions that were still being processed. Setting a drain time added an extra delay between the moment all objections were dropped and the actual stop of the simulation, making sure that there were no outstanding transactions at that point.
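In UVM terms, the pattern looks roughly like this (a minimal sketch; m_seq, m_env and the sequencer path are placeholder names):

    // inside a test; assumes import uvm_pkg::*;
    task run_phase(uvm_phase phase);
      phase.raise_objection(this, "main traffic running");
      phase.phase_done.set_drain_time(this, 100ns);  // extra time for the DUT to drain
      m_seq.start(m_env.m_agent.m_sequencer);        // drive the main stimulus
      phase.drop_objection(this, "main traffic done");
    endtask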
I have another solution … don’t use drain time! I find that a drain time is only needed when the _scoreboard_ isn’t objecting, or at least not objecting correctly. Furthermore, setting the correct drain time is a painful process: you set it to a value that is too low, then you increase it a bit (being careful not to be too generous), it’s now working great, until you run the testcase that needs some more, then you increase it again, and so on. You side-step this headache entirely if you don’t use drain time and put objections in the right spots.
I get what you mean, but the whole objection mechanism is a bit broken in UVM. Some technologists (especially the ones from Mentor Graphics) recommend only raising objections in the top-level sequence started by the test. This way you know when you’ve finished driving your stimulus, and you add a conservative drain time to make sure that all traffic has been responded to by the DUT. The reason they give is that objections from components have to propagate up the hierarchy and this wastes CPU cycles (they even had numbers to back it up). If you can afford the performance hit, then I agree, setting objections everywhere is much better. From what I know, they’ve addressed this problem in UVM 1.2 (though they created new ones): objections no longer need to propagate along the whole hierarchy, which makes the whole mechanism faster.
In this post you mentioned “I can’t just set the drain time in the base test anymore and forget about it. Calling super.run_phase(phase) in subclasses is out of the question for obvious reasons.”
Can you specify what those obvious reasons are? I was thinking that as long as the run_phase() task in the base test is virtual, the sub-tests could still call super.run_phase() without any issue.
The person who develops the testbench isn’t always the same person who writes tests. I would rather have my setup in such a way that everything is configured and test writers just have to start sequences. Making them call super.run_phase() inside their tests means they have to concern themselves with testbench issues, and I would rather avoid that. In OVM this was possible (because there were no phase-specific objections), but in UVM I had to do the super.run_phase() thing until I found this approach.
Coming back to the second half of your question (assuming we don’t set the drain time like in this post): with a test that directly inherits from the base test this isn’t a problem, because you can just call super.run_phase(), as the base test doesn’t run any stimulus. The problem I always had was when I wanted to apply the same setting in a test that inherits from a test that inherits from the base class (a grandchild class). Calling super.run_phase() in the grandchild class will also execute all of the stimulus of the child class (the one that directly inherits from the base test), which isn’t necessarily what you want.
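Something like this (class names made up):

    class base_test extends uvm_test;
      // no stimulus here, so direct children can safely call super.run_phase()
    endclass

    class child_test extends base_test;
      virtual task run_phase(uvm_phase phase);
        super.run_phase(phase);
        // ... child stimulus ...
      endtask
    endclass

    class grandchild_test extends child_test;
      virtual task run_phase(uvm_phase phase);
        super.run_phase(phase);  // also executes child_test's stimulus,
                                 // which may not be what we want
        // ... grandchild stimulus ...
      endtask
    endclass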
In my experience, objections certainly aren’t broken; in fact, they are essential ingredients of a healthy testbench. There is only a performance hit when objections are abused. (You know you are over-using objections if a component has an objection count over 1, or if an objection is raised and lowered many times in the same time step.) Proper use of objections will not only determine exactly when to end the simulation, but will also greatly help with debugging deadlocks, enable self-adjusting test cases and provide a foundation for the elegant UVM heartbeat monitor.
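As an example, a scoreboard can hold a single objection for as long as it has transactions in flight (a rough sketch; the add_expected()/match_received() hooks are made-up names for wherever your checking code queues and matches items):

    class scoreboard extends uvm_scoreboard;
      protected uvm_phase m_run_phase;
      protected int unsigned m_num_outstanding;

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      function void phase_started(uvm_phase phase);
        if (phase.get_name() == "run")
          m_run_phase = phase;  // keep a handle so we can object later
      endfunction

      function void add_expected();
        if (m_num_outstanding++ == 0)  // raise only on the 0 -> 1 transition
          m_run_phase.raise_objection(this, "transactions in flight");
      endfunction

      function void match_received();
        if (--m_num_outstanding == 0)  // drop only on the 1 -> 0 transition
          m_run_phase.drop_objection(this, "all transactions matched");
      endfunction
    endclass

Because the objection is raised only on the 0 -> 1 transition, the component’s objection count never goes above 1 and there is no flood of raise/drop calls within a single time step.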
Mentor has a case, but they shouldn’t scare away sensible use of the methodology.
I noticed that I passed this as the first argument to set_drain_time(…). This means that only objections that are raised by the test itself or any of its sub-components will trigger this drain time when they are subsequently dropped.
If you’re raising objections from sequences (via the starting_phase variable), then this won’t work, because sequences aren’t components, so they aren’t technically children of the test. To set the drain time globally, we need to set it for uvm_root by calling set_drain_time(null, …).
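Side by side (the 100ns value is arbitrary):

    task run_phase(uvm_phase phase);
      // drain time applies only to objections from the test and its children:
      phase.phase_done.set_drain_time(this, 100ns);

      // drain time applies globally, covering objections raised from sequences
      // via starting_phase (a null object is resolved to uvm_root internally):
      phase.phase_done.set_drain_time(null, 100ns);
    endtask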
Great post! One thing is not clear though: near the end of the post you have two code snippets that set the drain time. The first does it for the run_phase, the second for the main_phase. And you wrote: “If we try to do the same for the main phase, after setting the drain time it just doesn’t work.”
Did you mean that the first method (using ::get()) does work for the run_phase but not for the main_phase? I don’t understand why that would be the case. Or did I misunderstand that sentence, and you meant that using ::get() to set the drain time doesn’t work for any of the phases?
Calling uvm_main_phase::get() and trying to set the drain time on its result doesn’t work. The get() returns a different object than the one that gets passed to main_phase(uvm_phase phase) as the phase argument.
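A sketch of what this looks like from inside the phase (hedged, since this pokes at implementation behavior):

    task main_phase(uvm_phase phase);
      uvm_phase imp = uvm_main_phase::get();
      // 'imp' and 'phase' are different objects, so this has no effect:
      imp.phase_done.set_drain_time(this, 100ns);
      // 'phase' is the object the scheduler actually waits on, so this works:
      phase.phase_done.set_drain_time(this, 100ns);
    endtask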
I mainly wanted to verify that “Calling uvm_main_phase::get() and trying to set the drain time on its result doesn’t work.” holds in general, i.e. that the rule applies to any phase, wherever you call it. So setting the drain time on the result of uvm_run_phase::get() also doesn’t work? I first interpreted your post as saying that this particular call was working, but that it wouldn’t have been had it targeted the main_phase.
It is clear now that for any phase I need to set the drain time on the object passed as the phase argument (so in that phase itself), or use find_by_name() if I want to set it for a phase that I am currently not in, right?
For some reason, setting a drain time on the result of uvm_run_phase::get() does work in UVM 1.1d, even though it returns a different object than the one passed to run_phase(…). It doesn’t work for uvm_main_phase::get(), though. I’ve no idea why.
They made some changes in UVM 1.2, so it might not work for the former anymore either.
You’re better off using find_by_name(…), because that always works.
You can also use phase.find(uvm_main_phase::get(), 0), which is better because you avoid using strings. The second argument has to be 0. For the run phase, a value of 1 (i.e. stay in scope) also works; again, no idea why, as this is something that concerns the implementation.
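Both lookups together, assuming we want to set the drain time for the main phase from some other phase (the 100ns value is arbitrary):

    task run_phase(uvm_phase phase);
      uvm_phase main_ph;

      // by name (string-based, but always works):
      main_ph = phase.find_by_name("main", 0);
      main_ph.phase_done.set_drain_time(this, 100ns);

      // by implementation handle (no strings; second argument must be 0):
      main_ph = phase.find(uvm_main_phase::get(), 0);
      main_ph.phase_done.set_drain_time(this, 100ns);
    endtask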
I kind of find the whole thing rather confusing and not so well documented (from a user point of view I mean).
“I kind of find the whole thing rather confusing and not so well documented (from a user point of view I mean).”
I agree, as I do with more things in UVM. Luckily there are sites like this that give more insight. I would actually prefer a UVM 1.3 with better documentation, a split into a ‘user’ API and a ‘developer’ API, and deprecation of ‘old mechanisms’ over a bunch of new features.