Be More Assertive about Your Testbench Code

Developing verification environments revolves around writing checks. We need to separate checking the DUT from checking the testbench code itself. DUT checks represent the “business logic” of our verification software. The code we write isn’t perfect, though. Sprinkling the testbench with checks of its own helps to ensure its correctness by catching programming errors at their source.


This is a companion discussion topic for the original entry at https://verificationgentleman.netlify.app/2015/09/14/be-more-assertive-about-your-testbench-code.html

First, let me say – YES! Everybody should be writing more assertions inside their TB code. They are the best way to immediately catch any unexpected behaviour. And, yes, we shouldn’t be using the bare ‘assert’ statement – we should go with a macro.

Now, for me, the main reason for using the macro is to be able to control the error message and reporting, not so much for the case where assertions are turned off and we don’t run the code inside them. There’s a big difference between a testbench and a game using the Unreal Engine :) – in the case of a game, it is imperative to squeeze every little bit of performance out of the code, and it’s never acceptable for the game to crash with an internal code assertion, so production code will always turn off assertions. In testbenches, however, there is really no good reason to ever run without TB assertions on (IMO) – the relatively small increase in performance is not enough to offset the possibility of (1) missing bugs (both TB and RTL) and (2) making it more difficult to categorize and understand fails.

As for improvements to the assert macros, here are a few things that my assert macro has (a rough sketch follows the list):

  • An optional second ‘msg’ parameter that describes the failure. This is much more user-friendly than “Assertion ‘x >= 0’ has failed”, which usually means little without context. It also encourages self-documenting code: `assert(x >= 0, "sqrt() can't be called with negative numbers")`
  • Speaking of context, I add __FILE__ and __LINE__ to the fail message, so that each assertion fail is easy to find, and to give additional uniqueness to each assert fail when categorizing fail signatures
  • Before the fatal error, I throw in $stacktrace. The majority of assertions check function input parameters, so when one of them fails, all you know is that someone called the function with illegal parameters. $stacktrace shows you exactly where the call came from.
  • To play nice with UVM testbenches, an ifdef can select between $fatal and `uvm_fatal.
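
Putting those bullets together, a minimal sketch of what such a macro could look like is shown below. The names `TB_ASSERT and MY_TB_USES_UVM and the exact message format are placeholders of my own, not the actual macro from this comment, and the UVM branch assumes uvm_macros.svh has already been included:

`ifdef MY_TB_USES_UVM
  // UVM flavour: report through `uvm_fatal; file and line go into the message
  // body because the report server may truncate its own file path column.
  `define TB_ASSERT(COND, MSG = "") \
    if (!(COND)) begin \
      $stacktrace; \
      `uvm_fatal("TB_ASSERT", $sformatf("'%s' failed at %s:%0d. %s", \
        `"COND`", `__FILE__, `__LINE__, MSG)) \
    end
`else
  // Plain SystemVerilog flavour: $fatal already reports file, line and scope.
  `define TB_ASSERT(COND, MSG = "") \
    if (!(COND)) begin \
      $stacktrace; \
      $fatal(1, "'%s' failed. %s", `"COND`", MSG); \
    end
`endif

// Example use:
//   `TB_ASSERT(x >= 0, "sqrt() can't be called with negative numbers")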

At work I also have a version of the macro where I can specify an extra message. You make a great point about self-documentation when using it! It’s something I’ll add soon.

Specifying __FILE__ and __LINE__ in the message is unnecessary, since calls to $fatal are required by the standard to print both of these things and the scope where they originate.

I had no idea $stacktrace() existed. That’s also a great idea.

I also thought about this topic, but I’m not really sure how to proceed. On the one hand, I could make a macro that only handles printing the error message. Users could override it to call something else entirely (`uvm_fatal or whatever). This feels a bit like overthinking it (and I have a very strong tendency to do this). On the other hand, UVM is kind of ubiquitous when using SV, so acknowledging its existence by creating such an ifdef seems reasonable.
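
For what it’s worth, the “macro that only handles the reporting” idea could be sketched roughly as below, assuming “overriding” means the user compiles their own `define before this one and an ifndef guard only installs the default otherwise (TB_ASSERT_FATAL is a name I made up for illustration):

// Default reporting, installed only if the user hasn't provided their own.
`ifndef TB_ASSERT_FATAL
  `define TB_ASSERT_FATAL(MSG) $fatal(1, MSG);
`endif

`define TB_ASSERT(COND) \
  if (!(COND)) \
    `TB_ASSERT_FATAL($sformatf("'%s' failed", `"COND`"))

// A UVM user could compile something like this before the above:
//   `define TB_ASSERT_FATAL(MSG) `uvm_fatal("TB_ASSERT", MSG)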

The `uvm_fatal message is printed using the UVM report server format, which usually truncates file paths to something that fits in N characters, so we can’t rely on that for the full file name/line number of the assertion location in UVM testbenches. That, and making asserts without the optional message more user-friendly, are the main reasons why I still put __FILE__/__LINE__ into the error message.

Another good post!

One thing that I think should also be addressed is the coverage of the assertions (and “checks”).

I found that covering the checkers’ “triggering” is a wonderful way to find out about missing checkers and branches that were not hit by your simulations (due to missing stimulus or, more often, disabled checkers and bad “corner case filtering” in the checkers).
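
To make that concrete, here is a small hypothetical example (module, signal and label names are all made up): the cover directive records whether the checker’s triggering condition was ever exercised at all, so a checker that never fires because of missing stimulus, or because it was left disabled, shows up as a coverage hole instead of silently passing.

// Hypothetical protocol checker: every request must be granted in 1 to 4 cycles.
module req_gnt_checker(input logic clk, rst_n, req, gnt);

  CHK_GNT_FOLLOWS_REQ : assert property (
      @(posedge clk) disable iff (!rst_n) req |-> ##[1:4] gnt)
    else $error("req was not granted within 4 cycles");

  // Cover the checker's triggering condition. If this never hits, the
  // assertion above only ever passed vacuously: the stimulus never drove
  // 'req', or the checker was effectively disabled.
  COV_REQ_SEEN : cover property (
      @(posedge clk) disable iff (!rst_n) req);

endmodule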

This is what I mean when I say that ‘assert’ is tightly integrated with tools. I like to tag checks (in procedural code) with:

SOME_CHECK_NAME : assert (some_condition)
  else `uvm_error(…)

This way I can back-annotate them to the verification plan. Using ‘assert’ for sanity checks would just create noise.
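
To illustrate the split, here is a made-up fragment of procedural scoreboard code (the signal and queue names are mine, and `TB_ASSERT stands for a TB assert macro like the ones sketched earlier in the thread): the named assert is a DUT check that can be back-annotated to the plan, while the macro guards the testbench’s own bookkeeping without adding noise to that namespace.

// DUT check: named, reported through UVM, traceable to the verification plan.
READ_DATA_MATCHES_MODEL : assert (actual_data == expected_data)
  else `uvm_error("READ_DATA_MATCHES_MODEL",
    $sformatf("read data 'h%0h, expected 'h%0h", actual_data, expected_data))

// TB sanity check: guards the scoreboard code itself, not the DUT.
`TB_ASSERT(expected_q.size() > 0, "scoreboard ran out of expected items")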