In the old days, people had to write all of their tests by hand. With chips getting bigger and bigger, it became clear that this painstaking process couldn't scale. Constrained random verification was invented to help us verification engineers deal with the increasing complexity of our DUTs. By describing the kind of stimulus we want to drive and letting the random generator do its thing, we can verify more with less effort. Random tests are nice, mostly because they are easier to write, but this comes at a price: it's much more difficult to say what a random test is really doing without letting it run. We typically write coverage to log what we are actually stimulating.
This is a companion discussion topic for the original entry at https://verificationgentleman.netlify.app/2015/05/23/keeping-constraints-and-covergroups-in-sync.html
About updating the endpoint, I think you can use $bits() in your better_cov_collector.svh: the endpoints can be written as 2**$bits(it.x) and 2**$bits(it.y). I haven't figured out how to do the same in cov_collector.svh yet.
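A minimal sketch of what the comment is suggesting: sizing the coverpoint bin ranges from the declared field widths via $bits(), so the bins track the fields if they are resized. The class and field names (better_cov_collector, item, it.x, it.y) are assumptions based on the comment, not taken verbatim from the original post.

```systemverilog
class item;
  rand bit [3:0] x;  // widths chosen for illustration only
  rand bit [2:0] y;
endclass

class better_cov_collector;
  item it;

  covergroup cg;
    cp_x : coverpoint it.x {
      // $bits() operates on the type of the expression, so the bin
      // endpoint follows the declared width of x automatically.
      bins all[] = { [0 : 2**$bits(it.x) - 1] };
    }
    cp_y : coverpoint it.y {
      bins all[] = { [0 : 2**$bits(it.y) - 1] };
    }
  endgroup

  function new();
    cg = new();
  endfunction
endclass
```

Because $bits() is evaluated on the type rather than the value, the bin ranges stay in sync with the field declarations without any hard-coded endpoints.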
Beware! This solution violates the separation of concerns and can hide errors in either the generation constraints or the coverage definitions. As long as you implement generation and coverage as separate concerns, you still have a chance to discover unintentional errors in either of them and/or in the RTL. Molding coverage definitions to generation constraints eliminates the natural safety net of generation vs. coverage implicitly checking each other.
At the same time Accellera is looking into unifying the two concepts in their Portable Stimulus working group. Sure you can make mistakes when defining either one, but that doesn’t mean you can’t protect yourself against them (Neil had an example of unit testing coverage groups and you could do something similar for constraints - might make a good post actually).
Also, at the end of the day you’re going to analyze your coverage so you can catch problems. When you change your coverage code you simulate and analyze the bins to make sure they’re all right. What’s to stop you from doing the same after changing your constraint code?
I’ve tried to achieve the same sync between constraints and coverage in e, but I haven’t managed yet. I’ve tried to coax an ‘is_all_iterations(…)’ constraint into generating all legal values of the struct being covered and use that in a define-as-computed macro to list the ignore bins. Specman doesn’t allow me to call ‘gen’ from within define-as-computed macros (as their bodies are evaluated before ‘generate()’).
If anyone is interested, I could make a post about those attempts, even though I didn’t end up with a working solution. Maybe it would inspire someone else to find one.