Paebbels closed this issue 1 year ago.
Revision on Dev does this:
And it will fail on analyze errors. So if the build is just analyze, it will provide a proper failure and a non-zero exit code - at least to the simulator. No telling whether the simulator eats that error code or passes it on. We will have to try it.
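One way to try it: invoke the tool as a subprocess and inspect the exit code on the caller's side. The sketch below is illustrative only; a failing child process stands in for the real simulator invocation, since the actual command line depends on the tool.

```python
import subprocess
import sys

# Hedged sketch: does a non-zero exit code from the child survive to the
# caller? The inner command is a placeholder for the real simulator call.
result = subprocess.run(
    [sys.executable, "-c", "import sys; sys.exit(1)"],  # stand-in for the simulator
    capture_output=True,
)
print("FAILED" if result.returncode != 0 else "PASSED")  # prints FAILED
```

If the simulator swallows the analyze error internally, `returncode` would be 0 here and the wrapper could not detect the failure; that is exactly what needs to be tested per tool.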
PASSED indicates everything done in the build was successful. If the build only has analyze, then PASSED indicates that all analyzes were successful. It is up to the user to understand what they were building.
We could change PASSED to SUCCESS in all cases, and it would be pedantically appropriate, but it would not communicate the message to the community as well.
@Paebbels Now that Analyze and Simulate Errors have been added, I think PASSED is clear. PASSED could be downgraded to SUCCESS if no simulations were run, but right now that seems like busy work.
For a script that just creates a library or just analyzes design units, if there are no errors, then print what? The only thing I can come up with is PASSED. It is most certainly not SKIPPED since actions were done. It is most certainly not FAILED.
Let's suppose we added a fourth status value to convey that the script ran successfully but there were no simulations, say SUCCESS. However, then we will get questions about the difference between SUCCESS and PASSED, as they both seem to communicate the same information.
Also note that if you try to run a script and the script does not exist or it fails, then the status FAILED will be printed.
When there are no test cases run, and the script did not produce errors, should we print success rather than passed? Would that resolve this issue?
The scripting (writing individual `*.pro` files) has 2 use cases:
1. Compiling code into libraries only (no simulations intended).
2. Running simulations (regression tests).
Right?
In case 2: As we run simulations, we want to report passed = failed = skipped = 0 as FAILED, because a simulation script should run at least one simulation. In OSVVM's VHDL code, we have a sanity check for a minimum of triggered assertions; otherwise we mark that test case as failed due to not enough affirm calls.
In case 1: No tests are executed. We know for sure that if analyze errors > 0, we want to report an overall FAILED. But if passed = failed = skipped = 0, we don't care, and it should report SUCCESS, because we just compiled code.
The question is now: can we safely distinguish both cases based on executed TCL procedure calls and gathered statistics, or do we need a hint from the user saying this is just a library compilation script, no simulations intended?
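The decision rule from the two cases above can be sketched as follows. This is not OSVVM's actual implementation; the function name, parameters, and the `simulations_intended` hint are illustrative assumptions.

```python
def overall_status(analyze_errors, passed, failed, skipped, simulations_intended):
    """Hedged sketch: derive an overall build status from gathered
    statistics plus a hint about whether simulations were intended."""
    if analyze_errors > 0 or failed > 0:
        return "FAILED"
    if passed == failed == skipped == 0:
        # No test cases ran at all: failure for a simulation script,
        # success for a compile-only script.
        return "FAILED" if simulations_intended else "SUCCESS"
    return "PASSED"

# Case 1: compile-only script, clean analyze
print(overall_status(0, 0, 0, 0, simulations_intended=False))  # SUCCESS
# Case 2: simulation script that ran no tests
print(overall_status(0, 0, 0, 0, simulations_intended=True))   # FAILED
```

The open question is precisely whether `simulations_intended` can be inferred from the executed TCL procedure calls instead of being supplied by the user.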
The scripts take care to mark the intent to run a test case, so if the test case fails to run, we know and report FAILED.
When building OSVVMLibraries, but not running any regression tests, e.g. AXI4 (`RunAllTests.pro`), the HTML summary report shows status PASSED. This suggests a regression was executed and all is OK, but it was never run.

This situation could be identified by checking passed = failed = skipped = 0.

Current behavior:

Expected behavior: Report a status of SKIPPED when the passed count = 0. Maybe display the status row in yellow or orange.
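The proposed check can be sketched as below. The function name and signature are illustrative, not OSVVM's API; the point is only the ordering of the checks.

```python
def report_status(passed, failed, skipped, analyze_errors=0):
    """Hedged sketch of the proposed behavior: report SKIPPED instead of
    PASSED when no test cases ran at all."""
    if analyze_errors > 0 or failed > 0:
        return "FAILED"
    if passed == failed == skipped == 0:
        return "SKIPPED"  # nothing ran; could be rendered yellow/orange
    return "PASSED"

print(report_status(0, 0, 0))  # SKIPPED
print(report_status(5, 0, 0))  # PASSED
```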
Other status values might be: