Open · TysonRayJones opened this issue 4 years ago
Sounds like a good idea to me. The new keyword however is not necessary imho; it could be something like REQUIRE_INTERNAL. But then again, it could be simulated already with a plain if and FAIL in case the condition was false... that could even be wrapped in your DEMAND, I suppose.
It's not immediately obvious to me how to do this well myself though. You're right that I can just do:
#define DEMAND( cond ) if (!(cond)) FAIL( );
but this has two problems; for one, the failure report doesn't show which condition was false. It looks like I can't fix that by merely inserting INFO( cond ) into the macro, which won't compile.

On another note, do you have any idea why this seemingly more idiomatic way fails?
#define DEMAND( cond ) { \
CHECKED_ELSE( cond ) { \
FAIL( ); \
} \
}
It compiles fine and without any warnings, but when run, strangely outputs (for tests where cond is always true!):
:4545469632: FAILED:
then freezes.
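As an aside, here is a minimal sketch of the plain if/FAIL approach which also reports which condition failed, assuming only Catch2's existing FAIL macro (which accepts a message): the preprocessor stringises the condition into the failure message, and the do/while wrapper avoids the dangling-else pitfall of the bare if. Note that FAIL aborts only the current test case, not the whole run.

// sketch only: DEMAND is a made-up name, not an existing Catch2 macro
#define DEMAND( cond ) \
    do { \
        if (!(cond)) \
            FAIL( "DEMAND( " #cond " ) failed" ); \
    } while (0)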
Are there any more thoughts on this? It remains exceptionally useful to my use-case.
Sometimes unit tests need to perform their own calculations to compare against those of the tested code. For example, in my use-case, I check that an optimised simulator is producing the same results as a slow direct mathematical evaluation.
This often requires some quick checks of the internal testing code itself, verifying the pre- and post-conditions of the helper functions, to help distinguish a bad test from an error in the tested code. This is especially helpful for external contributors writing their own unit tests which use the helper functions.
Right now, I use REQUIRE both in the testing and in checking the integrity of the internal code. Here's an example: a unit-test which performs its own matrix calculations in order to replicate a tested code's result (which is computed in an entirely different way), and utilises internal testing code; a simplified sketch follows below.

This means though that, in the ultimately performed unit-test, the checking of the pre- and post-conditions of the helper functions is included in the statistics of the unit-tests (the total number of assertions passed).
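The author's original example is not reproduced above; the following is a deliberately simplified, self-contained sketch of the same shape (Catch2 v2 single header assumed, with a separate CATCH_CONFIG_MAIN translation unit; applyMatrixFast and applyMatrixDirect are made-up stand-ins, not QuEST or Catch2 identifiers). The first two REQUIREs are purely internal integrity checks of the test's own helper, yet they are counted in the same assertion statistics as the final, "real" assertion.

#include "catch.hpp"   // Catch2 v2 single header; adjust the include for your setup
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;
using Vector = std::vector<double>;

// stand-in for the optimised tested code
static Vector applyMatrixFast( const Matrix& m, const Vector& v ) {
    Vector out( m.size(), 0.0 );
    for ( std::size_t r = 0; r < m.size(); r++ )
        for ( std::size_t c = 0; c < v.size(); c++ )
            out[r] += m[r][c] * v[c];
    return out;
}

// stand-in for the test's own slow, direct reference calculation
static Vector applyMatrixDirect( const Matrix& m, const Vector& v ) {
    Vector out;
    for ( const auto& row : m ) {
        double sum = 0;
        for ( std::size_t c = 0; c < v.size(); c++ )
            sum += row[c] * v[c];
        out.push_back( sum );
    }
    return out;
}

TEST_CASE( "optimised result matches direct evaluation" ) {
    Matrix m = { {0, 1}, {1, 0} };
    Vector v = { 3, 4 };

    // internal checks of the helper's pre-conditions: currently plain REQUIREs,
    // so they inflate the run's count of passed assertions
    REQUIRE( m.size() == v.size() );
    REQUIRE( m[0].size() == v.size() );

    // the assertion the test is actually about
    REQUIRE( applyMatrixFast( m, v ) == applyMatrixDirect( m, v ) );
}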
Instead, it would be great if there were a separate "meta" macro which asserts conditions the test-author believes are tautological, and which, when failed (indicating bad unit-testing), stops all testing.
E.g. something like:
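As a hypothetical illustration continuing the simplified sketch above (DEMAND is a made-up name, not an existing Catch2 macro), the internal checks would become meta-assertions that are excluded from the test statistics and abort everything if they ever fail:

// hypothetical meta-assertions: not counted as test assertions, and a
// failure would mean the test itself is broken, so stop all testing
DEMAND( m.size() == v.size() );
DEMAND( m[0].size() == v.size() );

// ordinary assertion about the tested code, counted as usual
REQUIRE( applyMatrixFast( m, v ) == applyMatrixDirect( m, v ) );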
Is there currently any way to do this, or at least to separate the statistics of the 'internal' REQUIREs from the 'testing' REQUIREs? Otherwise, is there a way to write custom macros and 'hook' them into Catch's control-flow (rather than just having them force-exit)?