When comparing performance, it is also important to know whether the compared functions produce the same results, i.e. actually work properly.
Example:
Original, without checks: http://jsperf.com/check-isarray/2
Modified, with a penalty for failing tests: http://jsperf.com/check-isarray/3
In this example I added a penalty for failing tests, but it would be better to be able to tag a test as failing and still get its performance numbers. Tagging a test as failing could mean testing the value of a variable in the teardown after a single run, or assigning that variable and testing it in the teardown itself.
The example has, for instance, the following test case:
arr instanceof Array;
The teardown could implement
result=
which in the example would become:
result = arr instanceof Array;
The teardown would then also contain the user-written code:
if (!result) testsuccess = false;
The value of testsuccess could then be used to tag the test case as failing.
The actual implementation may well differ; the suggestion above is mainly meant to show more precisely what I mean.
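To make the idea concrete, here is a minimal sketch in plain JavaScript. All names here (testCase, teardown, testsuccess) are illustrative only, not part of any existing jsPerf or Benchmark.js API:

```javascript
// Flag that the harness could read after the run to tag the
// test case as failing (hypothetical name).
let testsuccess = true;

// The test case body, e.g. the isArray check from the example.
function testCase() {
  const arr = [];
  return arr instanceof Array;
}

// Teardown: run the test case once, capture its result, and
// flag failure without aborting the timing run.
function teardown() {
  const result = testCase();
  if (!result) testsuccess = false;
}

teardown();
// A failing test case would leave testsuccess === false, so the
// harness could mark the case as failing while still reporting
// its measured performance.
```

The point of the sketch is only the control flow: the correctness check lives in the teardown and sets a flag, rather than aborting or penalizing the benchmark itself.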