[Open] marmidr opened this issue 8 years ago
[removed my comments]
Partly this is due to the behavior of the "%.*g" format specifier.
The precision used in actual comparisons is 7, because a precision as low as 2 would be needed to print "0.1" and so on.
I agree that it would be nice if the precision could be tied to the threshold used - any idea how to implement that?
One thing that could be done is to have a macro that allows for specifying the precision manually.
Finally, the reason we chose the value of 7 is that some tests would fail on 16-bit systems with a precision of 10 (which is what it used to be). We could raise it again if we found a way to make sure those tests don't break.
Can you take a look at this? https://github.com/mmidor/cpputest/commit/15ff4a8b297db84ea3316baa01947a76c5c6b672 It solves the problem in an easy way, but knowing you, you will not like it ;)
Does it pass on Travis CI?
I created a new pull request - tests passed.
Let's see what @basvodde has to say ;-). If the tests pass, then it's fine with me.
We'll definitely want to see tests for the new behavior, though. Testing for prec == 49 sounds reasonable but would definitely fail on 16-bit...
Mmm, right, the maximum double precision may depend on the platform. I'm not an expert in this area, any clues? I'll fix this according to your suggestions.
I don't know enough about this myself. There's probably little point in testing values between 0 and 7. Also, I have no idea what happens once values get displayed as 1.234e+3. I guess your code would just work fine, but I'm not sure about how best to test it.
That is the stupid thing. It should work fine on all platforms and under all conditions, as far as I can tell.
Writing platform-independent tests is the tricky part. Perhaps just write the tests as you normally would; maybe they work everywhere but DOS, and then we could ignore those tests on DOS...
Or, if we set the initial value of prec to 1, then your code would kick in immediately and could be reasonably tested with precisions below 7.
This approach will be better:
https://github.com/mmidor/cpputest/commit/d5d567b20e9b5396e8cf4ee0a34026af538e861b
I like (sexp == sact)
:kiss:
This code looks very good. I tried it -- but found that %.*g is a tricky animal :-(
I know this is old, but... the thread talks about a PR and I cannot find that PR?
Hi, today I was running my tests and saw something like this:
```
Failure in TEST(tracker, kalman_cov)
expected <-11199.1>
but was <-11199.1> threshold used was <0.001>
```
Interesting, isn't it? :) After a while I found the guilty code:
```cpp
SimpleString StringFrom(double value, int precision = 6);
```
I'm surprised that the printed output precision is independent of the given comparison threshold. Let's say I compare numbers like 123456.7 or bigger with a threshold of 0.1; then I will not see why the test is failing.