Open oprogramador opened 2 years ago
So yes, this does seem like somewhat erroneous behaviour, until we look more closely.
I feel like this is an excellent demonstration of the need for chai-roughly in the first place. We often forget that floating point numbers are only approximations, which is why it's generally a bad idea to compare two floating point numbers with a == b; rather, we should ensure their difference is below some tolerance level: |a - b| <= tolerance.
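As a minimal sketch of that idea in plain JavaScript (not chai-roughly's actual implementation; approxEqual is just an illustrative name):

```js
// Tolerance-based comparison instead of exact equality.
const approxEqual = (a, b, tolerance) => Math.abs(a - b) <= tolerance;

console.log(0.1 + 0.2 === 0.3);                 // false - exact comparison fails
console.log(approxEqual(0.1 + 0.2, 0.3, 1e-9)); // true - the difference is within tolerance
```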
Anyway, long story short, when calculating the difference between the floats 300.1 and 300 you get:
$ node
> 300.1 - 300
0.10000000000002274
but for 123.1 and 123 you get:
$ node
> 123.1 - 123
0.09999999999999432
So it all comes down to whether the floating point approximation of the difference falls below or above the tolerance. But the test, in my opinion, is bad to begin with, because we are effectively expecting (a - b) == 0.1, which is the exact thing not to do with floating point numbers. 😄
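To spell out what the failing assertion effectively boils down to, here is the same comparison done directly in node:

```js
console.log(300.1 - 300 === 0.1); // false - the difference overshoots 0.1 slightly
console.log(123.1 - 123 === 0.1); // false too, but here it undershoots, so a "<= tolerance" check still passes
```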
In my opinion, this is working as intended 🤔
That was my guess.
However:
> 300.1 - 0.1
300
Yes, sometimes you get lucky; depending on the operands of the floating point calculation you may get "nice" results like those.
Also, that re-arranged calculation does not really lend itself to a comparison against the tolerance. I guess one could do something like figuring out the bigger of the two operands, subtracting the tolerance from it, and checking whether the result is less than or equal to the lesser operand, so in this case (300.1 - 0.1) <= 300, which would be true. In general terms it would be (larger - tolerance) <= smaller.
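A quick sketch of that rearranged check (purely illustrative; the function name is made up):

```js
// Rearranged check: subtract the tolerance from the larger operand first.
function withinToleranceRearranged(a, b, tolerance) {
  const larger = Math.max(a, b);
  const smaller = Math.min(a, b);
  return (larger - tolerance) <= smaller;
}

// true - as shown above, 300.1 - 0.1 happens to come out as exactly 300 here
console.log(withinToleranceRearranged(300.1, 300, 0.1));
```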
But all we are doing is skewing the calculations in favor of the "nice" examples we have found, which we would like to pass despite our test effectively trying to assert a == b, which we shouldn't do 😅
We will likely be able to find combinations of numbers and tolerance values where this approach leans the other way and generates a failure, i.e. where larger - tolerance produces a floating point value that is still greater than the other operand.
Or, since we obtain 0.10000000000002274 instead of 0.1, maybe the effective tolerance should be a bit larger than the tolerance provided as the argument? Possibly with some additional option, maybe a global one (set e.g. in chai.use) to avoid having to pass it in every invocation. The tolerance could be multiplied by 1.00000000001 or something like that. We need to figure out the maximum possible error in JS (which might be the same as in Python and other languages, possibly depending on the CPU).
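One possible shape of that padding, as a hedged sketch: instead of a fixed multiplier, we could scale by Number.EPSILON (the gap between 1 and the next representable double), times the magnitude of the operands. The function name and the padding formula are assumptions for illustration only:

```js
// Pad the tolerance by (roughly) the worst-case rounding error of the
// operands, which scales with their magnitude: Number.EPSILON * max(|a|, |b|).
function roughlyEqual(a, b, tolerance) {
  const padding = Number.EPSILON * Math.max(Math.abs(a), Math.abs(b));
  return Math.abs(a - b) <= tolerance + padding;
}

console.log(roughlyEqual(300.1, 300, 0.1)); // true - 0.10000000000002274 <= 0.1 + ~6.7e-14
console.log(roughlyEqual(123.1, 123, 0.1)); // true
```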
The error would depend on the sizes of the two floating point numbers being used, i.e. the larger the number, the higher the value of the exponent part, meaning more of the mantissa is dedicated to the non-fractional part of the number, whereas smaller numbers have a lower exponent, meaning more of the mantissa is available for the fractional part. In general, precision should be lower when calculating with two numbers of very different exponents; they have less "overlap", so to say.
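A quick way to see that in node, without asserting any particular output values:

```js
// The larger x gets, the fewer mantissa bits remain for the fractional part,
// so (x + 0.1) - x tends to drift further away from 0.1.
for (const x of [1, 1e3, 1e6, 1e9, 1e12]) {
  const diff = (x + 0.1) - x;
  console.log(x, diff, Math.abs(diff - 0.1));
}
```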
I think it would be a disservice to try and hide these facts of working with floating point numbers by trying to be "smart"; it's something a developer ought to be comfortable with and, to some degree, understand and/or expect. Too often we (as developers) don't consider the inaccuracies of floating point numbers 🙈
For reference: https://www.doc.ic.ac.uk/~eedwards/compsys/float/
Note how, at the end, the mantissa is rounded off when it doesn't fit, which becomes more likely the more the two floating point numbers' exponent parts differ. This makes sense if we think about it as a floating decimal point within a fixed-width integer (which is kind of the same, but not entirely): say we allowed 5 digits and could put the decimal point anywhere; if we tried adding 1000.5 and 0.00001 we would get 1000.50001, but we would have to round it to 1000.5 because we only allow 5 digits of precision, and we always have to prioritize the non-fractional part. This is of course a bit simplified, but it demonstrates the weakness of floating point numbers in that very different exponents lead to larger inaccuracies in the fractional part of the number.
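The same effect is easy to reproduce with binary doubles in node:

```js
// Once the exponents differ enough, the smaller addend falls off the end of
// the 53-bit mantissa and is rounded away entirely.
console.log(1e16 + 1 === 1e16);   // true - the +1 is lost to rounding
console.log(0.5 + 1e-17 === 0.5); // true - 1e-17 is too small relative to 0.5
```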
Possibly, we could convert the floats into BigInt or something like that to fix the precision errors, but I'm not sure whether it's worth it.
True, that would require mapping decimal numbers onto an integer value by storing the exponent separately, probably as another BigInt, and doing much the same arithmetic operations on the exponent and mantissa as floating point numbers do, but without rounding, since we can use arbitrarily large BigInts to represent the two components. We could also look for existing implementations of something akin to BigDecimal.
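A toy sketch of the idea, purely illustrative and not an existing chai-roughly feature, assuming plain decimal-string inputs and a fixed decimal scale:

```js
// Represent each decimal as a scaled BigInt (value * 10^scale), so the
// subtraction and the comparison against the tolerance are exact.
function toScaled(str, scale) {
  const [int, frac = ''] = str.split('.');
  return BigInt(int + frac.padEnd(scale, '0').slice(0, scale));
}

function roughlyEqualDecimal(a, b, tolerance, scale = 20) {
  const diff = toScaled(a, scale) - toScaled(b, scale);
  const abs = diff < 0n ? -diff : diff;
  return abs <= toScaled(tolerance, scale);
}

console.log(roughlyEqualDecimal('300.1', '300', '0.1')); // true, with no rounding involved
```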
It does smell like killing flies with nuclear missiles though, just to handle the edge cases where the tolerance is too close to the difference between the two floating point numbers 😄
In actual use, you wouldn't write test cases like expect(300.1).to.roughly(0.1).equal(300); it's kind of a synthetic test, to be honest.