mwu-ponyai opened this issue 4 years ago
This is unfortunately the great tragedy of building a hermetic configuration language based around floating point, which differs from system to system.
Different math libraries have different algorithms with different performance / accuracy characteristics. Also, different CPUs have different sized registers for floating point (or can be configured via process-level flags that change the floating point representation). IEEE only defines the format for the number itself, not the computation or the format of the intermediate values.
IEEE only defines the format for the number itself, not the computation or the format of the intermediate values.
I don't think that's correct. A big chunk of the standard describes various operations and the expected results. Chapter 5, "Operations", starts as follows:
All conforming implementations of this standard shall provide the operations listed in this clause for all supported arithmetic formats, except as stated below. Unless otherwise specified, each of the computational operations specified by this standard that returns a numeric result shall be performed as if it first produced an intermediate result correct to infinite precision and with unbounded range, and then rounded that intermediate result, if necessary, to fit in the destination’s format (see Clause 4 and Clause 7). Clause 6 augments the following specifications to cover ±0, ±∞, and NaN. Clause 7 describes default exception handling.
So, while some operations we have might not be mentioned by IEEE-754, I expect simple ones like sin, cos, *, + to have precise expected results. Of course the underlying implementations may still be non-conformant.
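To make that concrete, here is a small standalone C sketch (not taken from either jsonnet implementation): the basic operations and sqrt are required by IEEE-754 to be correctly rounded, so their results should be bit-identical on any conforming system, whereas cos comes from the math library; printing with %a shows the exact bits so outputs can be compared across systems.

/* Standalone illustration, not jsonnet code.
 * The basic IEEE-754 operations (+, -, *, /) and sqrt must be correctly
 * rounded, so these values should be bit-identical on conforming systems.
 * cos is a library function whose accuracy depends on the libm in use. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double third = 1.0 / 3.0;               /* required operation: one correct answer */
    double root2 = sqrt(2.0);               /* sqrt is also a required, correctly rounded operation */
    double c     = cos(2.9452431127404308); /* the value that differs in this thread */
    printf("1/3      = %a\n", third);       /* %a prints the exact hex representation */
    printf("sqrt(2)  = %a\n", root2);
    printf("cos(...) = %a\n", c);
    return 0;
}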
In the example provided by @mwu-ponyai, there seems to be an extra digit in the C++ results. I wonder why that is.
It looks like it's the cos function that's causing the difference:
# go-jsonnet
$ ./jsonnet -e "std.cos(2.9452431127404308)"
-0.98078528040323032
# c++ jsonnet
$ ./jsonnet -e "std.cos(2.9452431127404308)"
-0.98078528040323043
cos is one of the recommended operations that should be correctly rounded, assuming the Wikipedia article on IEEE-754 is accurate.
In the example provided by @mwu-ponyai, there seems to be an extra digit in the C++ results. I wonder why that is.
I'm guessing the float-to-string conversion code being used there isn't giving us the minimum number of digits. Node trims that extra digit off. Somewhat surprising, since I assumed everyone just copied dtoa.c.
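One quick way to separate a formatting difference from a genuine value difference is to parse both printed strings back with strtod and compare the raw bit patterns; a throwaway C sketch (the two literals are the cos outputs quoted above):

/* Throwaway check: do two printed decimal strings denote the same double
 * (a float-to-string difference) or two different doubles (a math difference)? */
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static uint64_t bits(double d) {
    uint64_t u;
    memcpy(&u, &d, sizeof u);  /* reinterpret the double as its raw 64 bits */
    return u;
}

int main(void) {
    double go_val  = strtod("-0.98078528040323032", NULL);  /* go-jsonnet output above */
    double cpp_val = strtod("-0.98078528040323043", NULL);  /* C++ jsonnet output above */
    printf("go : %016" PRIx64 "\n", bits(go_val));
    printf("c++: %016" PRIx64 "\n", bits(cpp_val));
    printf("%s\n", bits(go_val) == bits(cpp_val)
                       ? "same double: purely a formatting difference"
                       : "different doubles: the math itself differs");
    return 0;
}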
Ok yeah it's more complicated than what I said. Some thoughts:
Intel 80-bit vs 64-bit is not actually an issue since we write all intermediate values to RAM. It is only an issue if you have e.g. double x = 1.0/3; assert(x == 1.0/3), which can be false with certain compilers.
-ffast-math also isn't an issue.
However the IEEE operations don't even include division, I think? Certainly none of the complex ones based on Taylor expansions or whatever, since those are not just about representation but about how you explore the space to home in on the answer, what lookup tables you use to accelerate it, and so on. Cos would fall into that category.
It may be that Go and C++ would be compatible on the same machine if Go uses the native math library. But I would expect C++ on e.g. Intel / Linux to be different from C++ on something more esoteric like AIX / POWER.
It's worth fixing specific issues to minimise the damage, but general compatibility is probably not possible.
Ok now I'm thinking I read it wrong and the standard actually does require everyone's cos() to behave the same. However I have definitely observed divergences between systems before!
Intel 80-bit vs 64-bit is not actually an issue since we write all intermediate values to RAM. It is only an issue if you have e.g. double x = 1.0/3; assert(x == 1.0/3), which can be false with certain compilers.
On x86_64 you can assume SSE2, so compilers usually don't generate x87 code unless it's requested.
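For reference, a minimal C sketch of the excess-precision hazard mentioned above (hypothetical, not jsonnet code). With SSE2 math, the default on x86_64, the assert holds; built for x87 (e.g. gcc -m32 -mfpmath=387), the right-hand division may be evaluated in 80-bit registers while x has already been rounded to 64 bits, so it can fail:

/* Illustration of x87 excess precision; behaviour depends on how this is compiled. */
#include <assert.h>
#include <float.h>
#include <stdio.h>

int main(void) {
    volatile double y = 1.0, z = 3.0;  /* volatile prevents constant folding */
    double x = y / z;                  /* stored to memory as a 64-bit double */
    printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);  /* 2 means x87-style extended evaluation */
    assert(x == y / z);                /* can fail when intermediates keep 80-bit precision */
    return 0;
}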
However the IEEE operations don't even include division, I think? Certainly none of the complex ones based on Taylor expansions or whatever, since those are not just about representation but about how you explore the space to home in on the answer, what lookup tables you use to accelerate it, and so on. Cos would fall into that category.
Your impression of the IEEE-754 standard is not wrong based on the 1985 version, but the standard has been updated to include functions that should be correctly rounded. See https://books.google.com/books?id=eNaCDQAAQBAJ&pg=PA220#v=onepage&q&f=false
In general the algorithms and software available for implementing elementary functions are quite good these days, and IEEE-754 has been updated to reflect the set of functions that are sufficiently solved such that correct rounding should be expected.
Interesting, thanks. I remember we used to use -mfpmath=sse to force the issue on x87.
I've found some cases where go-jsonnet doesn't match jsonnet.
C++ jsonnet:
go-jsonnet: