Closed Reptorian1125 closed 4 months ago
The math evaluator stores numbers as IEEE doubles, which have a 52-bit significand. So no, you should not be able to define a binary literal with more than 52 digits without losing precision.
That's the thing though: that number is 2^31-1, which needs only 31 bits, much less than 52. So, what is wrong here?
Ah, I guess I know why.
I'm using `std::strtol()` for the binary-to-integer conversion, and this function returns a `long int`.
On Windows, the bad news is that `long int` is actually only 32 bits (it's a shame), while it is usually 64 bits on Linux.
Here, on Ubuntu:

```
$ gmic e "{0b{\`vector52(_'1')\`}}"
[gmic]./ Start G'MIC interpreter (v.3.3.6).
4503599627370495
[gmic]./ End G'MIC interpreter.
```
I think I may use `std::strtoll()` in the future, which returns a `long long` (64 bits even on Windows).
I got the same result as you with the changes, thank you so much for fixing it. Now my `rep_bin2dec` will work on Windows, where this bug appeared before your changes.
I'm sure that higher numbers can be supported, but apparently there is a limit: this number shows up: 2147483647.
Is there a remedy for this? At least up to the largest integer exactly representable as a double.
And also, do hexadecimal literals suffer from this too?