th-otto opened 2 years ago
Some more info: long ago someone already managed to decode the format:
http://www.tho-otto.de/hypview/hypview.cgi?url=%2Fhyp%2Fgfabasic.hyp&charset=UTF-8&index=24
So there are actually only 48 bits of mantissa, and a 16-bit exponent which also carries the sign (as mentioned above). That is a format that cannot be represented in IEEE 754 (the exponent can have values up to 999 decimal).
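For illustration, the layout described on that page could be written as a C struct like this (the struct and field names are mine, not from the documentation):

```c
#include <stdint.h>

/* Byte layout of an 8-byte GFA-BASIC double as described above:
   6 mantissa bytes followed by a 16-bit exponent word that also
   carries the sign. Names are made up for illustration only. */
struct gfa_double {
    uint8_t mantissa[6]; /* 48-bit mantissa, most significant byte first */
    uint8_t expword[2];  /* 16-bit exponent word, big-endian, carries the sign */
};
```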
When specifying something like
in the source, gfalist will print the number as zero.
Using some debug printf, I found the representation of such a number is (as raw binary data)
800000000000fbf6
and after conversion by dgfabintoieee
3f6000000000001f
That looks strange, because dgfabintoieee does not use the high bit of the first byte (which nevertheless seems to be correct, judging from the printed decimal value).
Then I looked at what the compiler generates when a double is assigned to an int. The function looks like this:
A0 on entry points to the original raw bytes. So it seems that the high bit of byte 6 is used as the sign (the exponent word is loaded into the low 16 bits of d2).
Then I wrote this function (operating on the original raw input), which is just a conversion of the above assembler code:
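That function is not reproduced here, but a minimal C sketch of such a conversion could look like the following. The name gfatoint, the IEEE-like 1023 exponent bias, and truncation toward zero are all my assumptions rather than confirmed details of the compiler code (GFA may actually round):

```c
#include <stdint.h>

/* Hypothetical reconstruction: convert the 8 raw big-endian bytes of
   a GFA-BASIC double to a 32-bit integer, following the logic
   described above. */
static int32_t gfatoint(const uint8_t *p)
{
    uint16_t expword = ((uint16_t)p[6] << 8) | p[7]; /* like "move.w 6(a0),d2" */
    int      neg     = (expword & 0x8000) != 0;      /* sign: high bit of byte 6 */
    int      exp     = expword & 0x07ff;             /* 11-bit exponent (assumed) */
    uint64_t mant;
    int      shift;

    /* 48 mantissa bits from bytes 0..5; bit 47 is the explicit leading 1 */
    mant = ((uint64_t)p[0] << 40) | ((uint64_t)p[1] << 32) |
           ((uint64_t)p[2] << 24) | ((uint64_t)p[3] << 16) |
           ((uint64_t)p[4] << 8)  |  (uint64_t)p[5];

    if (mant == 0)
        return 0;                 /* zero is stored as all-zero bits (assumed) */

    shift = exp - 1023;           /* assumed IEEE-like bias */
    if (shift < 0)
        return 0;                 /* |value| < 1 truncates to 0 */

    /* the mantissa has 47 fraction bits below the explicit leading 1 */
    if (shift <= 47)
        mant >>= (47 - shift);
    else
        mant <<= (shift - 47);    /* overflow is not handled in this sketch */

    return neg ? (int32_t)-(int64_t)mant : (int32_t)mant;
}
```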
And I use it like this later:
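The actual call site is also missing here; a hypothetical use, pasted together with the gfatoint sketch above, might look like this (the byte values encode 3 under the assumed layout, so the expected output only holds if those assumptions are right):

```c
#include <stdio.h>
#include <stdint.h>

/* ... gfatoint() from the sketch above ... */

int main(void)
{
    /* hypothetical encoding of the value 3: mantissa 1.1 (binary),
       exponent 1 with the assumed 1023 bias */
    static const uint8_t raw[8] = {
        0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00
    };

    printf("%d\n", (int)gfatoint(raw)); /* prints 3 if the assumptions hold */
    return 0;
}
```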
And that seems to work properly ;)
It also looks like dgfabintoieee (which is now only used when converting to decimal or to real floats) has to be slightly adjusted. If I understand the code correctly, the actual format of doubles in GFA is: 48 bits of mantissa, 1 sign bit, another 4 bits of mantissa, and 11 bits of exponent (sign + 4 + 11 bits filling the 16-bit exponent word).
But this is currently wrong only for the least significant 4 bits of the mantissa.
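Under that reading, a sketch of the adjusted conversion might look like the following. The function name and the uint64_t-in/uint64_t-out signature are mine, and the exponent field is taken over unchanged, which matches the bytes shown earlier but may gloss over bias handling:

```c
#include <stdint.h>

/* Sketch of the adjusted conversion, assuming this layout for the
   8 raw bytes read as one big-endian 64-bit word:
     bits 63..16  upper 48 mantissa bits (bit 63 = explicit leading 1)
     bit  15      sign
     bits 14..11  least significant 4 mantissa bits
     bits 10..0   11-bit exponent (used unchanged as the IEEE field) */
static uint64_t dgfabintoieee_adjusted(uint64_t in)
{
    uint64_t sign = (in >> 15) & 1;
    uint64_t exp  = in & 0x7ff;
    /* drop the explicit leading 1, keep the 47 remaining upper fraction
       bits, and append the 4 low mantissa bits; IEEE bit 0 stays 0 */
    uint64_t frac = (((in >> 16) & 0x7fffffffffffULL) << 5)
                  | (((in >> 11) & 0xf) << 1);

    if ((in & 0xffffffffffff0000ULL) == 0)
        return sign << 63;  /* all-zero upper mantissa: treat as +/-0.0 (assumed) */

    /* for the example above: 0x800000000000fbf6 -> 0xbf6000000000001e,
       i.e. the buggy 0x3f6000000000001f with the sign applied and the
       low mantissa bits fixed */
    return (sign << 63) | (exp << 52) | frac;
}
```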
PS: sorry for not providing a patch, but I had already reformatted the source.