AstroChara closed this issue 10 months ago.
There are two main floating-point types used by Celestia: `float` (32-bit) and `double` (64-bit). Floats hold roughly 8 significant decimal digits, doubles roughly 16. We read values from ssc files as `double`, so it's better to use 16 digits.
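For reference, a minimal sketch (not Celestia code) that queries the exact guarantees from `<limits>`; the standard values are 6/9 for `float` and 15/17 for `double`, which is where the rough 8 and 16 come from:

```cpp
#include <cstdio>
#include <limits>

int main()
{
    // digits10: decimal digits guaranteed to survive storage in the type.
    // max_digits10: digits needed to write the value so it parses back exactly.
    std::printf("float : digits10 = %d, max_digits10 = %d\n",
                std::numeric_limits<float>::digits10,
                std::numeric_limits<float>::max_digits10);   // prints 6 and 9
    std::printf("double: digits10 = %d, max_digits10 = %d\n",
                std::numeric_limits<double>::digits10,
                std::numeric_limits<double>::max_digits10);  // prints 15 and 17
    return 0;
}
```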
@ajtribick please correct me if I'm wrong.
The values in `max_digits10` are the number of digits required to survive a float → string → float round trip (9 for `float`, 17 for `double`). However, that's not the relevant case here. The maximum number of decimal digits that can affect the result when parsing a decimal string to the nearest floating-point value is 112 for `float` and 767 for `double` (see the discussion in https://github.com/serde-rs/json/issues/536).
Those limits are much larger than the number of significant digits in any realistic source, so there's not much point in rounding the values.
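As a quick sketch of that round-trip property (just an illustration, not Celestia code): printing a `double` with `max_digits10` significant digits and parsing the string back yields the identical value:

```cpp
#include <cstdio>
#include <cstdlib>
#include <limits>

int main()
{
    double original = 0.30000000000000004;   // arbitrary double with a long decimal form
    char text[64];

    // 17 significant digits (max_digits10 for double) always suffice to round-trip
    std::snprintf(text, sizeof(text), "%.*g",
                  std::numeric_limits<double>::max_digits10, original);
    double parsed = std::strtod(text, nullptr);

    std::printf("%s round-trips exactly: %s\n",
                text, parsed == original ? "yes" : "no");   // prints "yes"
    return 0;
}
```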
If, on the other hand, the number is the result of post-processing done at double precision, then using 17 digits (or the shortest unique representation, if whatever system is being used can produce that) makes sense.
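For illustration, a small sketch assuming a toolchain with C++17 floating-point `std::to_chars` support (not part of Celestia): `%.17g` always emits 17 significant digits, while `std::to_chars` with no precision argument emits the shortest string that still parses back to the same `double`:

```cpp
#include <charconv>
#include <cstdio>

int main()
{
    double value = 1.0 / 3.0;   // a value produced by post-processing at double precision

    char fixed[64];
    std::snprintf(fixed, sizeof(fixed), "%.17g", value);   // always 17 significant digits

    char shortest[64];
    // std::to_chars without a precision picks the shortest exact round-trip form
    auto result = std::to_chars(shortest, shortest + sizeof(shortest) - 1, value);
    *result.ptr = '\0';

    std::printf("17 digits: %s\n", fixed);     // 0.33333333333333331
    std::printf("shortest : %s\n", shortest);  // 0.3333333333333333
    return 0;
}
```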
@JiliTheSpaceboy use 17.
While working on adding new objects (2003 AZ84 and 2013 FY27) to the defaults, a discussion on significant digits came up. Data from the NASA HORIZONS system gives 16 significant digits, but Celestia only uses 8 decimal digits, which makes me wonder whether we should really keep up to eight extra sigfigs around when they are not going to be used.
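To make the precision question concrete, a small sketch using a hypothetical 16-significant-digit HORIZONS-style value (not taken from the actual data): if such a value ends up stored in a 32-bit `float`, only about the first 7 significant digits survive anyway:

```cpp
#include <cstdio>

int main()
{
    // Hypothetical 16-significant-digit value in the style of HORIZONS output
    double fromCatalog = 39.48211675411087;
    // If the value is ultimately kept in single precision (the "8 decimal digits" above)
    float stored = static_cast<float>(fromCatalog);

    std::printf("double: %.16g\n", fromCatalog);
    // Only roughly the first 7 significant digits match the double above
    std::printf("float : %.16g\n", static_cast<double>(stored));
    return 0;
}
```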