So if we have 15 digits, can we calculate in seconds and report in petaseconds? It's a surprisingly useful unit. (1 Ps ago is the Oligocene; 10 Ps ago is the Carboniferous; 100 Ps ago is the Mesoarchean.)
Note that this is not standard, and 98% of geochronologists would strangle anyone who tried. But with 15 digits we're looking at reporting down to the nearest handful of seconds for the Paleozoic, and a minute or two for the Archean, so I figured I'd bring it up.
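For anyone wanting to sanity-check those figures, here is a minimal Java sketch (the year-length constant and sample ages are illustrative choices, not anything from the SQUID codebase) that uses `Math.ulp` to print the double-precision resolution floor at Paleozoic and Mesoarchean ages expressed in seconds:

```java
public class PetasecondResolution {
    public static void main(String[] args) {
        double secondsPerYear = 3.1557e7;             // ~Julian year, illustrative
        double paleozoic = 541e6 * secondsPerYear;    // ~1.71e16 s (base of the Cambrian)
        double mesoarchean = 3.2e9 * secondsPerYear;  // ~1.01e17 s

        // Math.ulp gives the spacing between adjacent doubles at that magnitude,
        // i.e. the finest resolution a double can represent there.
        System.out.println("Paleozoic resolution:   " + Math.ulp(paleozoic) + " s");   // 2.0 s
        System.out.println("Mesoarchean resolution: " + Math.ulp(mesoarchean) + " s"); // 16.0 s
    }
}
```

Note that a 15-significant-digit decimal printout is somewhat coarser than this binary floor: at ~1.7e16 s, the 15th decimal digit sits in the hundreds-of-seconds place.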
@cwmagee the 12-digit thing was an artificial device we implemented to try to minimise divergence between the double-precision arithmetic performed by Excel 2003 using VBA 6 (with Microsoft's unique version of double-precision calculation and presentation), and that performed by Squid3 using Java Math, which does actually follow the IEEE standard. In addition, Ken did things in the numerical perturbation routine (for generating isotope-ratio %errs, and also NU-switch expression %errs) which are conceptually brilliant but impossible to replicate outside VBA 6 - including, to our consternation, in Excel 2003 spreadsheets: it is possible to write an expression in VBA 6, write the same expression in the associated Excel 2003 worksheet, evaluate both, and get very slightly different answers, which is sad.
So there is a bunch of arithmetical noise which stops us from replicating SQUID 2.50 output perfectly in Squid3, even when all the "settings" are identical. Because most of the noise appears in the last 3 digits of the 15-digit decimal representations of the double-precision numbers, we implemented 12-digit rounding in both SQUID 2.50 and Squid3 - at the point where the isotopic-ratio calculations are complete, but before any of the U-Th-Pb processing starts - to "chop off" the divergence and generate a new set of identical numbers from which the U-Th-Pb calculations could be replicated perfectly. But none of that works as well as we (I) had hoped it might.
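For readers unfamiliar with that device, a minimal sketch of 12-significant-digit rounding in Java might look like the following (the class and method names are illustrative, not Squid3's actual API):

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public final class SigFigRounding {

    // Round a double to 12 significant decimal digits, passing through
    // zero and non-finite values unchanged.
    public static double roundTo12SigFigs(double value) {
        if (value == 0.0 || !Double.isFinite(value)) {
            return value;
        }
        return new BigDecimal(value)
                .round(new MathContext(12, RoundingMode.HALF_UP))
                .doubleValue();
    }

    public static void main(String[] args) {
        double noisy = 0.123456789012345678; // pretend the last digits are VBA/Java noise
        System.out.println(roundTo12SigFigs(noisy)); // 0.123456789012
    }
}
```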
So we are going to step back to the full 15-digit decimal representations everywhere (which requires both Jim and me to modify our code and regenerate our test files!), and tolerate the resulting noise. There are other limitations anyway, which we can't control: we have noticed that the GSC Tasks, which utilise an exponent/slope of 1.8 in the classical 2D zircon calibration, show noticeably greater divergence between SQUID 2.50 and Squid3 than the same calculations using an exponent/slope of 2. This probably reflects limitations in Microsoft's handling of exponents that are not easily or precisely represented as binary numbers, and there's nothing we can do about it.

All we can do (and have done) is look for systematic offsets between SQUID 2.50 and Squid3 outputs as a means of finding bugs in the code (on both sides); having eliminated those, we will need users to take the rest on trust. That is, once we release the "final" bug-fixed version of SQUID 2.50 as a basis for Squid3 comparisons - because, as Andrew @cross-a has reminded me over the last couple of weeks, the 'normal' versions of SQUID 2.50 in current use in the community are unsuitable for that purpose: too many bugs!
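That representability point is easy to demonstrate. A small sketch (the operand 0.85 is just an illustrative value, not a real calibration ratio):

```java
import java.math.BigDecimal;

public class ExponentRepresentability {
    public static void main(String[] args) {
        // 2.0 is exactly representable in binary; 1.8 is not, so any x^1.8
        // starts from a slightly perturbed exponent and then goes through a
        // transcendental pow(), where implementations may differ in the last ULP.
        System.out.println(new BigDecimal(2.0)); // 2
        System.out.println(new BigDecimal(1.8)); // 1.80000000000000004440892098500626...

        double x = 0.85; // illustrative operand
        System.out.printf("x^2   = %.17g%n", x * x);            // one multiply, one rounding step
        System.out.printf("x^1.8 = %.17g%n", Math.pow(x, 1.8)); // transcendental evaluation
    }
}
```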
A working "extrapolate to midpoint" version of SQUID 2.5 would be really handy for investigating very low excess scatter data, as the individual spot errors should be lower.
Completed as of v1.5.9, to be released soon.
@sbodorkos and @bowring have decided it is time. This also involves switching to Java's strict math library (https://docs.oracle.com/javase/8/docs/api/java/lang/StrictMath.html). Any objections?
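For anyone unfamiliar with the distinction: `Math` is permitted to use faster platform-specific implementations, while `StrictMath` pins every method to the fdlibm reference algorithms, so results are bit-identical across JVMs and platforms. A quick check (the operands are arbitrary; on many platforms the two calls will agree exactly):

```java
public class MathVsStrictMath {
    public static void main(String[] args) {
        double base = 206.0 / 238.0;           // arbitrary ratio-flavoured operand
        double m = Math.pow(base, 1.8);        // may use platform intrinsics
        double s = StrictMath.pow(base, 1.8);  // fdlibm, reproducible everywhere
        System.out.printf("Math.pow:       %.17g%n", m);
        System.out.printf("StrictMath.pow: %.17g%n", s);
        System.out.println("bit-identical?  "
                + (Double.doubleToLongBits(m) == Double.doubleToLongBits(s)));
    }
}
```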