CIRDLES / Squid

Squid3 is being developed by the Cyber Infrastructure Research and Development Lab for the Earth Sciences (CIRDLES.org) at the College of Charleston, Charleston, SC, and Geoscience Australia, as a re-implementation in Java of Ken Ludwig's Squid 2.5. Please contribute your expertise!
http://cirdles.org/projects/squid/
Apache License 2.0

Comparison of Custom Ln(208/232) and NU-switched Ln(Pb/Th) #696

Closed AllenKennedy closed 2 years ago

AllenKennedy commented 2 years ago

Can we provide an explanation of the difference for a user?

Screen Shot 2022-03-29 at 11.48.37 am.zip
Screen Shot 2022-03-29 at 11.48.50 am.zip
Screen Shot 2022-03-29 at 11.49.07 am.zip

AllenKennedy commented 2 years ago

I had expected both axes to be identical, and the regression to be a 45-degree line?

AllenKennedy commented 2 years ago

Here is the .squid file

XENOF36A18210323 16pk Dy, Ho, Yb.squid.zip

AllenKennedy commented 2 years ago

An odd regression. Is there a strange point located outside the plot window that is skewing the regression?

Screen Shot 2022-03-29 at 12.34.47 pm.zip

sbodorkos commented 2 years ago

Screenshot 1: InkedScreen Shot 2022-03-29 at 11 48 37 am_LI

Screenshot 2: InkedScreen Shot 2022-03-29 at 11 48 50 am_LI

@AllenKennedy the source of the divergence is analogous to the NU- vs FO-switches in SQUID 2.50.

The first expression (where "NU-switched" is blank) is simply taking the natural log of the spot 208/232 value (and ignoring the associated %err value). This is the equivalent of the FO-switch in SQUID 2.50, and you could verify this result from Squid3 output by using Excel's LN function with the spot 208/232 value as the argument.
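As a hedged illustration (the ratio value below is invented, not taken from the attached .squid file), the "FO-style" calculation reduces to a single logarithm of the spot value:

```python
import math

# Hypothetical spot total-208Pb/232Th ratio; the associated %err is simply ignored
spot_208_232 = 0.04872

# FO-switch behaviour: ln of the spot ratio, equivalent to Excel's LN()
ln_ratio = math.log(spot_208_232)
print(ln_ratio)  # ≈ -3.0217
```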

The second expression (with "NU-switched" ticked) calculates N-1 values of ln(208/232) [where N is the number of scans], with each value time-interpolated between scans. Uncertainties for each interpolated value are calculated numerically by perturbing each of the inputs. A "spot-mean" value (and its uncertainty) is generated either by taking a time-invariant weighted mean of the interpolated values, or by linear regression to burn midtime. This is the equivalent of the NU-switch in SQUID 2.50.
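A schematic sketch of the NU-style idea (all count rates and scan times below are invented, and this is not the exact Squid3 arithmetic, which follows the SQUID 2.50 Manual): form N-1 time-interpolated ln(208/232) values between successive scans, then combine them with a weighted mean.

```python
import numpy as np

# Hypothetical scan midtimes (s) and count rates for a 4-scan analysis
t_208 = np.array([10.0, 40.0, 70.0, 100.0])          # 208Pb scan midtimes
t_232 = np.array([12.0, 42.0, 72.0, 102.0])          # 232Th scan midtimes
c_208 = np.array([5100.0, 5180.0, 5230.0, 5300.0])   # 208Pb count rates
c_232 = np.array([1.05e5, 1.06e5, 1.07e5, 1.08e5])   # 232Th count rates

# Interpolate both species to the midpoints of successive 208Pb scans,
# giving N-1 interpolated ratio values
mid_t = 0.5 * (t_208[:-1] + t_208[1:])
c208_i = np.interp(mid_t, t_208, c_208)
c232_i = np.interp(mid_t, t_232, c_232)
ln_vals = np.log(c208_i / c232_i)

# Crude Poisson-style weights stand in here for the numerically
# perturbed uncertainties described above
w = 1.0 / (1.0 / c208_i + 1.0 / c232_i)
spot_mean = np.sum(w * ln_vals) / np.sum(w)
print(spot_mean)
```

The alternative spot-mean route mentioned above (linear regression to burn midtime) would fit `ln_vals` against `mid_t` and evaluate the fit at the analysis midtime instead of averaging.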

Here is the relevant section of the SQUID 2.50 Manual which compares and contrasts the two calculation methods:


The detail of the arithmetic is given in the SQUID 2.50 User Manual, from the base of page 43 to the base of page 46. All of these steps have been faithfully replicated in Squid3. The PDF is here in case it's handy:

squid2_user_manual_2.5.zip

Re your second regression, I'm not sure what the issue is there.

bowring commented 2 years ago

For the second issue, change LnpbU to be NU-switched; then both terms will have uncertainties and the regression will work. BTW, it is best not to use the "/" symbol in expression names: Squid will ignore it and will eventually remove it.

see attached image: regression

bowring commented 2 years ago

BTW, you could also alter LnpbU to have a constant 1-percent uncertainty, for example: valuemodel(ln(["206/238"]), 1, false)

which will also plot correctly

AllenKennedy commented 2 years ago

Hi Simon, Thanks for the explanation. I was aware of the different switches. I don't like identical typeset equations giving different results, as it has the potential to confuse users. Also, I don't think we should ever let users of SQUID3 produce numbers without uncertainties, no matter what was done in SQUID 2.50.

sbodorkos commented 2 years ago

Hi Allen, I don't know what we can do about the typesetting of expressions. We mimicked SQUID 2.50 because I had neither the authority nor the expertise to do differently. And one benefit is that we do get to refer to the SQUID 2.50 documentation to explain aspects of the functionality (even if not all of that functionality is desirable).

We can't mandate errors in inputs, so I don't think we can mandate errors in outputs. And there are plenty of areas of the data-processing where uncertainties (both input and output) are unimportant, unquantifiable, or both. Anything involving U (ppm) is probably a good example.

AllenKennedy commented 2 years ago

Simon, maybe the uncertainties on some parameters are too small to change calculated results and affect outcomes, but that doesn't mean they should not be calculated and reported where possible. At the simplest level, counting stats (Poisson distribution) give us an uncertainty for every peak we measure, and at least an approximate uncertainty for every ratio. Truly unquantifiable parameters are a very scary concept, and they should be identified in some way.
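At that simplest level, the counting-statistics arithmetic is just the following (a sketch with invented count totals):

```python
import math

# Poisson counting statistics: for N counts, sigma = sqrt(N),
# so the relative uncertainty of a peak measurement is 1/sqrt(N)
counts = 10_000                          # hypothetical counts on one peak
rel_err_peak = 1.0 / math.sqrt(counts)   # 0.01, i.e. 1%

# For a ratio of two independently counted peaks, the relative
# errors add in quadrature
counts_num, counts_den = 10_000, 40_000
rel_err_ratio = math.sqrt(1.0 / counts_num + 1.0 / counts_den)
print(rel_err_peak, rel_err_ratio)
```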

sbodorkos commented 2 years ago

Allen, we do have errors for ratios, but those are not always fit for purpose in terms of quantifying uncertainty, and in that circumstance, we would not want to mislead people. We had a similar discussion with Nickolay Rodionov last year with respect to U (ppm): we can propagate all the ratio-uncertainties through that calculation, and an unknown zircon with calculated U = 250 ppm probably has a numeric 1sigma somewhere between 2 ppm and 5 ppm. But that bears no resemblance to the real uncertainty, which is controlled by the true U content(s) of the chip of the concentration standard you are using on that mount (which we don't know and can't know). None of it matters much, though: it's purely a relative measure, and very useful in that context, despite being uncertainty-free.
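A hedged back-of-envelope of that numeric 1sigma (the component relative errors are invented): propagating only counting-statistics terms in quadrature through a 250 ppm calculation yields a few ppm, regardless of the (unknowable) true uncertainty contributed by the concentration standard.

```python
import math

# Hypothetical 1-sigma relative errors of the ratio terms entering
# a U (ppm) calculation (counting statistics only; invented values)
rel_components = [0.005, 0.008, 0.010]

u_ppm = 250.0
rel_total = math.sqrt(sum(r * r for r in rel_components))
sigma_ppm = u_ppm * rel_total
print(sigma_ppm)  # ≈ 3.4 ppm, a purely numeric figure
```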

I think Ken Ludwig had the right idea: he helped people concerned about error propagation by supplying the NU-switch. But it can't do everything, and for people wanting to do more complex calculations (or who are not interested in propagating uncertainties to begin with), the FO-switch helps them get their job done.