lbl-anp / becquerel

Becquerel is a Python package for analyzing nuclear spectroscopic measurements.

Precision issues of isotope_qty #358

Open · cosama opened this issue 1 year ago

cosama commented 1 year ago

The isotope quantity module does two things that are not great for numerical precision: it subtracts two very large numbers, or it multiplies very large numbers by very small ones.

It is hard to get this right, and the best (only?) way would be to move the internals to https://github.com/fredrik-johansson/mpmath.

Shouldn't be too much of a lift.
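
For concreteness, here is a minimal sketch of the cancellation problem with made-up numbers (a Th-232-like half-life and a mole of atoms; this is not becquerel's actual code): the number of decays of a long-lived isotope over a short interval is the difference of two huge, nearly equal atom counts, and a plain float subtraction throws away most of the meaningful digits.

```python
import numpy as np

half_life_s = 1.405e10 * 3.156e7   # hypothetical: Th-232-like half-life in seconds
decay_const = np.log(2) / half_life_s
n0 = 6.022e23                      # hypothetical initial number of atoms
t = 3600.0                         # one hour

# Naive: subtract two ~1e23-sized numbers that agree to ~14 significant digits
decays_naive = n0 - n0 * np.exp(-decay_const * t)

# Less lossy: expm1 computes exp(x) - 1 without the catastrophic cancellation
decays_expm1 = -n0 * np.expm1(-decay_const * t)

print(decays_naive, decays_expm1)
```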

jvavrek commented 1 year ago

@cosama the tricky part is using mpmath with the uncertainties package https://github.com/lbl-anp/becquerel/pull/281#issuecomment-933968631

markbandstra commented 1 year ago

Could we use the standard library decimal class? It seems to provide arbitrary precision and works with numpy:

```python
import decimal
import numpy as np

a = decimal.Decimal(100)
decimal.getcontext().prec = 30  # 30 significant digits for Decimal arithmetic
print(a)
print(np.exp(a))         # object input: numpy dispatches to Decimal.exp(), keeping full precision
print(np.exp(float(a)))  # plain float64 result for comparison
```

which prints:

```
100
2.68811714181613544841262555158E+43
2.6881171418161356e+43
```

And it avoids overflow/underflow:

```python
a = decimal.Decimal(1000)
print(a)
print(np.exp(a))         # Decimal handles the huge magnitude
print(np.exp(float(a)))  # float64 overflows to inf (with a RuntimeWarning)
```

which prints:

```
1000
1.97007111401704699388887935224E+434
RuntimeWarning: overflow encountered in exp
  print(np.exp(float(a)))
inf
```

and

```python
a = decimal.Decimal(-1000)
print(a)
print(np.exp(a))         # tiny but nonzero
print(np.exp(float(a)))  # float64 underflows to 0.0
```

which prints:

```
-1000
5.07595889754945676529180947957E-435
0.0
```

(For reference, WolframAlpha gives exp(1000) = 1.9700711140170469938888793522433231253169379853238457899528... × 10^434 and exp(-1000) = 5.075958897549456765291809479574336919305599282892837361832... × 10^-435.)
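For comparison, a quick sketch of the same check with mpmath (the package suggested above); mpmath.mp.dps sets the working precision in decimal digits, and the results agree with the Decimal values at that precision:

```python
import mpmath

mpmath.mp.dps = 30        # 30 decimal digits of working precision
print(mpmath.exp(1000))   # ~1.97007111401704699388887935224e+434
print(mpmath.exp(-1000))  # ~5.07595889754945676529180947957e-435
```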

markbandstra commented 1 year ago

Following up on a discussion with @cosama: if we are able to fix this problem by increasing to arbitrary precision, then the root cause of the IsotopeQuantity issues is the precision of Isotope.decay_const, which is calculated from Isotope.half_life, which is in turn read from a reference DataFrame here.
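
To illustrate the point (a minimal sketch with a hypothetical half-life, not the actual lookup): once the half-life has passed through a 64-bit float, promoting it to Decimal cannot recover digits, so decay_const carries at most the ~16 significant digits of that float no matter how high the Decimal precision is set.

```python
import decimal

decimal.getcontext().prec = 30
half_life = 4.468e9 * 3.156e7  # hypothetical U-238-like half-life in seconds, as a float64

# Decimal(float) converts the binary float exactly, so the quotient is printed
# to 30 digits but is only as accurate as the float (and the table) behind it.
decay_const = decimal.Decimal(2).ln() / decimal.Decimal(half_life)
print(decay_const)
```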

cosama commented 1 year ago

I agree, Decimal would work. I somehow thought Decimal was the same as Fraction. It looks like Decimals are also faster than mpmath. We have to be careful to cast back to float before using the result with any quantity that could be a ufloat:

```python
>>> import decimal
>>> import uncertainties
>>> uncertainties.ufloat(1, 1) * decimal.Decimal(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for *: 'Variable' and 'decimal.Decimal'
```

while

```python
>>> uncertainties.ufloat(1, 1) * float(decimal.Decimal(1))
1.0+/-1.0
```
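
A possible shape for the fix, sketched here with hypothetical names and numbers (not the actual becquerel implementation): evaluate the exponential in Decimal, then cast back to float before mixing with ufloats.

```python
import decimal
import uncertainties

decimal.getcontext().prec = 30

def decay_factor(decay_const, t):
    """Hypothetical helper: exp(-decay_const * t) evaluated in Decimal to avoid
    overflow/underflow, returned as a plain float so it can multiply a ufloat."""
    return float((-decimal.Decimal(decay_const) * decimal.Decimal(t)).exp())

activity0 = uncertainties.ufloat(100.0, 5.0)  # hypothetical activity with uncertainty
print(activity0 * decay_factor(1e-9, 3600.0))
```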