lmfit / uncertainties

Transparent calculations with uncertainties on the quantities involved (aka "error propagation"); calculation of derivatives.
http://uncertainties.readthedocs.io/

Zero nominal value wipes out errors? #92

Open · nick-parker opened this issue 5 years ago

nick-parker commented 5 years ago

Hi, my stats are a bit rusty, so I apologize if this is actually correct math and I'm just confused...

`ufloat(0, 10)**2` gives me 0 +/- 0. That seems wrong. I know it's a bit silly to open an issue here, because there's no way this is a bug in a package as popular as this one, but what am I missing?
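For concreteness, here is the behavior as a runnable snippet (the printed value is the 0 +/- 0 reported above):

```python
from uncertainties import ufloat

x = ufloat(0, 10)   # nominal value 0, standard deviation 10
print(x**2)         # prints 0.0+/-0: the uncertainty vanishes
```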

I stumbled upon this while trying to propagate errors through some fairly simple 3D math. I realized this package doesn't implement `np.linalg.norm`, so I wrote it myself as `umath.sqrt(v.dot(v))` (sketched below). But for basis vectors with minor variations, e.g. [1 +/- 0.01, 0 +/- 0.005, 0 +/- 0.005], it just gave back the exact uncertainty of the nominally nonzero component. I then tried [1 +/- 0.01, 0 +/- 10, 0 +/- 10] and other very large deviations on the 0 components, to no effect. From a geometric perspective, it feels like the average length of that distribution of vectors should be much closer to 10 than to 1.
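A sketch of what that norm computation might look like (the function name `unorm` is my own; the vector components are the ones quoted above):

```python
import numpy as np
from uncertainties import ufloat, umath

def unorm(v):
    # np.linalg.norm does not propagate uncertainties, so build the
    # Euclidean norm from operations that uncertainties understands.
    return umath.sqrt(v.dot(v))

# A basis vector with small uncertainties, as in the example above.
v = np.array([ufloat(1, 0.01), ufloat(0, 0.005), ufloat(0, 0.005)])
print(unorm(v))  # 1.000+/-0.010: only the nonzero component contributes

# Even huge deviations on the zero components change nothing:
w = np.array([ufloat(1, 0.01), ufloat(0, 10), ufloat(0, 10)])
print(unorm(w))  # still 1.000+/-0.010
```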

Thanks, -Nick

lebigot commented 5 years ago

You’re right that (0 +/- 10)^2 should ideally not give an exact zero. However, the uncertainties package performs linear error propagation, and here we are looking at a quadratic function at zero, so it is normal (albeit imperfect) that it returns an exact zero. Any nonzero nominal value would, on the other hand, give a good uncertainty as long as the square is well approximated by a linear function over the uncertainty interval, i.e. as long as higher-order contributions are small compared to the linear part.
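In formula form (my restatement of standard first-order propagation, not text from the thread):

```latex
\sigma_f \approx \lvert f'(x_0)\rvert \,\sigma_x ,
\qquad f(x) = x^2 \;\Rightarrow\; f'(0) = 0 \;\Rightarrow\; \sigma_f = 0 .
```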

This is covered in the documentation at https://uncertainties-python-package.readthedocs.io/en/latest/tech_guide.html#constraints-on-the-uncertainties, but there I give the more complex example of the cosine at zero instead of a square. Maybe I should use the square as an early example in order to clearly show the limitations of linear error propagation?

The documentation also gives some pointers to other libraries, should you need a method that goes beyond linear error propagation.

In my eyes, the ability to correctly handle squares would only push the problem back: cubes, etc., would still be incorrect. The only numerically exact method I can imagine is the Monte-Carlo one, but it’s slow, and exponentially so in the number of variables involved. So the uncertainties package chooses to be approximate (which is much better than not handling uncertainties at all in calculations) and quite fast even with lots of variables.
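A minimal sketch of the Monte-Carlo approach mentioned above, using plain NumPy rather than anything in uncertainties, shows the nonzero spread that linear propagation misses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample the input distribution: nominal value 0, standard deviation 10.
x = rng.normal(loc=0.0, scale=10.0, size=1_000_000)
y = x**2

# The squared quantity has a clearly nonzero spread, unlike the exact
# zero returned by linear propagation: roughly mean 100, std 141
# (a scaled chi-squared distribution with one degree of freedom).
print(y.mean(), y.std())
```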

I hope this helps!

nick-parker commented 5 years ago

That makes perfect sense, thanks for the clear answer!

lebigot commented 5 years ago

Thanks.

Just to be fully clear: the only problem with the square arises for a zero nominal value. Squaring a nonzero nominal value with an uncertainty gives a good result as long as the uncertainty is small, as described in the documentation.
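For example (a quick check away from zero; the derivative of x^2 at x = 1 is 2, so the uncertainty 0.01 is scaled by 2):

```python
from uncertainties import ufloat

print(ufloat(1, 0.01)**2)  # roughly 1.000+/-0.020
```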

PS: I’m keeping this issue open as a reminder to make this more obvious in the documentation.