cgobat / asymmetric_uncertainty

A package for handling numeric quantities with asymmetric uncertainties.
https://github.com/cgobat/asymmetric_uncertainty/wiki
GNU General Public License v3.0

Issues with true division #8

Closed · sevenstarknight closed this issue 1 year ago

sevenstarknight commented 1 year ago

I've got the following equation: `asymmetricResults_M = 1/asymmetricResults_lambda`,

where `asymmetricResults_lambda = a(2.2819067612897336e-05, 6.594071993855804e-05, 2.0414838882371667e-05)`, which gives me a positive max/min as expected,

but `asymmetricResults_M` is giving me a positive max and a negative min, which is unexpected (~83028 max, ~ -82813 min). The central value itself is correct, however (~43823).

It appears that the plus/minus errors aren't getting swapped as part of the division process, but I have no idea why; after reviewing your code, that appears to be exactly what it should be doing. Maybe Python is interpreting the operation as multiplication instead?

Any suggestions are welcome
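
For completeness, a minimal script reproducing this (a sketch, using the `a_u` class as imported later in this thread; the "max"/"min" printed here are just the value plus/minus the respective error components):

    from asymmetric_uncertainty import a_u

    asymmetricResults_lambda = a_u(2.2819067612897336e-05, 6.594071993855804e-05, 2.0414838882371667e-05)
    asymmetricResults_M = 1/asymmetricResults_lambda
    print(asymmetricResults_M.value)                             # ~43823 -- correct
    print(asymmetricResults_M.value + asymmetricResults_M.plus)  # "max": ~83028
    print(asymmetricResults_M.value - asymmetricResults_M.minus) # "min": ~-82813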

sevenstarknight commented 1 year ago

Follow-up: using `asymmetricResults_M = asymmetricResults_lambda**-1.0` for now; it returns results as expected.

cgobat commented 1 year ago

Glad you found a workaround while I look into this. For reference, the `1/asymmetricResults_lambda` operation should trigger/call `asymmetricResults_lambda.__rtruediv__(other=1)`. It seems unlikely that Python would interpret it as multiplication instead, but I will investigate.
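
As a sanity check of the dispatch itself (a toy class, unrelated to this package), `1/x` does reach `__rtruediv__`:

    class Demo:
        def __rtruediv__(self, other):
            # int.__truediv__(1, demo_instance) returns NotImplemented,
            # so Python falls back to the right operand's __rtruediv__
            return f"__rtruediv__ called with other={other!r}"

    print(1 / Demo())  # -> __rtruediv__ called with other=1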

sevenstarknight commented 1 year ago

Would you be OK if I put a branch together with a unit test?

cgobat commented 1 year ago

Absolutely! :)

sevenstarknight commented 1 year ago

So I don't think that errors should be flipped in division:

    pos = np.sqrt((self.plus/self.value)**2 + (other.minus/other.value)**2) * np.abs(result)
    neg = np.sqrt((self.minus/self.value)**2 + (other.plus/other.value)**2) * np.abs(result)

Per Taylor, uncertainties in products and quotients propagate the same way, i.e. by summing the squares of their relative errors.
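
For reference, the symmetric form of that rule, for $q = x y$ or $q = x/y$ with uncertainties $\delta x$ and $\delta y$:

$$\frac{\delta q}{|q|} = \sqrt{\left(\frac{\delta x}{x}\right)^2 + \left(\frac{\delta y}{y}\right)^2}$$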

sevenstarknight commented 1 year ago

Same with subtraction:

    def __sub__(self,other):
        if isinstance(other,type(self)):
            pass
        else:
            other = a_u(other,0,0)
        result = self.value - other.value
        pos = np.sqrt(self.plus**2 + other.minus**2)
        neg = np.sqrt(self.minus**2 + other.plus**2)
        return a_u(result, pos, neg)  # presumably -- the quoted snippet was cut off before the return

cgobat commented 1 year ago

Upon looking at this further, I figured out what's going on and I think this is actually just an issue of expectations. As far as I can tell, the division and subtraction operations behave as designed/intended. The errors get flipped because a larger positive error in the denominator, for instance, should result in a larger error in the negative direction on the quotient/result. Consider the following simplified case:

    from asymmetric_uncertainty import a_u
    a = a_u(2., 1., 0.1)  # 2.0 (+1.0, -0.1)
    1/a                   # = 0.5 (+0.025, -0.25)

This is the current behavior, and is how I would argue things should be. In a, a positive error of 1.0 with a negative error of only 0.1 indicates that the "true value" of the quantity is much more likely to be closer to 3 than to, say, 1. When we divide 1 by a, we should expect that the result is more likely to be less than 0.5 than above it—hence the result's larger error in the negative direction. We can see the same principle in effect with subtraction:

    b = a_u(5., 3., 0.3)  # 5.0 (+3.0, -0.3)
    15 - b                # = 10.0 (+0.3, -3.0)

This result should make perfect sense when you consider the fact that -b is -5.0 (+0.3, -3.0).

Think of it like this: when computing each of the two asymmetric errors during subtraction/division operations, we have to consider which components of which errors contribute to making the result bigger or smaller. For division, the magnitude of the positive error on the numerator and that of the negative error on the denominator both contribute to an increase in the potential size of the quotient (because a larger numerator or a smaller denominator both mean a larger result); likewise, the numerator's negative error and the denominator's positive error both serve to make the quotient smaller.
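
A minimal standalone sketch of that bookkeeping (illustrative only; `AsymVal` is a stand-in class, not this package's implementation):

    import numpy as np

    class AsymVal:
        """Toy quantity with asymmetric errors: value (+plus, -minus)."""
        def __init__(self, value, plus, minus):
            self.value, self.plus, self.minus = value, plus, minus

        def __truediv__(self, other):
            result = self.value / other.value
            # numerator's +err and denominator's -err both enlarge the quotient
            pos = np.sqrt((self.plus/self.value)**2 + (other.minus/other.value)**2) * np.abs(result)
            # numerator's -err and denominator's +err both shrink it
            neg = np.sqrt((self.minus/self.value)**2 + (other.plus/other.value)**2) * np.abs(result)
            return AsymVal(result, pos, neg)

        def __repr__(self):
            return f"{self.value} (+{self.plus}, -{self.minus})"

    print(AsymVal(1.0, 0.0, 0.0) / AsymVal(2.0, 1.0, 0.1))
    # -> 0.5 (+0.025, -0.25), matching the 1/a example above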


> uncertainties in products and quotients propagate the same way, i.e. by summing the squares of their relative errors

Sure, agreed. Errors are propagated using the same formula for both multiplication and division, involving the summation of the relative errors in quadrature. However, that doesn't tell us anything about what to do with asymmetric errors, and where to use each error component in the formula. In this case, errors in the positive or negative directions have different effects on the result depending on which operand they belong to.
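
Concretely, for $q = x/y$, the assignment used here (matching the `pos`/`neg` lines quoted above) is

$$\frac{\sigma_+(q)}{|q|} = \sqrt{\left(\frac{\sigma_+(x)}{x}\right)^2 + \left(\frac{\sigma_-(y)}{y}\right)^2}, \qquad \frac{\sigma_-(q)}{|q|} = \sqrt{\left(\frac{\sigma_-(x)}{x}\right)^2 + \left(\frac{\sigma_+(y)}{y}\right)^2}$$

where $\sigma_+$ and $\sigma_-$ denote the positive and negative error components.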


While I believe that subtraction and division are working as intended, you did make me aware that the behavior of exponentiation with negative powers is not consistent with this: `asymmetricResults_lambda**-1.0` should yield the same result as `1/asymmetricResults_lambda`. I will open a new issue to address this.

I am going to close this issue, but feel free to respond/re-open it if this didn't clear it up.