The `uncertainties` package is meant to handle uncertainties like these, and it even does this transparently when the uncertainties are correlated. In the particular case of the question, `uncertainties` would give you exactly the quoted uncertainty. If you apply the formula that transforms the measurements (with uncertainties) into the best (i.e., minimal chi-squared) estimate, you automatically get the uncertainty on the best estimate.
Does this answer your question?
@lebigot yes and no :)
is there any example I can follow for this particular combined uncertainty?
Maybe I don't understand the question: it looks to me like everything is explained in the documentation, so I am not sure what the blocking point is. Let me have a stab at it, though: the question on Stack Overflow considers a few random variables: we can take for example
```python
>>> from uncertainties import ufloat
>>> v1 = ufloat(1, 0.1)
>>> v2 = ufloat(2, 0.2)
>>> v3 = ufloat(1.5, 0.3)
```
If you calculate their average, `uncertainties` automatically calculates the uncertainty on the average:
```python
>>> (v1+v2+v3)/3
1.5+/-0.12472191289246472
```
You can check that the formula from the question gives the same result (after you transform the incorrect denominator 1/N into the correct 1/N^2), which you don't have to calculate, thanks to `uncertainties`.
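As a quick sanity check (independent of the package), the corrected formula applied to the numbers above reproduces this uncertainty:

```python
from math import sqrt

# sigma_mean = sqrt(sum of variances) / N, i.e. the formula from the
# question with the corrected 1/N^2 denominator pulled out of the sum:
print(sqrt(0.1**2 + 0.2**2 + 0.3**2) / 3)  # ~0.1247, as above
```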
`uncertainties` even transparently handles correlations: things like `v1 - v1` are always equal to 0, with no uncertainty (in this case the formula from the question does not apply, but `uncertainties` does the right thing):
```python
>>> v1 - v1
0.0+/-0
```
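For instance, `2*v1 - v1` is fully correlated with `v1`, so its uncertainty is that of `v1` itself (0.1), not the sqrt(5) * 0.1 ≈ 0.22 that the uncorrelated formula would predict:

```python
>>> 2*v1 - v1
1.0+/-0.1
```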
Does this clear things up?
Yes, definitely! Now let me look at this closer myself.
PS: The formula for the uncertainty in the question is incorrect: the denominator should be N^2, not N.
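Spelled out, the corrected uncertainty on the plain average of $N$ independent measurements is

$$ \sigma_{\bar{x}} = \frac{1}{N}\sqrt{\sum_i \sigma_i^2}, $$

which for the example above gives $\sqrt{0.14}/3 \approx 0.1247$.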
PPS: `uncertainties` works not only with averages but with any kind of function (like `sin(v1/v2)`).
What happens when uncorrelated uncertainties are combined?
Also, your example does not seem correct: the combined value is not the arithmetic average; the weights in the sum are based on the corresponding uncertainties.
As I was writing, I was translating for you the math from the question, not the math from the answer that you just referred to. If you want to translate the math from Steve B's answer then, again as I wrote in my first answer above, you simply replace the average `(v1+v2+v3)/3` by the linear combination of `v1`, `v2`, and `v3` that the usual chi^2 minimization method gives you. At this point, I can only recommend that you learn about chi^2 minimization and about fitting values with uncertainties to a constant (this is a simpler case than linear regression with non-identical errors).
If you want to know in general how uncorrelated uncertainties are combined by `uncertainties` (and by most uncertainty calculations done by hand), you must learn about uncertainty propagation.
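For concreteness, here is one way of forming that linear combination with the package (an illustrative sketch reusing the example values above; `best` is just a local variable, not an API of `uncertainties`):

```python
from uncertainties import ufloat

v1 = ufloat(1, 0.1)
v2 = ufloat(2, 0.2)
v3 = ufloat(1.5, 0.3)

values = [v1, v2, v3]
# Inverse-variance weights, w_i = 1/sigma_i^2 (plain floats, not ufloats):
weights = [v.std_dev ** -2 for v in values]

# The chi^2-minimizing linear combination; the uncertainty is propagated
# automatically and equals 1/sqrt(sum(weights)) for independent inputs:
best = sum(w * v for w, v in zip(weights, values)) / sum(weights)
print(best)  # ~1.224+/-0.086
```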
Sorry to wake up a 9-year-old issue, but I too was wondering about this subject, and I would have liked a `unumpy.mean` function to exist that does the following:
$$ \mu = \frac{\sum_i (x_i/\sigma_i^2)}{\sum_i \sigma_i^{-2}}$$
$$ \sigma_\mu = 1/\sqrt{\sum_i \sigma_i^{-2}} $$
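A minimal sketch of what such a helper could look like, assuming uncorrelated elements (`weighted_mean` is a hypothetical name, not an existing `unumpy` function):

```python
from uncertainties import ufloat, unumpy

def weighted_mean(arr):
    # Hypothetical helper: inverse-variance weighted mean of a uarray.
    # Assumes uncorrelated elements with nonzero std devs; any
    # correlations between the inputs are discarded.
    x = unumpy.nominal_values(arr)
    w = unumpy.std_devs(arr) ** -2.0  # w_i = 1/sigma_i^2
    return ufloat((w * x).sum() / w.sum(), w.sum() ** -0.5)

arr = unumpy.uarray([1.0, 2.0, 1.5], [0.1, 0.2, 0.3])
print(weighted_mean(arr))  # ~1.224+/-0.086
```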
Would that be acceptable as a PR @lebigot?
I have three points regarding this:
1/2. Yes, I meant the maximum likelihood estimate that is also mentioned here. You are correct that the formula I wrote assumes the correlations are all $0$, and I'm definitely up for generalizing it!
Here are the revised formulas for the uncertainty, based on a complementary answer in the same SO Q&A, with the same formula as before for the weighted mean:
$$ \mu = \frac{\sum_i (x_i/\sigma_i^2)}{\sum_i \sigma_i^{-2}}$$
$$ \sigma_\mu = \frac{\sqrt{\sum_{i,j} \sigma_i^{-2} \sigma_j^{-2} \, \mathrm{Cov}(x_i, x_j)}}{\sum_i \sigma_i^{-2}} $$

where, of course, $\mathrm{Cov}(x_i, x_i) = \sigma_i^2$.
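Under these formulas, one possible sketch uses `uncertainties.covariance_matrix` (illustration only; the example variables are made up, with `v3` deliberately correlated with `v1`):

```python
import numpy as np
from uncertainties import ufloat, covariance_matrix

v1 = ufloat(1, 0.1)
v2 = ufloat(2, 0.2)
v3 = v1 + ufloat(0.5, 0.1)  # deliberately correlated with v1

xs = [v1, v2, v3]
w = np.array([x.std_dev ** -2 for x in xs])  # w_i = 1/sigma_i^2
cov = np.array(covariance_matrix(xs))        # Cov(x_i, x_j); diagonal is sigma_i^2

mu = w @ np.array([x.nominal_value for x in xs]) / w.sum()
sigma_mu = np.sqrt(w @ cov @ w) / w.sum()
print(mu, sigma_mu)
```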
I think that implementing the above is quite non-trivial, and it is therefore worth adding to `uncertainties`. I have also managed to generalize it to averaging over any set of axes in #265.
Can this package handle uncertainty calculations like this?
http://physics.stackexchange.com/questions/57317/multiple-measurements-of-the-same-quantity-combining-uncertainties
If not, why?