DiamondLightSource / adcorr

A collection of pure python functions for performing area detector corrections
Apache License 2.0

Frames averager can output additional uncertainty estimates #121

Open toqduj opened 2 years ago

toqduj commented 2 years ago

I'm assuming Frames averager propagates uncertainties...

However, in addition, it could output not just the mean but also the standard deviation, or the standard error of the mean. These actual uncertainties can be used to replace, or be compared with, the theoretical uncertainties at a later stage.
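For illustration, a minimal sketch of the requested statistics, assuming the frames arrive as a NumPy stack of shape (n_frames, y, x); frame_statistics is an illustrative name, not adcorr API:

```python
import numpy as np

def frame_statistics(frames: np.ndarray):
    """Per-pixel mean, sample standard deviation and standard error of the mean
    over a stack of frames with shape (n_frames, y, x)."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0, ddof=1)      # sample standard deviation across frames
    sem = std / np.sqrt(frames.shape[0])  # standard error of the mean
    return mean, std, sem
```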

garryod commented 2 years ago

You're correct: if average_all_frames is passed an array of numcertain.uncertain values, then the resultant frame will have correctly attributed uncertainties.

Am I correct in understanding that, in addition to outputting the averaged values, you would like to output the standard deviation / variance of the input frames? This would be a trivial addition which I am happy to make, but I am unsure how you intend to use these downstream. Could you elaborate on this a bit further?
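A minimal sketch of the propagation described above, assuming average_all_frames is importable directly from adcorr and accepts an array of frames, and that numcertain.uncertain takes a nominal value and an uncertainty; the import path and constructor signature are assumptions:

```python
import numpy as np
from adcorr import average_all_frames  # import path assumed
from numcertain import uncertain       # uncertain(nominal, uncertainty) assumed

# Ten nominally identical detector frames of raw photon counts.
counts = np.random.poisson(100.0, size=(10, 64, 64))

# Wrap each pixel with its Poisson uncertainty, sqrt(N), giving an object array.
frames = np.vectorize(uncertain)(counts.astype(float), np.sqrt(counts))

# The per-pixel uncertainties propagate through the averaging.
averaged = average_all_frames(frames)
```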

toqduj commented 2 years ago

Your understanding is correct. We initially start with only (Poisson) counting-statistics-based uncertainty estimates and propagate these. Poisson statistics give the smallest possible uncertainty estimate. However, there can be other sources of uncertainty that bring the actual uncertainty up.

These actual uncertainties can occasionally be determined, for example by comparing multiple ostensibly identical frames. We then get additional estimates for the uncertainties, for example by looking at the SEM or the standard deviation.

So we end up with multiple uncertainty arrays. These can be used later on, either for:

  1. estimating a new "user-facing" uncertainty array which is closer to the actual practical uncertainties, by combining the multiple uncertainty arrays in a sensible way (a sketch covering both points follows this list). I usually estimate this new array as the element-wise maximum of the available uncertainties, and require it to be no less than 1% of the intensity (as this is realistically the smallest effect magnitude of our extensive corrections). Though this 1% is rather arbitrary, it has held up well in subsequent data analysis further down the line.
  2. comparing uncertainties. If we have segments of the detector where the practical uncertainty is consistently larger than the Poisson uncertainty, we have an unknown additional source of noise in there, and some instrumental troubleshooting to do.
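A minimal sketch covering both points, assuming each uncertainty estimate is a NumPy array with the same shape as the intensity; the 1% floor mirrors the heuristic in point 1 and none of these names are adcorr API:

```python
import numpy as np

def combine_uncertainties(intensity, *estimates, floor_fraction=0.01):
    """Point 1: element-wise maximum of all estimates, floored at a fraction of the intensity."""
    combined = np.max(np.stack(estimates), axis=0)
    return np.maximum(combined, floor_fraction * np.abs(intensity))

def excess_noise_mask(poisson_sigma, practical_sigma, factor=1.0):
    """Point 2: flag pixels whose practical uncertainty exceeds the Poisson estimate."""
    return practical_sigma > factor * poisson_sigma
```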

So, as indicated in #123, we would need to be able to add a list of uncertainties to each dataset, rather than trying to encapsulate all uncertainties in a single array.

toqduj commented 2 years ago

Blocked by https://github.com/DiamondLightSource/numcertain/issues/77