jbhopkins opened this issue 4 years ago
Hi Jesse, I am not sure this is a bug ... I talked to many people and most of them doubt that measuring the variance this way is correct. I should raise a NotImplementedError for now to be explicit, and I need to re-think the azimuthal variance.
This feature may be available via the sigma-clipping, but that's in the future, probably not before the end of the year.
Jerome,
As I understand it, the azimuthal method calculates the standard deviation of the mean (a.k.a. the standard error). While I agree that if you have a photon-counting detector it's probably best to use the Poisson model for the error, if you have an integrating detector that obviously doesn't work. Integrating detectors are less common than photon-counting detectors, but they still exist and are used for experiments.
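To make the statistic concrete, here is a minimal numpy sketch of what "standard error along the ring" means (an illustration only, not pyFAI's internal code):

```python
import numpy as np

def azimuthal_error(pixel_values):
    """Standard error of the mean intensity of the pixels falling in one bin.

    Sketch of the 'azimuthal' idea discussed above: estimate the spread of
    the pixel values along the ring and divide by sqrt(N).
    """
    values = np.asarray(pixel_values, dtype=float)
    n = values.size
    return values.std(ddof=1) / np.sqrt(n)
```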
Just a couple of examples of this off the top of my head. First, as I understand it, most XFEL detectors are integrating, because photon-counting detectors can't count photons arriving within the short bunch duration. This is a small fraction compared to the number of synchrotron experiments, but a growing and important one. Second, my beamline has a Mar CCD that we use for certain experiments that require small pixel sizes to resolve narrow diffraction peaks. I suspect there are plenty of others who break these older detectors out for occasional experiments.
I'd strongly advocate for leaving the capability in (or adding it to the new integrator), though I know that's probably a lot of work on your side.
Hi Jesse,
I know integrating detectors are striking back ... we have Jungfrau detectors here and I need to implement the error propagation for those detectors as well.
My point was rather that measuring the deviation along a ring was giving error-bars which were off by several orders of magnitude (too small, if I remember correctly).
It would be interesting to check in this tutorial, where the data are synthetic (and the underlying distribution known), whether the "azimuthal" error model gives the proper error-bars or not. I suspect not, but I did not check yet. If the results are wrong, it would validate the removal of this feature. If the error-bars are of similar magnitude, the method is scientifically validated.
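For reference, a rough sketch of that kind of synthetic check (illustrative only: a flat ring with Poisson counting noise, not the tutorial's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic ring: N pixels, flat true intensity, Poisson noise.
n_pix = 5000
true_intensity = 100.0
counts = rng.poisson(true_intensity, size=n_pix)

# Poisson error model: Var(mean) = sum(counts) / N**2
sigma_poisson = np.sqrt(counts.sum()) / n_pix

# "Azimuthal" error model: spread of the values along the ring / sqrt(N)
sigma_azimuthal = counts.std(ddof=1) / np.sqrt(n_pix)

print(sigma_poisson, sigma_azimuthal)  # expected to agree within a few percent
```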
To come back to integrating detectors, they have read-out noise and gain that we need to characterize with their associated deviation. This gives a per-pixel variance similar to "sigma² = readout_sigma² + gain*(signal-noise)", which may be propagated the same way as in the case of a Poissonian detector. I already implemented things like this for the TruSAXS beamline at ESRF.
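A hedged sketch of how that per-pixel variance model could be propagated to one bin (the function and variable names are illustrative, not pyFAI's API):

```python
import numpy as np

def per_pixel_variance(signal, noise, gain, readout_sigma):
    """Variance model quoted above: sigma^2 = readout_sigma^2 + gain*(signal - noise).

    Names mirror the formula in the comment; they are not pyFAI identifiers.
    """
    return readout_sigma ** 2 + gain * (signal - noise)

def propagate_to_bin(signal, variance):
    """Propagate per-pixel variances to the mean of one azimuthal bin,
    assuming independent pixels: Var(mean) = sum(var_i) / N^2.
    """
    n = signal.size
    mean_intensity = signal.mean()
    sigma_mean = np.sqrt(variance.sum()) / n
    return mean_intensity, sigma_mean
```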
Apparently, when the data are actually synthetic, the two approaches look equivalent ...
So Jesse, you are right: we need to keep the option error_model="azimuthal" and implement it in all the integrators (ain't gonna be fun). This makes me wonder how to deal with heavily tilted detectors or other use-cases where the efficiency varies a lot from pixel to pixel (still unlikely in SAXS).
I'll keep this issue open until a solution has been found.
Jerome, thanks for looking into it. While I agree that your approach of detector-specific corrections is probably best when possible, keeping the azimuthal calculation around for cases where the detector-specific details are not known still seems useful.
I am trying to find a configuration where a detector is tilted by ~45° ... this will ensure the solid angle within a ring varies enough to have a significant impact on the measured intensity, and let me check whether the "azimuthal" error-model is still applicable to estimate the variance.
To investigate, the fit2d geometry helps: one can set the tilt to ~70°, with the beam centered on the detector, and use this as a starting point.
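A possible starting point in code, using setFit2D to express such a geometry (the numbers, the detector choice and the synthetic image are illustrative only, not a validated test case):

```python
import numpy as np
from pyFAI.azimuthalIntegrator import AzimuthalIntegrator

# Hypothetical strongly tilted geometry, beam centred on the sensor.
ai = AzimuthalIntegrator(detector="Pilatus1M", wavelength=1e-10)
ai.setFit2D(directDist=100,                       # sample-detector distance in mm
            centerX=ai.detector.shape[1] / 2,     # beam centre in pixels
            centerY=ai.detector.shape[0] / 2,
            tilt=70,                              # degrees, as suggested above
            tiltPlanRotation=0)

# The solid-angle correction now varies strongly along each ring; integrating a
# synthetic flat image lets one compare the error models in that situation.
img = np.random.poisson(100, size=ai.detector.shape).astype(np.float64)
res = ai.integrate1d(img, 500, error_model="azimuthal", correctSolidAngle=True)
```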
pyFAI: 0.18 (see #1331, 0.19 not working with the new integrator)
python: 3.7.4
OS: MacOS 10.14.6
Installation method: conda (main channel and conda-forge)
When I use the "azimuthal" error model with the new 1d integrator it gives the following error:
Full code to reproduce:
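(The reporter's actual script is not shown here. A minimal sketch of the kind of call involved might look like the following; the geometry values are arbitrary and the assumption that the new-generation 1D engine is exposed as integrate1d_ng depends on the pyFAI version in use.)

```python
import numpy as np
from pyFAI.azimuthalIntegrator import AzimuthalIntegrator

# Hypothetical setup, not the reporter's actual geometry or data.
ai = AzimuthalIntegrator(dist=0.1, detector="Pilatus1M", wavelength=1e-10)
img = np.random.poisson(100, size=ai.detector.shape).astype(np.float64)

# Requesting the "azimuthal" error model from the new-generation 1D integrator
# is the combination reported to fail above.
res = ai.integrate1d_ng(img, 1000, error_model="azimuthal")
```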