fzeiser opened this issue 4 years ago
I must have been pretty confused when writing the above. Of course we can't separate the data into n arbitrary time steps; that would just correlate uncorrelated events, so the expected correlation is always 0. However, we can correlate on an event-by-event basis (+ appropriate binning).
I was thinking about how we could include not only the standard deviation but also the correlations in Oslo-method type particle-gamma coincidence experiments. Even though the counting process is governed by Poisson statistics, we should in general expect correlations when we measure the decay of a nucleus from a specific excitation energy. These correlations have been neglected in the analysis so far.
If one starts at a given excitation energy and observes the decay (down to the ground state), the decay can proceed through different cascades; which cascades are favored depends (in the statistical regime) on the nuclear level density and the gamma-ray strength function. As an example, from 10 MeV one could decay through eight gammas of 1 MeV and one gamma of 2 MeV, or through a sequence of [2, 2, 2, 1, 3] MeV gammas. Depending on which of these cascades is favored in the decay, one gets a different covariance between the counts of gamma rays at 2 MeV and at other energies. So the usual assumption of independent Poisson distributions for each bin is not fulfilled.
I think we could try to recover the covariance by splitting up our data into n time steps. Then we analyze the covariance between all gamma-ray energy bins. It might be challenging in cases where we do not have a lot of data, but for many experiments we have a lot of data over a large energy range. -- Is there any good measure of the uncertainty of covariance calculations? It might also turn out that we have such a strong mixing of different cascades that the correlations are very small (apart from correlations within/due to the detector resolution).
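On the uncertainty question: one brute-force option could be a bootstrap over events, i.e. resample the recorded decays with replacement, recompute the covariance matrix between the energy bins for each resample, and take the spread of the replicas as a measure of the uncertainty. A rough sketch of what I mean (the `events` structure and variable names are just placeholders, not anything that exists in the code base):

```python
import numpy as np

rng = np.random.default_rng(0)

def bin_events(events, bins):
    """Histogram the gamma-ray energies of each event (decay) separately.

    `events` is a list of arrays, one array of gamma energies per recorded
    decay; returns an (n_events, n_bins) matrix of counts.
    """
    counts = np.zeros((len(events), len(bins) - 1))
    for i, egamma in enumerate(events):
        counts[i], _ = np.histogram(egamma, bins=bins)
    return counts

def bootstrap_cov(events, bins, n_boot=500):
    """Bootstrap estimate of the covariance between gamma-energy bins.

    Whole events are resampled with replacement, so correlations within a
    cascade are preserved. Returns the mean covariance matrix over the
    bootstrap replicas and its element-wise standard deviation.
    """
    counts = bin_events(events, bins)
    n_events, n_bins = counts.shape
    covs = np.empty((n_boot, n_bins, n_bins))
    for b in range(n_boot):
        idx = rng.integers(0, n_events, size=n_events)
        covs[b] = np.cov(counts[idx], rowvar=False)
    return covs.mean(axis=0), covs.std(axis=0)

# Usage (events would come from the sorted coincidence data):
# cov_mean, cov_err = bootstrap_cov(events, bins=np.arange(0, 10.5, 0.5))
```

The element-wise spread over the replicas would then tell us how well each covariance element is actually determined by the available statistics.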
A short code example showing this problem:
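Something along these lines, as a minimal sketch -- the two cascades, the 50/50 branching and the fixed number of decays per pseudo-experiment are purely made up, and the detector response is ignored:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical cascades de-exciting a level at Ex = 10 MeV
# (energies in MeV; cascades and branching ratios are purely illustrative).
cascades = [
    np.array([1., 1., 1., 1., 1., 1., 1., 1., 2.]),  # 8 x 1 MeV + 1 x 2 MeV
    np.array([2., 2., 2., 1., 3.]),                  # 2+2+2+1+3 MeV
]
p_cascade = [0.5, 0.5]

n_experiments = 2000               # repeated pseudo-experiments
n_decays = 500                     # decays per pseudo-experiment (kept fixed here)
bins = np.arange(0.5, 4.0, 1.0)    # 1 MeV wide bins around 1, 2 and 3 MeV

counts = np.zeros((n_experiments, len(bins) - 1))
for i in range(n_experiments):
    choice = rng.choice(len(cascades), size=n_decays, p=p_cascade)
    egamma = np.concatenate([cascades[c] for c in choice])
    counts[i], _ = np.histogram(egamma, bins=bins)

cov = np.cov(counts, rowvar=False)
print("Covariance matrix of the counts in the 1, 2 and 3 MeV bins:")
print(np.round(cov, 1))

# Independent Poisson bins would give a (nearly) diagonal covariance matrix.
# Here the off-diagonal elements are clearly non-zero: e.g. the 1 MeV and
# 2 MeV counts are anti-correlated, because a decay through the first cascade
# produces many 1 MeV gammas but few 2 MeV gammas, and vice versa for the
# second cascade.
```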