chuckie82 / pysingfel


Repair/test photons -> ADU #55

Open AntoineDujardin opened 4 years ago

AntoineDujardin commented 4 years ago

The photon -> ADU transformation is unclear and requires a precomputed database.

I had a discussion with Haoyuan. He considers that each photon produces electrons in a 5-by-5 pixel region around its hit point. Each photon is taken to produce 130 electrons, sampled according to a (truncated) Gaussian distribution within that 5-by-5 window.

Since sampling the 130 electrons individually for each photon was extremely slow, he exported 1 million pre-sampled distributions to a numpy dump file (the precomputed database). He then samples one of these 5-by-5 spreading kernels for each photon. In practice, I think we could use

np.random.multinomial(130, the_5_by_5_distribution_weights,
                      (number_of_photons_in_the_image,))

to generate our kernels.

The sigma value is unclear. The sampling also assumes that the photon is centered on one of the pixels.
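To make the proposal concrete, here is a minimal sketch of the multinomial kernel sampling. The sigma value and the centered-on-a-pixel assumption come straight from the discussion above; the `sigma=1.0` default and the function names are placeholders, not the actual pysingfel values.

```python
import numpy as np

def gaussian_weights_5x5(sigma=1.0):
    """Weights of a Gaussian centered on the middle pixel of a 5x5 window.

    sigma (in pixel units) is a placeholder: the issue notes the actual
    value is unclear. The truncation to 5x5 happens implicitly by
    normalizing over the window.
    """
    x = np.arange(-2, 3)
    xx, yy = np.meshgrid(x, x)
    w = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return (w / w.sum()).ravel()  # flatten: multinomial wants 1-D pvals

def sample_kernels(n_photons, weights, n_electrons=130, rng=None):
    """Draw one 5x5 electron-spreading kernel per photon.

    Each row of the multinomial draw sums to n_electrons, i.e. every
    photon deposits exactly 130 electrons somewhere in its window.
    """
    rng = np.random.default_rng(rng)
    counts = rng.multinomial(n_electrons, weights, size=n_photons)
    return counts.reshape(n_photons, 5, 5)

kernels = sample_kernels(1000, gaussian_weights_5x5(sigma=1.0))
```

Each kernel can then be added into the electron image around the corresponding photon's pixel.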

To interpret the data, we can use psana's ADU -> photon functionality presented in: https://lcls-psana.github.io/Detector/index.html#module-AreaDetector

chuckie82 commented 4 years ago

5-by-5 spread will depend on the pixel size.

Is np.random.multinomial fast enough to execute on the fly? If not, we could calculate a few thousand and keep them in memory when we start our simulation.
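The keep-in-memory idea above could look like the sketch below: draw a pool of kernels once at startup, then index into it per photon instead of resampling. The pool size, the uniform weights, and all names here are illustrative assumptions, not the actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Precompute a pool of spreading kernels once at simulation start.
# Uniform 5x5 weights are a placeholder for the real (truncated
# Gaussian) distribution, whose sigma is still unclear.
weights = np.full(25, 1 / 25)
pool = rng.multinomial(130, weights, size=5000).reshape(5000, 5, 5)

# Per image: reuse pool entries, one random draw of an index per photon.
n_photons = 100_000
kernels = pool[rng.integers(0, len(pool), size=n_photons)]
```

This is essentially what the existing precomputed database does, except the pool is generated in memory at startup rather than loaded from a numpy dump.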

AntoineDujardin commented 4 years ago

For 5-by-5 grids adding up to 130 electrons each, assuming 1,000,000 samples (photons), it takes 3.2 seconds, which is almost as long as everything else in the simulation.

On the other hand, the current method involves looping over each pixel of the image, drawing as many precomputed distributions as there are photons at that pixel, and adding them all up. So it's going to take time anyway.

AntoineDujardin commented 4 years ago

However, within a single pixel, the electrons from the different photons are indistinguishable. Thus, we can call np.random.multinomial(130 * n_phot, weights) once per pixel. That is much faster.

For comparison, if we assume 1,000,000 photons in the same pixel, it only takes 21 µs (!), compared to 3.2 s when sampling each photon separately.
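The pooled per-pixel approach can be sketched as follows: one multinomial draw per occupied pixel with n = 130 * n_phot, scattered into a padded electron image. The function name, the uniform weights, and the clip-at-the-edges handling are assumptions for illustration only.

```python
import numpy as np

def photons_to_electrons(photon_counts, weights_5x5, n_electrons=130,
                         rng=None):
    """Spread electrons from a photon-count image into an electron image.

    Pools all photons in a pixel into a single multinomial draw with
    n = n_electrons * n_phot, as discussed above. weights_5x5 stands in
    for the (still unclear) spreading distribution; electrons falling
    outside the detector are simply discarded by cropping.
    """
    rng = np.random.default_rng(rng)
    h, w = photon_counts.shape
    out = np.zeros((h + 4, w + 4), dtype=np.int64)  # 2-pixel pad per side
    ys, xs = np.nonzero(photon_counts)
    for y, x in zip(ys, xs):  # loop only over occupied pixels
        n_phot = int(photon_counts[y, x])
        kernel = rng.multinomial(n_electrons * n_phot,
                                 weights_5x5.ravel()).reshape(5, 5)
        out[y:y + 5, x:x + 5] += kernel  # window centered on (y, x)
    return out[2:-2, 2:-2]

# Example: 1,000,000 photons in a single pixel is one multinomial call.
weights = np.full((5, 5), 1 / 25)
img = np.zeros((32, 32), dtype=int)
img[16, 16] = 1_000_000
electrons = photons_to_electrons(img, weights)
```

Since the number of multinomial calls now scales with the number of occupied pixels rather than the number of photons, sparse high-intensity images benefit the most.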