Harvard-Neutrino / prometheus

GNU Lesser General Public License v2.1

Why photon propagation simulation does not grow linearly with neutrino energy #26

Open Adiolph opened 1 year ago

Adiolph commented 1 year ago

Hi, thanks for bringing this amazing project to the community!

I'm curious about the simulation speed specifically in the photon propagation part. I noticed in Figures 6 and 7 of the paper that the execution time doesn't appear to increase linearly as the neutrino energy increases. This is especially relevant for simulating extremely high energy events beyond 10 PeV.

I was wondering if you could share some insights on how you achieve this. Do you use PPC to perform a full ray tracing simulation initially, and then use a neural network to parameterize the "number of hits" and "arrival time distribution" later on? This would mean that the trained neural network wouldn't strongly depend on the deposited energy.

MeighenBergerS commented 1 year ago

Hi!

The answer differs depending on Fig. 6 (ice) or Fig. 7 (water).

For ice, we run PPC and don't parameterize hits. I don't believe we have tested the exact reason why we see this scaling. If I remember correctly, though, ray-tracing on CPUs vs GPUs scales differently. Should we run a similar test on CPUs, we would get a scaling behavior closer to linear.

For water, we don't use PPC. We use two codes called Fennel and Hyperion. In this case, we first generated full ray-tracing simulations for different positions relative to a detection module. Then we parametrized these into tables and re-used them by sampling. So even though Fig. 7 is generated using CPUs, the scaling behaves differently than expected because of this approach. It is close to the NN approach you mentioned. Note that water is a far simpler medium than (real) ice, so sampling is far more manageable.
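As a rough sketch of this table-based approach (every number and name below is made up for illustration, not taken from Fennel or Hyperion):

```python
import numpy as np

# Hypothetical table-based sampler, sketching the approach described above.
# In a real workflow the tables would be filled from full ray-tracing runs;
# all values here are toy numbers.
rng = np.random.default_rng(0)
distance_bins = np.array([10.0, 20.0, 40.0, 80.0])          # m, emitter-to-module
mean_hits_per_photon = np.array([1e-2, 3e-3, 8e-4, 1e-4])   # toy detection probabilities
# Toy "arrival time" tables, one per distance bin (ns).
time_tables = [rng.exponential(scale=5.0 * d, size=1000) for d in distance_bins]

def sample_hits(n_photons, distance):
    """Sample a hit count and arrival times from the tables, with no ray tracing."""
    i = min(np.searchsorted(distance_bins, distance), len(distance_bins) - 1)
    n_hits = rng.poisson(n_photons * mean_hits_per_photon[i])
    times = rng.choice(time_tables[i], size=n_hits)
    return n_hits, times

# Cost scales with the number of *hits*, not the number of photons.
n_hits, times = sample_hits(n_photons=10**7, distance=25.0)
```

The point is that the expensive O(photons) ray tracing is paid once when the tables are built; each event afterwards only pays for sampling, which scales with the number of detected hits.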

Hope this clears things up!

Adiolph commented 1 year ago

Thanks! That makes sense for the water case. A parametrization method like that can greatly reduce the run time.

However, I'm not as confident about the ice case. For instance, suppose an event deposits 100 TeV of energy and generates approximately 2e10 photons. That number far exceeds the number of concurrent threads on a GPU (roughly 100 SMs times 32 threads per SM), so each thread has to process many photons, which means the run time should be roughly proportional to the deposited energy.
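To put numbers on it (the thread count is just the rough figure above, not any specific GPU):

```python
# Back-of-envelope: photons per GPU thread for a ~100 TeV cascade.
# Both numbers are the rough figures from the text, not measured values.
n_photons = 2e10          # photons from a ~100 TeV energy deposition
n_threads = 100 * 32      # ~100 SMs x 32 threads per SM
photons_per_thread = n_photons / n_threads
print(f"{photons_per_thread:,.0f} photons per thread")
```

With millions of photons per thread, the per-photon work should dominate once the problem is large enough.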

I did notice that in the PPC paper, the simulation speed was approximately 1 ns per photon (Figure 7). For energies below 100 TeV, the actual photon propagation time may be small compared to other parts (perhaps data transfer between CPU and GPU, or GPU kernel initialization). Do you think it's likely that the simulation time will scale linearly as the energy goes beyond 100 TeV?
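A toy timing model makes this concrete (the overhead and photon yield below are assumptions for illustration, not measurements from PPC):

```python
def sim_time_s(energy_tev, overhead_s=10.0, photons_per_tev=2e8, ns_per_photon=1.0):
    """Toy model: a fixed setup cost (CPU-GPU transfer, kernel launch) plus
    ~1 ns per propagated photon. All parameter values are illustrative."""
    n_photons = energy_tev * photons_per_tev
    return overhead_s + n_photons * ns_per_photon * 1e-9

# At low energies the fixed overhead dominates and the curve looks flat;
# well above that, the linear photon term takes over.
for e_tev in (1, 100, 10_000):
    print(e_tev, "TeV ->", round(sim_time_s(e_tev), 1), "s")
```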

Adiolph commented 1 year ago

I was wondering if you have any plots or references that compare the photon parameterization results with the actual photon propagation results? This is the part that I'm particularly interested in, as it could potentially liberate photon propagation simulation from GPUs (which are more in demand than ever now).

MeighenBergerS commented 1 year ago

Hi!

Yes, the simulation time increases roughly linearly at higher energies. The plots are misleading in that regard since they use such an extreme log scale.

I don't have any on hand, but I'd be glad to generate some. In the case of water, the simulation is far simpler since the scattering length vs. attenuation length is roughly 150 m vs. 30 m. So most photons that scatter will be attenuated before reaching a detection module. This usually means one can cut all photons that scatter more than 3 times (a switch in the code), which makes it far less expensive to model individual photons than in ice (where scattering is ~30 m and attenuation ~150 m). Sampling analytical distributions already gets you very close to what you get from MC simulations, especially at high energies, where the statistics are large enough.
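As a rough illustration of why the 3-scatter cut is cheap in water, here is a toy Monte Carlo (the 150 m / 30 m lengths are the rough values above; everything else is a deliberately crude sketch, not code from Hyperion):

```python
import numpy as np

# Toy Monte Carlo: in water the scattering length (~150 m) far exceeds the
# attenuation length (~30 m), so a photon is usually absorbed before it can
# scatter more than a few times. Exponential free paths, no geometry.
rng = np.random.default_rng(1)
SCATTER_LEN = 150.0  # m
ATTEN_LEN = 30.0     # m

def scatters_before_absorption(n_photons):
    """Count, per photon, how many scatters occur before absorption."""
    counts = np.zeros(n_photons, dtype=int)
    alive = np.ones(n_photons, dtype=bool)
    while alive.any():
        idx = np.flatnonzero(alive)
        d_scat = rng.exponential(SCATTER_LEN, size=idx.size)
        d_abs = rng.exponential(ATTEN_LEN, size=idx.size)
        scattered = d_scat < d_abs       # scatter happens before absorption
        counts[idx[scattered]] += 1      # survivors scatter and continue
        alive[idx[~scattered]] = False   # the rest are absorbed
    return counts

counts = scatters_before_absorption(100_000)
# P(scatter before absorption) = 30/(30+150) = 1/6, so >3 scatters is ~(1/6)^4.
print("fraction with >3 scatters:", (counts > 3).mean())
```

Only a fraction of order 1e-3 of photons survive past three scatters, so dropping them changes almost nothing while bounding the per-photon work.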

Sillyringo commented 11 months ago

Hi!


It seems that water is not mentioned in the PPC documentation, so I am wondering whether it is possible to perform GPU ray tracing in water with it?