The generation of random NSB p.e.'s to be added to waveforms in the R0 to DL1 stage of MC is currently done "on the fly" for every event & pixel. That takes a significant fraction of the total CPU time in the case of high NSB: for 16 * dark we observed roughly a factor of 6 slower processing.
One could instead generate, at the beginning of the r0_to_dl1 execution, an in-memory pool of a few thousand waveforms containing different realizations of the additional NSB, and then simply pick one at random for every pixel. In the proton simulations we have ~800 events, i.e. ~800 * 1855 noise waveforms to produce per file; if we instead simulate, say, 10 * 1855 waveforms and use them shuffled, that would be roughly a factor of 80 less effort spent on noise waveform production.
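A minimal sketch of the idea is below, assuming a simple toy model for the NSB injection; the function and variable names (`simulate_nsb_waveform`, `add_pool_noise`, `POOL_SIZE`, the NSB rate, the single-p.e. amplitude) are illustrative placeholders, not the actual r0_to_dl1 code:

```python
import numpy as np

N_PIXELS = 1855             # LST camera pixels
N_SAMPLES = 40              # readout samples per waveform (assumed)
POOL_SIZE = 10 * N_PIXELS   # e.g. 10x the camera size, as suggested above

rng = np.random.default_rng(seed=42)


def simulate_nsb_waveform(nsb_rate_per_sample, n_samples=N_SAMPLES):
    """Toy stand-in for the existing (expensive) on-the-fly NSB p.e. injection:
    returns the additional signal from random NSB photoelectrons."""
    waveform = np.zeros(n_samples)
    # Poisson-distributed number of NSB p.e.'s, each placed at a random sample,
    # with a toy single-p.e. amplitude of 1 ADC count.
    n_pe = rng.poisson(nsb_rate_per_sample * n_samples)
    positions = rng.integers(0, n_samples, size=n_pe)
    np.add.at(waveform, positions, 1.0)
    return waveform


# 1) Build the pool once, at the start of the r0_to_dl1 execution:
noise_pool = np.array(
    [simulate_nsb_waveform(nsb_rate_per_sample=0.1) for _ in range(POOL_SIZE)]
)


# 2) For every event, draw one pool entry per pixel instead of re-simulating:
def add_pool_noise(event_waveforms):
    """event_waveforms: array of shape (N_PIXELS, N_SAMPLES)."""
    indices = rng.integers(0, POOL_SIZE, size=N_PIXELS)
    return event_waveforms + noise_pool[indices]
```

The expensive simulation is then paid only POOL_SIZE times per job rather than once per event and pixel; the per-event cost reduces to random indexing and an array addition.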