Closed: moralejo closed this issue 2 months ago
This sounds more like all events are kept in memory rather than a problem with the streams. "Streams" here just means the four parallel streams, right? Not subruns?
Memory usage increases steadily during execution, and for runs in which many pixels are kept in R0V (e.g. due to a car flash) it may go above 10 GB...
This also happens for the gain selection process (with lstchain_r0_to_r0g). Sometimes jobs use well over 10 GB, so they have to be re-run manually.
Ok, I think this might be a memory leak in protozfits then. Can you open an issue there? I will have a look tomorrow.
I released protozfits 2.5.1, which should fix this memory leak. Memory usage should be stable after the first tile of data has been written.
https://pypi.org/project/protozfits/2.5.1/
The conda-forge package will be there shortly.
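To confirm a job is actually running with the fixed release, one can check the installed version at runtime. This is a minimal sketch; the `version_tuple` and `has_fixed_protozfits` helpers are illustrative, not part of protozfits itself:

```python
from importlib.metadata import version, PackageNotFoundError


def version_tuple(v: str) -> tuple:
    """Parse a dotted version string like '2.5.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))


def has_fixed_protozfits(minimum: str = "2.5.1") -> bool:
    """Return True if the installed protozfits is at least `minimum`."""
    try:
        installed = version("protozfits")
    except PackageNotFoundError:
        return False
    return version_tuple(installed) >= version_tuple(minimum)


print(version_tuple("2.5.1") >= version_tuple("2.5.0"))  # True
```

Note that plain tuple comparison only works for purely numeric versions; for pre-release tags one would use `packaging.version.Version` instead.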
https://github.com/cta-observatory/cta-lstchain/blob/21d8f7ebd13bd677daeea232ef938e168edc6e48/lstchain/scripts/lstchain_r0g_to_r0v.py#L168-L169
@maxnoe, any idea how we could empty the memory after processing each of the streams?
It should be possible to free whatever was needed for the previous stream when a new one is processed.
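A rough sketch of that idea (not the actual lstchain code; `process_stream`, the placeholder payload, and the stream count are hypothetical stand-ins): keep all per-stream buffers scoped inside a function, so they become unreachable once the function returns, and trigger an explicit collection before the next stream is opened:

```python
import gc


def process_stream(stream_id: int) -> int:
    """Hypothetical stand-in for reading and writing one stream's events.

    All per-stream buffers live only inside this function, so they are
    released when it returns.
    """
    events = [bytearray(1024) for _ in range(100)]  # placeholder payload
    return len(events)


n_events_total = 0
for stream_id in range(4):  # the four parallel streams
    n_events_total += process_stream(stream_id)
    # Explicitly collect before opening the next stream, so any lingering
    # reference cycles are freed and peak memory stays near one stream's worth.
    gc.collect()

print(n_events_total)  # 400
```

This only helps if the leak is on the Python side (objects kept alive by references or cycles); a leak inside a C++ extension, as turned out to be the case here, has to be fixed in the extension itself.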