Open kslong opened 2 months ago
@jhmatthews is correct. The photons are shared among the threads, because NPHOT is adjusted. We therefore might want to consider re-implementing subcycles in situations where the memory is dominated by the photon bundles.
I was also surprised by the memory usage, but it really does get into gigabytes once you have >1e7 photons per thread, and I had not particularly noticed this before. So yes, in the longer term it would probably be sensible to have subcycles, and to think about whether there is any info in the structure that isn't needed/used.
(If we can live with this for now, I'd suggest we wait until after code release for this, though.)
Agreed.
I am running a single-cell model with a large number (1e9) of photons on multiple cores. Each core is taking 5.9 GB, which seems excessive.
I think the issue is that we seem to allocate memory for all 1e9 photons, even though with 24 cores each core should need fewer than 5e7.
@Edward-RSE @jhmatthews - Am I right about this, and is it something we should fix?
As an aside, at one point in the past, python had subcycles for generating photons, where for example we generated 1e6 photons in each subcycle to get to a total of 1e7 photons before calculating the ionization. We got away from that as memory became cheaper, but that's an approach we could go back to if necessary.