Closed: skuschel closed this issue 6 years ago.
Maybe it is sufficient to reduce the cache's keepalive time. It is set in datahandling.py around line 100: `fftw_cache.set_keepalive_time(3600)`. The default value was quite small; I just increased it because I guessed it might be a good idea. Maybe that guess was wrong...
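For reference, the interfaces cache is controlled through `pyfftw.interfaces.cache`; a minimal sketch of shortening the keepalive time (the `10` is an arbitrary example value, not a recommendation):

```python
import numpy as np
import pyfftw.interfaces.cache
import pyfftw.interfaces.numpy_fft as npfft

# Enable the interfaces cache, but release cached PyFFTW objects (and the
# array buffers they hold) shortly after the last transform instead of
# keeping them alive for an hour.
pyfftw.interfaces.cache.enable()
pyfftw.interfaces.cache.set_keepalive_time(10)  # seconds

field = np.random.rand(1024, 1024)
spectrum = npfft.fft2(field)
```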
It seems that the interface cache of pyfftw caches the PyFFTW objects and, along with them, all of the transformed data. It is designed to reduce the Python overhead of creating a new PyFFTW object:
https://hgomersall.github.io/pyFFTW/pyfftw/interfaces/interfaces.html#pyfftw.interfaces.cache.enable
To me this seems beneficial only when multiple FFTs with identical grid numbers and data type are performed, which is a VERY specialized use case that saves VERY little time on each transformation. That's why the default keepalive time is set to 0.1 s.
It seems from the documentation that the planner wisdom is cached anyway (which is the big time saver). However, it may be a good idea to check that first.
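One way to check this could be to time two identical transforms with the interfaces cache switched off: if the second call is already fast, the speedup comes from FFTW's planner wisdom rather than from cached PyFFTW objects. A sketch (timing numbers will of course vary):

```python
import time
import numpy as np
import pyfftw.interfaces.cache
import pyfftw.interfaces.numpy_fft as npfft

# Keep the interfaces cache off, so only FFTW's accumulated wisdom can help.
pyfftw.interfaces.cache.disable()

a = np.random.rand(2048, 2048)

t0 = time.time()
npfft.fft2(a, planner_effort='FFTW_MEASURE')  # plans from scratch
t1 = time.time()
npfft.fft2(a, planner_effort='FFTW_MEASURE')  # should reuse the wisdom
t2 = time.time()

print('first: {:.3f} s, second: {:.3f} s'.format(t1 - t0, t2 - t1))
```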
Using `experimental.kspace_propagate_adaptive` continuously increases memory usage while propagating over multiple steps, up to more than 60 GB of memory. It turned out that disabling the pyfftw interface cache before the propagation loop fixed the problem, and the memory consumption stayed constant at around 5 GB.
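The memory growth is consistent with the interfaces cache holding one PyFFTW object (plus its buffers) per distinct array shape. A toy stand-in for the adaptive loop, assuming the grid size changes from step to step (the shapes and the loop are made up for illustration; this is not postpic code):

```python
import numpy as np
import pyfftw.interfaces.cache
import pyfftw.interfaces.numpy_fft as npfft

# Without this line, every differently shaped array transformed below leaves
# a cached PyFFTW object (and its data buffers) behind for the full keepalive
# time, so memory grows with the number of steps.
pyfftw.interfaces.cache.disable()

for n in range(256, 512, 16):          # stand-in for an adaptive grid
    field = np.random.rand(n, n)
    spectrum = npfft.fft2(field)
```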