anotherjoshsmith opened this issue 6 years ago
Looks like this problem can be solved using dask. Where I was previously operating on multiple arrays with 600 million elements in memory at once, I can now use dask arrays to break those calculations into chunks. I'm also getting better CPU utilization with dask :)
The simple changes required to convert the calculations from numpy to dask.array can be found in db48b8b.
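For anyone hitting the same wall, here is a minimal sketch of the kind of numpy-to-dask.array swap involved. The grid size and hill parameters below are made-up placeholders rather than the real `reconstruct_bias_potential` code (see db48b8b for the actual diff):

```python
import numpy as np
import dask.array as da

# Hypothetical sketch: summing Gaussian hills onto a large CV grid.
# With plain numpy the full grid lives in memory at once; a chunked dask
# array evaluates the same expression chunk by chunk and spreads the
# work across cores.

n_grid = 10_000_000  # production arrays mentioned above are ~600 million elements
grid = da.linspace(-np.pi, np.pi, n_grid, chunks=1_000_000)

# made-up hill parameters, purely for illustration
centers = np.array([0.0, 1.0, -1.0])
sigmas = np.array([0.10, 0.20, 0.10])
heights = np.array([1.2, 0.8, 1.0])

# identical numpy-style expression, but lazy: nothing is evaluated yet
bias = da.zeros_like(grid)
for c, s, h in zip(centers, sigmas, heights):
    bias = bias + h * da.exp(-((grid - c) ** 2) / (2 * s ** 2))

# computation only happens here, one chunk at a time
print(bias.min().compute())
```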
The memory usage currently blows up for large PBMetaD projects when I try reweighting. It looks like my naive implementation of the `reconstruct_bias_potential` method for the `PBMetaDProject` class is the culprit. Memory management is the major bottleneck preventing the use of plumitas for production-scale projects right now, so I need to figure this part out as soon as possible. A few options to explore are: