PhasesResearchLab / ESPEI

Fitting thermodynamic models with pycalphad - https://doi.org/10.1557/mrc.2019.59
http://espei.org
MIT License

Memory leak when running MCMC in parallel #230

Open bocklund opened 2 years ago

bocklund commented 2 years ago

Due to a known memory leak when instantiating subclasses of SymEngine's Symbol objects (SymEngine is one of our upstream dependencies; see https://github.com/symengine/symengine.py/issues/379), running ESPEI with parallelization will cause memory to grow in each worker.

Only running in parallel triggers significant memory growth, because parallel runs use the pickle library to serialize and deserialize Symbol objects, creating new instances that can't be freed. When running without parallelization (mcmc.scheduler: null), no new symbols are created.
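
For context, here is an illustrative sketch of the underlying leak (not code from ESPEI or the upstream issue): instantiating a subclass of SymEngine's Symbol in a loop grows resident memory because the instances are never freed, and pickle round-trips in the parallel workers trigger exactly this kind of repeated instantiation. It assumes symengine and psutil are installed; psutil is only used to report RSS.

import os
import psutil  # assumption: used only to report resident memory
from symengine import Symbol

class MySymbol(Symbol):
    """Stand-in for pycalphad's Symbol subclasses (e.g. StateVariable)."""

proc = psutil.Process(os.getpid())
for batch in range(5):
    for _ in range(200_000):
        MySymbol("X")  # instances are never freed; see symengine/symengine.py#379
    rss_mb = proc.memory_info().rss / 1e6
    print(f"RSS after batch {batch}: {rss_mb:.1f} MB")  # grows every batch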

Until https://github.com/symengine/symengine.py/issues/379 is fixed, some mitigation strategies to avoid running out of memory are:

- Run without parallelization (mcmc.scheduler: null), at the cost of performance, since no new symbols are created in that case.
- Periodically restart the dask workers so the memory leaked in the worker processes is reclaimed (see the patch in the comment below).

bocklund commented 2 years ago

Code to restart the workers would look something like the following patch to opt_mcmc.py. This hard-codes the restart interval, but it could be added as a parameter and propagated through:

diff --git a/espei/optimizers/opt_mcmc.py b/espei/optimizers/opt_mcmc.py
index 4d08ccf..c167e9e 100644
--- a/espei/optimizers/opt_mcmc.py
+++ b/espei/optimizers/opt_mcmc.py
@@ -167,12 +167,19 @@ class EmceeOptimizer(OptimizerBase):
     def do_sampling(self, chains, iterations):
         progbar_width = 30
         _log.info('Running MCMC for %s iterations.', iterations)
+        scheduler_restart_interval = 50  # iterations
         try:
             for i, result in enumerate(self.sampler.sample(chains, iterations=iterations)):
                 # progress bar
                 if (i + 1) % self.save_interval == 0:
                     self.save_sampler_state()
                     _log.trace('Acceptance ratios for parameters: %s', self.sampler.acceptance_fraction)
+                if (self.scheduler is not None) and ((i + 1) % scheduler_restart_interval == 0):
+                    # Note: resetting the scheduler will reset the logger settings for the workers
+                    # You'd typically want to run the following, but the verbosity/filename are out of scope
+                    # self.scheduler.run(espei.logger.config_logger, verbosity=log_verbosity, filename=log_filename)
+                    self.scheduler.restart()
+                    _log.info(f"Restarting scheduler (interval = {scheduler_restart_interval} iterations)")
                 n = int((progbar_width) * float(i + 1) / iterations)
                 _log.info("\r[%s%s] (%d of %d)\n", '#' * n, ' ' * (progbar_width - n), i + 1, iterations)
         except KeyboardInterrupt:
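
As a self-contained illustration of the same pattern (a sketch only, not ESPEI code): dask's Client.restart() kills and relaunches every worker, which reclaims whatever memory the workers have leaked. In the sketch below, do_work is a placeholder for the real per-iteration work.

from dask.distributed import Client, LocalCluster

def do_work(i):
    # Placeholder for the real per-iteration work (e.g. likelihood evaluations).
    return i * i

def run_with_restarts(n_iterations, restart_interval=50):
    cluster = LocalCluster(n_workers=2, threads_per_worker=1)
    client = Client(cluster)
    try:
        for i in range(n_iterations):
            client.gather(client.map(do_work, range(8)))
            if (i + 1) % restart_interval == 0:
                # Restarting the workers clears any memory they have leaked,
                # at the cost of re-sending tasks/data to the fresh workers.
                client.restart()
    finally:
        client.close()
        cluster.close()

if __name__ == "__main__":
    run_with_restarts(100)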
bocklund commented 1 month ago

A potential mitigation is to cache symbols so new instances aren't created:

from symengine import Symbol  # StateVariable subclasses SymEngine's Symbol
from pycalphad.core.cache import lru_cache

class StateVariable(Symbol):
    # Cache instances by constructor arguments so that repeated
    # (de)serialization reuses existing Symbol objects instead of
    # instantiating new ones that can never be freed.
    @lru_cache(maxsize=10000)
    def __new__(cls, *args, **kwargs):
        return super().__new__(cls, *args, **kwargs)

On recent development branches with parallelization in ESPEI turned off, memory usage appears constant (at least to MB resolution) once the cache is warm. Without this mitigation, I'm seeing RAM usage grow by about 1 MB/s. With parallelization on, memory usage grows in both cases, but less so with this mitigation applied. Perhaps warming the cache with a run-through of the likelihood function before the ESPEI run forks the process (spinning up the dask workers) would resolve it completely?
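
A rough sketch of that cache-warming idea (illustrative only, using a plain functools.lru_cache on a factory function rather than pycalphad's cache on __new__): objects cached in the parent before a fork are inherited by the children, so the workers can reuse them instead of instantiating new Symbols. This assumes a POSIX "fork" start method.

import multiprocessing as mp
from functools import lru_cache
from symengine import Symbol

@lru_cache(maxsize=None)
def cached_symbol(name):
    # Stand-in for the cached StateVariable.__new__ above.
    return Symbol(name)

def count_new_instantiations(names):
    # Runs in a forked worker; each cache miss means a new Symbol was created.
    before = cached_symbol.cache_info().misses
    for name in names:
        cached_symbol(name)
    return cached_symbol.cache_info().misses - before

if __name__ == "__main__":
    names = [f"VV{i:04d}" for i in range(1000)]
    for name in names:              # warm the cache in the parent...
        cached_symbol(name)
    with mp.get_context("fork").Pool(2) as pool:
        # ...so the forked workers report zero new instantiations: [0, 0]
        print(pool.map(count_new_instantiations, [names, names]))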

bocklund commented 1 month ago

I did some memory benchmarking for running ESPEI in parallel. Without the fix, both the parent and child processes grow in memory usage: the main process hit about 8 GB after some number of steps, and the workers hit 4 GB each. After applying the fix above, the parent process still gets up to 8 GB, but the children stay relatively lean at 350 MB and grow much more slowly (after the same number of iterations). Applying another modification, where we run predict before creating the dask LocalCluster (which forks the processes), seemed to make no impact on memory usage compared to only using the symbol cache.

It's unclear why the main process leaks about the same amount when run in parallel with or without the fix, but at least there is evidence that this fix meaningfully mitigates the memory leak from symbol creation.