acerbilab / pyvbmc

PyVBMC: Variational Bayesian Monte Carlo algorithm for posterior and model inference in Python
https://acerbilab.github.io/pyvbmc/
BSD 3-Clause "New" or "Revised" License

Resume optimization process #110

Closed pipme closed 1 year ago

pipme commented 1 year ago
Bobby-Huggins commented 1 year ago

Thanks for this @pipme! Everything looks correct to me; I just added a couple of things:

  1. I pushed a small change here to allow users to continue an optimization that has already terminated (e.g., with a larger function evaluation budget), as opposed to one that was pickled while still in progress (see the first sketch after this list).
  2. I also moved the call to logging.basicConfig() back to the top of vbmc.__init__(). I know it makes more sense in vbmc._init_logger(), but if any warnings are issued before that (e.g., when validating options), Python calls basicConfig() anyway and we lose our custom formatting (see the second sketch after this list).
  3. I added a test to make sure that stopping and resuming optimization gives similar results to running all the way through.
    • For some reason I couldn't get identical results, even with the same seed. Not sure why, though I've noticed this before in casual testing. But I think this test is good enough for its purpose.
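For anyone reading along, here is a minimal sketch of the pause-and-resume workflow from point 1, assuming a toy Gaussian target, the standard VBMC constructor argument order, and the "max_fun_evals" option; the exact save/restore mechanism added in this PR may differ from plain pickling, and the way the budget is raised after reloading is an assumption.

```python
# Hypothetical sketch (not the exact code from this PR): pause an
# optimization by pickling the VBMC object, then reload it and continue
# with a larger function evaluation budget.
import pickle
import numpy as np
from pyvbmc import VBMC

def log_joint(theta):
    # Toy unnormalized log-density: standard 2-D Gaussian.
    return -0.5 * np.sum(theta**2)

D = 2
vbmc = VBMC(
    log_joint,
    np.zeros((1, D)),         # x0
    np.full((1, D), -10.0),   # lower bounds
    np.full((1, D), 10.0),    # upper bounds
    np.full((1, D), -3.0),    # plausible lower bounds
    np.full((1, D), 3.0),     # plausible upper bounds
    {"max_fun_evals": 30},    # deliberately small first budget
)
vp, results = vbmc.optimize()  # terminates once the budget is exhausted

# Persist the terminated object and reload it later.
with open("vbmc_state.pkl", "wb") as f:
    pickle.dump(vbmc, f)
with open("vbmc_state.pkl", "rb") as f:
    vbmc = pickle.load(f)

# Raise the budget and keep going (assumed option name and update style;
# the interface introduced in this PR may differ).
vbmc.options["max_fun_evals"] = 100
vp, results = vbmc.optimize()
```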

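To illustrate point 2, a small standalone Python snippet (not PyVBMC code) showing why the ordering matters: the first warning routed through the root logger configures it with the default format, and later basicConfig() calls are silently ignored unless force=True is passed.

```python
import logging

# The first module-level log call implicitly configures the root logger
# with a default handler and format ("WARNING:root:...") if none exists.
logging.warning("warning issued while validating options")

# This call now has no effect, because the root logger already has a
# handler; the custom format is lost.
logging.basicConfig(format="%(message)s")
logging.warning("still printed with the default prefix")

# Calling basicConfig() first (as at the top of vbmc.__init__()), or
# passing force=True on Python 3.8+, avoids this.
```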
If that makes sense to you then I think it's good to merge.

pipme commented 1 year ago
> I added a test to make sure that stopping and resuming optimization gives similar results to running all the way through.
>
>   • For some reason I couldn't get identical results, even with the same seed. Not sure why, though I've noticed this before in casual testing. But I think this test is good enough for its purpose.

There are two reasons:

  1. Final boosting affects the numpy random state.
  2. The random seed should also be set to the same value before we call VBMC.__init__(), since the initialization of the variational posterior also affects the numpy random state (see the sketch after this list).
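A rough, illustrative sketch of what those two points imply (names and structure are mine, not the actual test): for a full run and a stop-and-resume run to match exactly, the global numpy random state must already agree when VBMC.__init__() draws the initial variational posterior, not only when optimize() (and final boosting) runs.

```python
import numpy as np
from pyvbmc import VBMC

def make_vbmc(seed):
    # Seeding only right before optimize() is not enough: __init__()
    # already consumes numpy's global random state when it initializes
    # the variational posterior, so the seed must be set before it.
    np.random.seed(seed)
    D = 2
    return VBMC(
        lambda t: -0.5 * np.sum(t**2),  # toy log-density
        np.zeros((1, D)),               # x0
        np.full((1, D), -10.0),         # lower bounds
        np.full((1, D), 10.0),          # upper bounds
        np.full((1, D), -3.0),          # plausible lower bounds
        np.full((1, D), 3.0),           # plausible upper bounds
    )

# Both runs start from the same random state at __init__() ...
vbmc_a = make_vbmc(seed=0)
vbmc_b = make_vbmc(seed=0)
# ... and final boosting inside optimize() also draws from the global
# state, so any divergence before that point breaks exact equality.
vp_a, _ = vbmc_a.optimize()
vp_b, _ = vbmc_b.optimize()
```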

I made another pull request, #111, with the fix.