acerbilab / pyvbmc

PyVBMC: Variational Bayesian Monte Carlo algorithm for posterior and model inference in Python
https://acerbilab.github.io/pyvbmc/
BSD 3-Clause "New" or "Revised" License

`FunctionLogger.Xn` doesn't represent number of training inputs #74

Closed pipme closed 2 years ago

pipme commented 2 years ago

In Python, `FunctionLogger` initializes `self.Xn: int = -1` (the index of the last filled entry), while in MATLAB the counter starts from 0 due to the difference in indexing. This causes a problem when we want to get the current number of training inputs via

 self.optim_state["N"] = self.function_logger.Xn

There are two solutions: either we replace the above with `self.optim_state["N"] = self.function_logger.Xn + 1`, or we set `self.Xn: int = 0` and modify the related indexing in `FunctionLogger`.
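To make the off-by-one concrete, here is a minimal, hypothetical sketch (simplified stand-in, not PyVBMC's actual `FunctionLogger` implementation) showing why `Xn` alone under-counts the training inputs when it is the index of the last filled entry:

```python
# Hypothetical, stripped-down FunctionLogger illustrating the indexing issue.
# In the real class, Xn tracks the index of the most recently filled entry,
# so it starts at -1 when no points have been logged yet.

class FunctionLogger:
    def __init__(self):
        self.Xn = -1       # index of last filled entry; -1 means "empty"
        self.x_orig = []   # logged input points (simplified storage)

    def add(self, x):
        self.x_orig.append(x)
        self.Xn += 1       # advance to the index of the newly filled entry
        return self.Xn


logger = FunctionLogger()
for x in [0.1, 0.5, 0.9]:
    logger.add(x)

# The bug: after logging 3 points, Xn is 2, not 3.
assert logger.Xn == 2

# The proposed fix: the number of training inputs is Xn + 1.
n_train = logger.Xn + 1
assert n_train == len(logger.x_orig) == 3
```

Under the first solution, the caller applies the `+ 1` when reading `Xn`; under the second, the shift would move inside `FunctionLogger` itself.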

What do you think? @Bobby-Huggins @lacerbi

Bobby-Huggins commented 2 years ago

Hi Chengkun, thanks for catching this! My instinct is that `function_logger.Xn` should remain the index of the most recently evaluated point, and that we should correct this issue by setting `self.optim_state["N"] = self.function_logger.Xn + 1`, though I can see an argument either way.

I'll correct this on the branch nan-bug-function-logger and make sure it works, but it should be a simple fix. Can I ask how you noticed the issue, so I can keep an eye out for similar subtle bugs?

pipme commented 2 years ago

Hi Bobby, thanks for fixing that. Yes, it's probably better to simply add 1 to `Xn`. I noticed it when I tried to implement something that depends on the number of training points. Later I can add another variable like `self.n_train = self.Xn + 1` to `FunctionLogger` if needed, but for now I think this simple fix is enough : )

pipme commented 2 years ago

Resolved in #75 .