To answer your questions: 1) The reason we used a forked version of george is that we developed our own kernels for Fabolas and MTBO. The main repo has changed quite a bit and I do not know how much overhead it would take to adjust to it. 2) While I don't think the implementation is tightly integrated with george, RoBO is actually not maintained anymore. I suggest using emukit, which has a more modular structure and contains more or less the same functionality as RoBO.
Hi Aaron!
I was planning to use emukit for PROFET (nice work by the way!), so I'd be happy to use it as well for BO.
Thanks!
Hi there!
I've been trying to pass an RNG to get reproducible results with the default BO with GP and MCMC, but there seems to be an issue: the suggestions vary even though the RNG is in exactly the same state. Here's a minimal example based on the tutorials.
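Roughly along these lines (a sketch only; the `robo.fmin.bayesian_optimization` call and the test function follow the RoBO README, and `result['x_opt']` is assumed to be the incumbent key):

```python
import numpy as np
from robo.fmin import bayesian_optimization


def objective(x):
    # 1-d test function from the README, standing in for the tutorial objective
    return np.sin(3 * x[0]) * 4 * (x[0] - 1) * (x[0] + 2)


lower = np.array([0.0])
upper = np.array([6.0])

incumbents = []
for _ in range(2):
    # identically seeded RNG, so both runs start from the same state
    rng = np.random.RandomState(1)
    result = bayesian_optimization(objective, lower, upper,
                                   model_type='gp_mcmc',
                                   n_init=3, num_iterations=4, rng=rng)
    incumbents.append(result['x_opt'])

# one would expect identical incumbents here, but the two runs differ
print(incumbents)
```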
If I set `n_init=num_iterations=3`, the results are all the same, so the initial design function seems to be fine. But with `n_init=3` and `num_iterations=4` I get widely different first predictions even though the RNGs are in the same state. The issue is present with `model_type='gp'` as well.
After digging in the library I realized `george` does not take any seed as an argument and relies on the global RNG state. See for example here. Adding `np.random.seed(1)` in the loop does solve the issue for both GP and GP_MCMC, but I would rather avoid global seeding.
So I have two related questions:
1) Why is a forked version of `george` used? Would it be easy to port any modifications to the main repo?
2) Is the implementation tightly integrated with `george`, or would switching to a supported backend be fairly simple?