Closed alecmdunton closed 1 year ago
Something weird is happening with bayes-opt
in MPI mode. We're somehow triggering an internal error?
The problem appears to be related to one of the recent releases; they must have added validation that conflicts with how we are using the library. We are getting the error:
Traceback (most recent call last):
File "/home/runner/work/MuyGPyS/MuyGPyS/tests/mpi_correctness.py", line 1070, in test_bayes_optimize
model_m = bayes_optimize_m(
File "/home/runner/work/MuyGPyS/MuyGPyS/MuyGPyS/_src/optimize/chassis/numpy.py", line 111, in _bayes_opt_optimize
optimizer.maximize(**maximize_kwargs)
File "/opt/hostedtoolcache/Python/3.10.9/x64/lib/python3.10/site-packages/bayes_opt/bayesian_optimization.py", line 311, in maximize
self.probe(x_probe, lazy=False)
File "/opt/hostedtoolcache/Python/3.10.9/x64/lib/python3.10/site-packages/bayes_opt/bayesian_optimization.py", line 208, in probe
self._space.probe(params)
File "/opt/hostedtoolcache/Python/3.10.9/x64/lib/python3.10/site-packages/bayes_opt/target_space.py", line 239, in probe
self.register(x, target)
File "/opt/hostedtoolcache/Python/3.10.9/x64/lib/python3.10/site-packages/bayes_opt/target_space.py", line 196, in register
raise NotUniqueError(f'Data point {x} is not unique. You can set "allow_duplicate_points=True" to '
bayes_opt.util.NotUniqueError: Data point [0.1] is not unique. You can set "allow_duplicate_points=True" to avoid this error
It looks like we could simply add allow_duplicate_points=True
to our relevant invocation to avoid the issue, but it might be worth investigating first, because it seems like the optimizer is probing the same point multiple times?
I am glad this wasn't me - this error was confusing me a lot. I wonder why the optimizer is doing that... must be a really flat objective function?
I think that it is happening in a context where we are optimizing a garbage function just to verify that the computation runs without crashing. Probably the most disciplined thing to do would be to add the relevant kwarg to those calls in the test harness, not hard-code it into the wrapper.
Do you think you can do that?
Yep! I'll take care of it.
I think I need to edit the _src.optimize.chassis to deal with this. Currently optimizer_kwargs doesn't allow "allow_duplicate_points" as a kwarg:
optimizer_kwargs = {
    k: kwargs[k]
    for k in kwargs
    if k in {
        "random_state",
        "verbose",
        "bounds_transformer",
    }
}
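A minimal sketch of the change, assuming the fix is just to extend the allowed-key set so the test harness can opt in; the helper name here is hypothetical, not the actual chassis code.

```python
def _filter_optimizer_kwargs(kwargs):
    # Hypothetical sketch: same dict comprehension as the chassis snippet,
    # with "allow_duplicate_points" added to the allowed keys so callers
    # can forward it to BayesianOptimization.
    allowed = {
        "random_state",
        "verbose",
        "bounds_transformer",
        "allow_duplicate_points",  # newly accepted
    }
    return {k: kwargs[k] for k in kwargs if k in allowed}


# Unrecognized keys are still dropped; the new key passes through.
filtered = _filter_optimizer_kwargs(
    {"verbose": 0, "allow_duplicate_points": True, "unrelated": 1}
)
```
This keeps the default behavior unchanged (the error still fires unless a caller explicitly passes allow_duplicate_points=True), which matches the suggestion above to set the kwarg in the test harness rather than hard-code it in the wrapper.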
CI harness is running - MPI timing out on previous PR.