facebookresearch / alebo

Re-Examining Linear Embeddings for High-dimensional Bayesian Optimization

quickstart.ipynb running error #1

Open BrunoQin opened 3 years ago

BrunoQin commented 3 years ago

python: 3.7.9, ax-platform: 0.1.17

When I run the quickstart.ipynb, there is an error in iteration 6:

```
[ERROR 11-10 10:26:28] ax.service.managed_loop: Encountered exception during optimization:
Traceback (most recent call last):
  File "C:\Users\Bruno\Anaconda3\envs\python37\lib\site-packages\ax\service\managed_loop.py", line 170, in full_run
    self.run_trial()
  File "C:\Users\Bruno\Anaconda3\envs\python37\lib\site-packages\ax\service\managed_loop.py", line 148, in run_trial
    experiment=self.experiment
  File "C:\Users\Bruno\Anaconda3\envs\python37\lib\site-packages\ax\modelbridge\generation_strategy.py", line 384, in gen
    keywords=get_function_argument_names(model.gen),
  File "C:\Users\Bruno\Anaconda3\envs\python37\lib\site-packages\ax\modelbridge\base.py", line 627, in gen
    model_gen_options=model_gen_options,
  File "C:\Users\Bruno\Anaconda3\envs\python37\lib\site-packages\ax\modelbridge\array.py", line 226, in _gen
    target_fidelities=target_fidelities,
  File "C:\Users\Bruno\Anaconda3\envs\python37\lib\site-packages\ax\modelbridge\torch.py", line 211, in _model_gen
    target_fidelities=target_fidelities,
  File "C:\Users\Bruno\Anaconda3\envs\python37\lib\site-packages\ax\models\torch\alebo.py", line 660, in gen
    model_gen_options=model_gen_options,
  File "C:\Users\Bruno\Anaconda3\envs\python37\lib\site-packages\ax\models\torch\botorch.py", line 371, in gen
    **optimizer_options,
  File "C:\Users\Bruno\Anaconda3\envs\python37\lib\site-packages\ax\models\torch\alebo.py", line 490, in alebo_acqf_optimizer
    base_X_pending = acq_function.X_pending  # pyre-ignore
  File "C:\Users\Bruno\Anaconda3\envs\python37\lib\site-packages\torch\nn\modules\module.py", line 779, in __getattr__
    type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'ExpectedImprovement' object has no attribute 'X_pending'
```

Can you give me some advice?

ksehic commented 3 years ago

I have noticed the same bug, which seems to be related to torch and ALEBO. If you set the total number of optimization trials larger than the initial number of samples, the bug appears and crashes the optimization. I have not figured out how to solve it.

Also, is "the initial number of samples" related to the DoE (design of experiments) of BO?

Thanks, @bletham

EDIT: The bug I mentioned only occurs with d=1. As far as I can see, once I use a larger d it works, apart from the warning mentioned in the first comment: "UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed."

d=1 bug message:

```
  File "miniconda3/lib/python3.8/site-packages/botorch/optim/parameter_constraints.py", line 268, in _make_linear_constraints
    raise ValueError("indices must be at least one-dimensional")
ValueError: indices must be at least one-dimensional
```

bletham commented 3 years ago

@BrunoQin sorry for the slow reply, I hadn't noticed the issue until now. The problem you're running into is a bug that I fixed here: https://github.com/facebook/Ax/commit/a87a72d32e978775e21b13fbf23da5f1e625b10e. If you install the latest version of Ax (0.1.19), you will get the fix and it will work.

bletham commented 3 years ago

@ksehic It looks like recent versions of torch emit the warning you note about accessing the .grad of a non-leaf tensor. I'm not sure where it is coming from, but so far it seems to be innocuous. I haven't dug into it deeply yet, though.
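For reference, the warning itself is easy to reproduce independently of Ax/ALEBO; this is a minimal sketch (not the code path inside ALEBO, which is unknown here) of what torch warns about:

```python
import torch

# A leaf tensor participates in a computation; the intermediate result
# `y` is a non-leaf tensor, so autograd does not populate y.grad.
x = torch.ones(3, requires_grad=True)
y = x * 2          # non-leaf tensor
y.sum().backward()

print(x.grad)      # leaf tensor: gradient is populated
# Reading .grad on the non-leaf tensor triggers the UserWarning
# ("The .grad attribute of a Tensor that is not a leaf Tensor is being
# accessed...") and simply returns None.
print(y.grad)
```

If the gradient of an intermediate tensor is actually needed, calling `y.retain_grad()` before `backward()` makes torch keep it and silences the warning.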

The failure with d=1 is a bug. It happens while constructing the linear constraints, which involves constructing a (D, d) array. I'm guessing that with d=1, the (D, 1) array is somewhere being squeezed into a (D,) array, which is what breaks things. This will have to be tracked down and fixed. In the meantime, though, given the results in the paper on the significant benefits of using an embedding dimension higher than the true subspace dimension, I'm not sure how much sense it makes to actually use d=1. Even for a really tight evaluation budget, d=2 seems reasonable and would likely be a much better subspace.
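The shape collapse guessed at above is a common NumPy/torch pitfall. This is a hypothetical sketch (not ALEBO's actual constraint code) of how a (D, 1) array can silently lose its trailing axis, producing exactly the kind of scalar indices that botorch's `_make_linear_constraints` rejects with "indices must be at least one-dimensional":

```python
import numpy as np

D, d = 10, 1
B = np.ones((D, d))  # projection matrix; with d=1 it is (10, 1)

# Integer indexing drops the trailing axis...
col_int = B[:, 0]      # shape (10,)
# ...while slice indexing preserves it.
col_slice = B[:, 0:1]  # shape (10, 1)

print(col_int.shape)    # (10,)
print(col_slice.shape)  # (10, 1)

# Code downstream that expects rows of a 2-D array then receives 0-d
# scalars, which is the shape class that the botorch check
# "indices must be at least one-dimensional" is guarding against.
# A defensive fix is to restore the trailing axis:
fixed = np.atleast_2d(col_int).T
print(fixed.shape)      # (10, 1)
```

Using slices (`B[:, 0:1]`) or `np.atleast_2d` at the suspected call site is the usual way such d=1 corner cases get patched.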

ksehic commented 3 years ago

Thanks, @bletham it makes sense.