pytorch / botorch

Bayesian optimization in PyTorch
https://botorch.org/
MIT License

Add ability to mix batch initial conditions and internal IC generation #2610

Open CompRhys opened 3 weeks ago

CompRhys commented 3 weeks ago

Motivation

See https://github.com/pytorch/botorch/issues/2609

Have you read the Contributing Guidelines on pull requests?

Yes

Test Plan

Basic testing of the code is easy; the challenge is working out what the knock-on implications might be: will this break people's code?

Related PRs

https://github.com/facebook/Ax/pull/2938

CompRhys commented 1 week ago

Locally, tests are still failing with `FAILED test/optim/test_optimize.py::TestOptimizeAcqf::test_optimize_acqf_runs_given_batch_initial_conditions - RuntimeError: expected scalar type Long but found Float`, although I'm not really sure how I've caused that.

codecov[bot] commented 1 week ago

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.98%. Comparing base (5d37606) to head (b6f8b85).

Additional details and impacted files

```diff
@@           Coverage Diff           @@
##             main    #2610   +/-   ##
=======================================
  Coverage   99.98%   99.98%
=======================================
  Files         196      196
  Lines       17362    17395    +33
=======================================
+ Hits        17360    17393    +33
  Misses          2        2
```

☂️ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.



facebook-github-bot commented 1 week ago

@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

saitcakmak commented 1 week ago

Looks like the PR is out of sync with the main branch. Can you do another rebase, please?

CompRhys commented 6 days ago

In this PR as it stands, I was confused about the roles of q and raw_samples in the initial_batch_generator argument.

Yes, this PR doesn't achieve what I intended: it was written against raw_samples, but it should be written against num_restarts to match the original intent.
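To spell out the three sizes at play (standard optimize_acqf semantics; this PR only changes which of them user-supplied ICs count against):

```python
import torch

num_restarts, q, d = 5, 1, 2  # illustrative sizes
# batch_initial_conditions has shape num_restarts x q x d: it seeds the
# gradient-based restarts directly. raw_samples, by contrast, is the size
# of the heuristic pool that internal IC generation draws restarts from,
# and q is the number of candidates optimized jointly within each restart.
batch_initial_conditions = torch.rand(2, q, d)  # only 2 of 5 restarts given
n_to_generate = num_restarts - batch_initial_conditions.shape[0]  # 3 internal
```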

CompRhys commented 5 days ago

@saitcakmak I redid this whole thing after realizing that I had gotten confused and failed to implement what I originally intended: I followed raw_samples rather than num_restarts, and then got distracted focusing on the trees rather than the forest.

CompRhys commented 4 days ago

Apologies for the linting issues.

There are two solutions to the homotopy test failure: 1) Change the following linked lines to the snippet below, which tests that we prune and then generate new restarts: https://github.com/pytorch/botorch/blob/5d3760633cae6d76b10df57e9f8478557e4e1946/test/optim/test_homotopy.py#L237-L246


        for i in range(5):
            # assert that the restarts are repopulated at each step
            self.assertEqual(
                prune_candidates_mock.call_args_list[i][1]["candidates"].shape,
                torch.Size([4, 1]),
            )
            # assert that the candidates are pruned to just one candidate
            self.assertEqual(
                prune_candidates(**prune_candidates_mock.call_args_list[i][1]).shape,
                torch.Size([1, 1]),
            )

2) Set `shared_optimize_acqf_kwargs["raw_samples"] = None` at https://github.com/pytorch/botorch/blob/5d3760633cae6d76b10df57e9f8478557e4e1946/botorch/optim/optimize_homotopy.py#L194 to prevent the slots left by the pruned restarts from being filled with random designs in subsequent loops.

To preserve the current behaviour, the second option is better. In principle, though, the first option doesn't seem harmful (beyond there being no computational benefit to pruning) and may have upsides, even though random IC designs introduced later in the homotopy might not be as readily optimized to sparsity.
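As a concrete sketch of what option 2 amounts to (hypothetical; shared_optimize_acqf_kwargs is the kwargs dict built in optimize_homotopy.py per the link above):

```python
# Clearing raw_samples disables internal IC generation in the inner
# optimize_acqf calls, so restarts removed by pruning are not topped back
# up with random designs on later homotopy steps (current behaviour).
shared_optimize_acqf_kwargs["raw_samples"] = None
```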

saitcakmak commented 3 days ago

There is one remaining test failure, and a rebase seems to be necessary. LGTM otherwise.

facebook-github-bot commented 3 days ago

@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.