CompRhys opened 3 weeks ago
Locally, tests are still failing with FAILED test/optim/test_optimize.py::TestOptimizeAcqf::test_optimize_acqf_runs_given_batch_initial_conditions - RuntimeError: expected scalar type Long but found Float
although I'm not really sure how I've caused that.
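For context, this class of error usually means an integer-typed tensor met a float one inside a matmul-family op. A hypothetical minimal repro (illustrative only, not necessarily the code path this test exercises):

```python
import torch

# Hypothetical repro of this class of dtype mismatch: matmul-family ops
# require both operands to share a dtype, and when the first operand is
# Long the mismatch is reported against it (exact wording varies by
# PyTorch version).
x = torch.tensor([[1, 2]])  # integer literals default to torch.long
w = torch.rand(2, 3)        # torch.float32
torch.matmul(x, w)          # RuntimeError: expected scalar type Long but found Float

# The usual fix is to make the dtype explicit before the op:
torch.matmul(x.to(w.dtype), w)  # fine
```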
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 99.98%. Comparing base (5d37606) to head (b6f8b85).
@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Looks like the PR is out of sync with the main branch. Can you do another rebase please?
In this PR as it stands, I was confused about the roles of q and raw_samples in the initial_batch_generator argument.
Yes, this PR doesn't achieve what I intended: it was written against raw_samples, but it should be against n_restarts to match the original intent.
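For reference, a rough sketch (illustrative, not the actual BoTorch implementation or signature) of how the three quantities relate in an initial batch generator:

```python
import torch

# Sketch of the contract: `raw_samples` random q-batches are drawn and
# scored, the best `num_restarts` of them seed the gradient-based
# restarts, and `q` is the number of points per candidate batch.
def initial_batch_generator(acq_function, bounds, q, num_restarts, raw_samples):
    d = bounds.shape[-1]
    # draw raw_samples candidate q-batches uniformly within the bounds (2 x d)
    raw = bounds[0] + (bounds[1] - bounds[0]) * torch.rand(raw_samples, q, d)
    with torch.no_grad():
        scores = acq_function(raw)  # one score per q-batch: shape (raw_samples,)
    best = scores.topk(num_restarts).indices
    return raw[best]  # initial conditions: shape (num_restarts, q, d)
```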
@saitcakmak I redid this whole thing after realizing that I had gotten confused and failed to implement what I originally intended, by following raw_samples rather than num_repeats, and then got distracted focusing on the trees rather than the forest.
Apologies for the linting changes.
Two solutions to the homotopy test failure: 1) change the following linked lines to the snippet below, which tests that we prune and then generate new restarts: https://github.com/pytorch/botorch/blob/5d3760633cae6d76b10df57e9f8478557e4e1946/test/optim/test_homotopy.py#L237-L246
```python
for i in range(5):
    # assert that the number of restarts is repopulated at each step
    self.assertEqual(
        prune_candidates_mock.call_args_list[i][1]["candidates"].shape,
        torch.Size([4, 1]),
    )
    # assert that the candidates are pruned to just one candidate
    self.assertEqual(
        prune_candidates(**prune_candidates_mock.call_args_list[i][1]).shape,
        torch.Size([1, 1]),
    )
```
2) set shared_optimize_acqf_kwargs["raw_samples"] = None at https://github.com/pytorch/botorch/blob/5d3760633cae6d76b10df57e9f8478557e4e1946/botorch/optim/optimize_homotopy.py#L194 to prevent the slots freed by the pruned restarts from being refilled with random designs in subsequent loops.
To preserve the current behaviour, the second option is better. In principle, though, the first option doesn't seem harmful (beyond forfeiting the computational benefit of pruning) and may have upsides, even though random IC designs introduced later in the homotopy may not be as readily optimized to sparsity.
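To make the difference concrete, a simplified hypothetical sketch of the refill behaviour (the pruning and refill logic here are stand-ins, not the actual optimize_acqf_homotopy internals):

```python
import torch

num_restarts, q, d = 4, 1, 1
raw_samples = 16  # option 2 would set this to None
candidates = torch.rand(num_restarts, q, d)

for step in range(5):
    # stand-in for prune_candidates: keep just the single best candidate
    candidates = candidates[:1]
    if raw_samples is not None:
        # option 1's behaviour: refill the pruned slots with fresh random
        # designs, so the next step again starts from num_restarts candidates
        fresh = torch.rand(num_restarts - candidates.shape[0], q, d)
        candidates = torch.cat([candidates, fresh], dim=0)
    # with raw_samples=None (option 2), the refill is skipped and only the
    # pruned survivors carry through the remaining homotopy steps
```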
There is one remaining test failure, and a rebase seems to be necessary. LGTM otherwise.
@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Motivation
See https://github.com/pytorch/botorch/issues/2609
Have you read the Contributing Guidelines on pull requests?
Yes
Test Plan
Basic testing of the code is easy; the challenge is working out what the knock-on implications might be: will this break people's code?
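As a minimal check on the backward-compatibility question, an existing call pattern that supplies explicit batch_initial_conditions should keep working unchanged (sketch only; the model and acquisition setup are illustrative):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import UpperConfidenceBound
from botorch.optim import optimize_acqf

# Toy model and acquisition function, just to exercise the call.
train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
ucb = UpperConfidenceBound(model, beta=0.2)
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)

# Pre-existing pattern: explicit initial conditions, no raw_samples.
ics = torch.rand(4, 1, 2, dtype=torch.double)  # num_restarts x q x d
candidate, value = optimize_acqf(
    acq_function=ucb,
    bounds=bounds,
    q=1,
    num_restarts=4,
    batch_initial_conditions=ics,
)
```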
Related PRs
https://github.com/facebook/Ax/pull/2938