Open · esantorella opened 2 months ago
cc @SebastianAment re: `qLogEI` having a "hole". The model actually seems fine here (thanks @esantorella for the great diagnostics), so this is probably just because the incumbent is so high (8.8638 in this case, if I got that right from the other issue; by far the largest observed value). As a first step I would recommend using `qLogNoisyExpectedImprovement` here, which usually has better numerical behavior.
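To illustrate why a log-space acquisition helps when the incumbent sits far above the posterior mean, here is a rough pure-Python sketch of the analytic (single-point, noiseless) expected improvement. This is not BoTorch's implementation, and the asymptotic branch below is just one standard way to keep the computation finite:

```python
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best_f):
    # Analytic EI for a Gaussian posterior. When best_f is far above mu,
    # this underflows to exactly 0.0, so gradients are flat and the
    # acquisition surface develops a "hole".
    z = (mu - best_f) / sigma
    return sigma * (z * normal_cdf(z) + normal_pdf(z))

def log_expected_improvement(mu, sigma, best_f):
    # Same quantity in log space. For z << 0, EI ~ sigma * pdf(z) / z^2,
    # so log EI stays finite and smooth where EI itself underflows.
    z = (mu - best_f) / sigma
    if z < -10.0:
        return (math.log(sigma) - 0.5 * z * z
                - 0.5 * math.log(2 * math.pi) - 2.0 * math.log(-z))
    return math.log(expected_improvement(mu, sigma, best_f))

# With an incumbent around 8.8638 and a modest posterior, plain EI
# underflows to zero while log-EI remains a finite, optimizable number.
ei = expected_improvement(mu=0.0, sigma=0.1, best_f=8.8638)
log_ei = log_expected_improvement(mu=0.0, sigma=0.1, best_f=8.8638)
```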
Thanks to ToennisStef for raising this in #2393.
## 🐛 Bug
I'm looking at an example with a `SingleTaskMultiFidelityGP`, evaluating acquisition values where both the `x` and the objective are at fidelities other than the highest fidelity. This produces NaN acquisition values and causes `optimize_acqf` to error out. While optimizing for a fidelity other than the highest may not make sense, this also happens when optimizing `qMultiFidelityKnowledgeGradient` for the highest fidelity. I'm seeing the following behavior:

```
gpytorch/distributions/multivariate_normal.py:319: NumericalWarning: Negative variance values detected. This is likely due to numerical instabilities. Rounding negative variances up to 1e-10.
```
- Using `FixedFeatureAcquisitionFunction`, fixing the fidelity to 0 and using `qLogEI`, errors out.
- So does `qMultiFidelityKnowledgeGradient` for the highest fidelity.

What the posterior looks like:

Acquisition values if we were to just work with fidelity=0:
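The `NumericalWarning` above points at the likely mechanism: round-off in the covariance algebra can make computed posterior variances slightly negative, and taking a square root then yields NaN values that poison everything downstream. A minimal pure-Python sketch of the effect (generic code, not GPyTorch's; the `clamp_min` parameter is hypothetical and mirrors the warning's "rounding negative variances up to 1e-10"):

```python
import math

def posterior_stddev(variance, clamp_min=None):
    # Tiny negative variances can arise from round-off. Without a clamp,
    # the square root produces NaN, which then propagates into the
    # acquisition values.
    if clamp_min is not None:
        variance = max(variance, clamp_min)
    return math.sqrt(variance) if variance >= 0 else float("nan")

bad = posterior_stddev(-3e-12)                   # NaN leaks into the acqf
ok = posterior_stddev(-3e-12, clamp_min=1e-10)   # what the clamp does instead

# NaN compares False with everything, so an optimizer that sees NaN
# acquisition values cannot rank candidates and fails.
```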
## To reproduce
See the gist for the full code. It ends with:

Alternatively, skipping the cost function setup, the same error can be produced more simply with:
## Stack trace/error message
## Expected Behavior
Numerical inaccuracy is not uncommon in optimization; however, it typically should not lead to exceptions, since multi-restart optimization may still find an optimum even when some restarts fail numerically. In this case, there clearly is an optimum, so `optimize_acqf` should find it.

## System information
Please complete the following information: