TobyBoyne opened 2 weeks ago
This implementation assumes an ordering of the fidelities, which should be relaxed since `TaskInput` does not require a fixed order (thanks for pointing this out @jpfolch!)
> `_ask` currently returns the predicted mean/std at the target fidelity - is this behaviour correct?
Hmm, I think that I would like `_ask` to return predictions for the fidelity that is being proposed. But then this needs to be very clear in the output. Perhaps too confusing. Can we just return all of them? What do you think?
> Things I'm not sure about:
>
> - The way I handle `allowed` in the `_ask` method seems suspicious. This seemed to me to be the easiest way to fix the fidelity at the target for the optimization, but I don't know if this is the best way.
I added a comment. But I see that you replied to yourself about the assumed ordering of the fidelities, which is obviously related (currently the target fidelity is the first one and then the fidelities descend, correct?). I think this is a sensible default, to be honest. But sure, perhaps we should explicitly store the ordering of the fidelities in the strategy to avoid confusion.
Thanks for starting this PR
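
For concreteness, here is a minimal sketch of the "fix the fidelity via `allowed`" idea being discussed. The stub class and names (`TaskInputStub`, `with_fixed_fidelity`, `optimize`) are hypothetical stand-ins, not the PR's actual code; it only assumes a `TaskInput`-like feature exposing `fidelities` and an `allowed` mask.

```python
from dataclasses import dataclass


@dataclass
class TaskInputStub:
    """Hypothetical stand-in for a TaskInput-like feature."""

    categories: list[str]
    fidelities: list[int]  # one fidelity level per task category
    allowed: list[bool] | None = None

    def __post_init__(self) -> None:
        if self.allowed is None:
            self.allowed = [True] * len(self.categories)


def with_fixed_fidelity(task_input, target_fidelity, optimize):
    """Run `optimize` while only the target fidelity's tasks are allowed."""
    original = list(task_input.allowed)
    # Restrict the optimizer to the task(s) at the target fidelity.
    task_input.allowed = [f == target_fidelity for f in task_input.fidelities]
    try:
        return optimize()
    finally:
        # Restore the mask so the later fidelity-selection step sees all tasks.
        task_input.allowed = original


task = TaskInputStub(categories=["hf", "lf"], fidelities=[0, 1])
x_star = with_fixed_fidelity(task, target_fidelity=0, optimize=lambda: "x*")
```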
@R-M-Lee Thank you for your comments!
> > `_ask` currently returns the predicted mean/std at the target fidelity - is this behaviour correct?
>
> Hmm, I think that I would like `_ask` to return predictions for the fidelity that is being proposed. But then this needs to be very clear in the output. Perhaps too confusing. Can we just return all of them? What do you think?
I would go for your first suggestion (return only predictions for the proposed fidelity). This is because the experiment will look something like the table below. To me it makes sense that you only give one prediction (since you are only carrying out one experiment), and `fidelity` can be considered a feature of the experiment in the same way as `x1` and `x2`.
| fidelity | x1 | x2 | y_pred | y |
|---|---|---|---|---|
| 0 | 0.0 | 0.5 | 2.1 | 2.3 |
| 1 | 1.0 | 0.7 | 0.6 | 0.3 |
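
As a concrete illustration of that convention, a candidate frame might look like the sketch below (column names mirror the table above; the `y_sd` column and all values are made up for illustration, not taken from the PR):

```python
import pandas as pd

# One row per proposed experiment: the prediction is made at the proposed
# fidelity, and `fidelity` is just another input column like x1 and x2.
candidates = pd.DataFrame(
    {
        "fidelity": [0, 1],
        "x1": [0.0, 1.0],
        "x2": [0.5, 0.7],
        "y_pred": [2.1, 0.6],
        "y_sd": [0.4, 0.2],  # hypothetical predictive stds
    }
)
print(candidates)
```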
> > Things I'm not sure about:
> >
> > - The way I handle `allowed` in the `_ask` method seems suspicious. This seemed to me to be the easiest way to fix the fidelity at the target for the optimization, but I don't know if this is the best way.
>
> I added a comment. But I see that you replied to yourself about the assumed ordering of the fidelities, which is obviously related (currently the target fidelity is the first one and then the fidelities descend, correct?). I think this is a sensible default, to be honest. But sure, perhaps we should explicitly store the ordering of the fidelities in the strategy to avoid confusion.
I've replied to the comment. Also, I've changed the code so that we no longer assume an ordering of fidelities - we now support any fidelities in the form accepted by `TaskInput`. That is, a list of fidelities that includes the elements `[0, 1, ..., M]` in any order.
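
A small sketch of that constraint (illustrative only; the real validation lives in `TaskInput`, and whether repeated levels are allowed is an assumption here):

```python
def check_fidelities(fidelities: list[int]) -> None:
    """Accept any list whose values cover the levels 0..M, in any order."""
    m = max(fidelities)
    if sorted(set(fidelities)) != list(range(m + 1)):
        raise ValueError(f"fidelities must include every level in 0..{m}")


check_fidelities([2, 0, 1])     # fine: any order
check_fidelities([0, 1, 1, 2])  # assumed fine: repeats, all levels present
# check_fidelities([0, 2])      # raises: level 1 is missing
```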
Implement strategy for optimizing functions with multiple fidelities, using `MultiTaskGP`.
This implementation is based on [Kandasamy et al. 2016, Folch et al. 2023]. We first optimize the target fidelity to obtain an input $x$, then select the lowest fidelity that gives a variance greater than some threshold. We use a `MultiTaskGP` to avoid the bias terms in [Kandasamy], and to enable transfer learning across tasks.
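
Roughly, the fidelity-selection step could look like the sketch below. The interface is hypothetical (`predict_std` stands in for the `MultiTaskGP` posterior), and it assumes the convention discussed in the comments: fidelity 0 is the target and larger indices are cheaper.

```python
from typing import Callable


def select_fidelity(
    predict_std: Callable[[list[float], int], float],
    x: list[float],
    num_fidelities: int,
    threshold: float,
) -> int:
    """Pick the cheapest fidelity whose predictive std at x exceeds `threshold`."""
    # Scan from the cheapest (M) to the most expensive non-target (1) fidelity.
    for fidelity in range(num_fidelities - 1, 0, -1):
        if predict_std(x, fidelity) > threshold:
            return fidelity
    # Every cheaper fidelity is already well-resolved at x: run the target.
    return 0


# Toy usage: with 3 fidelities, the cheapest (2) is still uncertain at x.
std = lambda x, f: {1: 0.05, 2: 0.30}[f]
assert select_fidelity(std, x=[0.2, 0.8], num_fidelities=3, threshold=0.1) == 2
```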
Still to do:

- `_ask` currently returns the predicted mean/std at the target fidelity - is this behaviour correct?

Things I'm not sure about:

- The way I handle `allowed` in the `_ask` method seems suspicious. This seemed to me to be the easiest way to fix the fidelity at the target for the optimization, but I don't know if this is the best way.