
Multi-fidelity Bayesian Optimisation with Continuous Approximations #43

Closed: nabenabe0928 closed this issue 1 year ago

nabenabe0928 commented 1 year ago

Multi-fidelity Bayesian Optimisation with Continuous Approximations

The very first multi-fidelity optimization (MFO) paper.

Main points

Although the authors deal with multiple fidelities in this paper, "multi-fidelity" here refers to a setting where the fidelity space offers many possible fidelities to query, which is why it is called multi-fidelity. In this terminology, single-fidelity optimization is simply the usual black-box optimization.

The main contribution of this paper is essentially the rule for choosing fidelity vectors, and the authors analyze a simple-regret bound that depends on this choice.

Eq. (7) defines this choice, and each of its conditions has the following intuition:

  1. query from lower (cheaper) fidelities first
  2. consider only fidelities whose posterior variance exceeds a threshold (so that the query still yields information gain)
  3. limit the set of candidate fidelities based on the variance (similar in principle to 2.)

Regarding Point 3: as $\beta_t$ goes to zero, $\mathcal{Z}_t$ becomes the empty set, and as $\beta_t$ goes to infinity, $\mathcal{Z}_t$ becomes the whole fidelity space $\mathcal{Z}$. The implication is that when we have little information we query the cheapest fidelity, and once we have enough information we query the most expensive one.
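To make the intuition concrete, here is a minimal Python sketch of how I read this selection rule. It is not a faithful reproduction of Eq. (7): `posterior_std`, `cost`, `gamma`, and the way $\beta_t$ enters the threshold are placeholders chosen for illustration only.

```python
import numpy as np

def select_fidelity(x_t, fidelities, z_top, posterior_std, cost, beta_t, gamma):
    """Choose the fidelity at which to evaluate the chosen point x_t.

    fidelities    : iterable of candidate fidelity vectors z
    z_top         : the target (most expensive) fidelity
    posterior_std : callable (x, z) -> GP posterior std at (x, z)
    cost          : callable z -> evaluation cost of fidelity z
    beta_t        : exploration coefficient at round t
    gamma         : callable z -> variance threshold for fidelity z
                    (stands in for the cost-dependent threshold of Eq. (7))
    """
    # Points 2 and 3: keep only cheaper-than-target fidelities whose posterior
    # std is still large enough, i.e. evaluating there would still be informative.
    # As beta_t -> 0 this set becomes empty; as beta_t -> inf it covers the whole
    # fidelity space, matching the limiting behaviour described above.
    Z_t = [
        z for z in fidelities
        if cost(z) < cost(z_top)
        and np.sqrt(beta_t) * posterior_std(x_t, z) > gamma(z)
    ]
    # Point 1: among the informative fidelities, query the cheapest one; if none
    # remains informative, fall back to the target fidelity z_top (assumption
    # about the fallback, not spelled out in this note).
    return min(Z_t, key=cost) if Z_t else z_top
```

Read this way, the empty-set case in the previous paragraph corresponds to falling back to `z_top` (the most expensive fidelity), and the whole-space case corresponds to picking the cheapest fidelity.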

The experiments use only synthetic functions, and the only visualization shown is performance (simple regret) over time. The comparison is also limited to various acquisition functions combined with GPs.