Tuning hyperparameters without grad students: Scalable and robust Bayesian optimisation with Dragonfly #61

dragonfly

Main points

Experiments

Baselines

  1. random search
  2. HyperOpt
  3. SMAC
  4. Spearmint
  5. GPyOpt
  6. Dragonfly (see the usage sketch after this list)
  7. PDOO (parallel deterministic optimistic optimisation)
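Dragonfly (item 6) is the method proposed in the paper and is open source. Below is a minimal sketch of how one could call it on a toy objective, assuming the pip-installable `dragonfly-opt` package and its `minimise_function` entry point; the toy objective, domain, and budget are my own placeholders, not values from the paper.

```python
# Minimal sketch of running Dragonfly on a toy 1-D minimisation problem.
# Assumes `pip install dragonfly-opt` (imported as `dragonfly`) and the
# README-style API: minimise_function(func, domain, max_capital) returning
# (best value, best point, optimisation history).
from dragonfly import minimise_function

def objective(x):
    # x has one entry per dimension of the domain below.
    return (x[0] - 1.0) ** 2

domain = [[-10.0, 10.0]]   # one [lower, upper] pair per dimension
max_capital = 20           # optimisation budget (here: function evaluations)

min_val, min_pt, history = minimise_function(objective, domain, max_capital)
print(min_val, min_pt)
```

`max_capital` is Dragonfly's budget notion ("capital"), which in the plain sequential setting amounts to a cap on the number of function evaluations.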

Benchmarks (mostly synthetic test functions, plus some self-made real-world examples)

  1. Branin ($d = 2$; see the sketch after this list)
  2. Hartmann3 ($d = 3$)
  3. Park1 ($d = 4$, modified version as well)
  4. Park2 ($d = 4$, modified version as well)
  5. Hartmann6 ($d = 6$, modified version as well)
  6. Borehole ($d = 8$, modified version as well)

Although the manuscript lists only the functions above, they apparently used more benchmarks (in particular, I am not sure about the dimensionalities of the extra ones).
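For concreteness, Branin (item 1) is a standard 2-D benchmark whose definition is easy to reproduce; below is a sketch using the usual constants from the BO benchmarking literature (the "modified" variants of the other functions used in the paper are not reproduced here).

```python
import math

def branin(x):
    """Standard Branin test function (d = 2).

    Usual domain: x[0] in [-5, 10], x[1] in [0, 15].
    Three global minima, all with value ~0.397887.
    """
    a = 1.0
    b = 5.1 / (4.0 * math.pi ** 2)
    c = 5.0 / math.pi
    r = 6.0
    s = 10.0
    t = 1.0 / (8.0 * math.pi)
    return (a * (x[1] - b * x[0] ** 2 + c * x[0] - r) ** 2
            + s * (1.0 - t) * math.cos(x[0]) + s)

# Sanity check at a known minimiser:
print(branin([math.pi, 2.275]))  # ~0.397887
```

Evaluating at the known minimiser $(\pi, 2.275)$ gives roughly $0.3979$, which is a quick correctness check for the implementation.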

Performance over time