nabenabe0928 / reading-list


Dynamic Ensemble of Low-fidelity Experts: Mitigating NAS "Cold-Start" #79

Closed nabenabe0928 closed 1 year ago

nabenabe0928 commented 1 year ago


Train an ensemble of neural predictors on low-fidelity observations to predict high-fidelity performance.

Main points

  1. Train each neural predictor on a separate low-fidelity dataset (assuming a set of low-fidelity datasets is available)
  2. Fine-tune the predictors above and jointly train an ensemble model on the actual target dataset
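The two steps above can be sketched roughly as follows. This is a minimal NumPy mock-up, not the paper's implementation: least-squares linear models stand in for the neural predictors, and the "joint fine-tuning" of the ensemble is reduced to fitting combination weights over the frozen experts on the target dataset; all data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    # Least-squares predictor (stand-in for a neural predictor).
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return w

def predict(w, X):
    return np.c_[X, np.ones(len(X))] @ w

# Step 1: one expert per low-fidelity dataset (synthetic data here).
low_fid_sets = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
experts = [fit_linear(X, y) for X, y in low_fid_sets]

# Step 2: on the (small) high-fidelity target dataset, learn ensemble
# weights over the frozen experts -- a crude stand-in for jointly
# fine-tuning the predictors and the ensemble.
X_t, y_t = rng.normal(size=(20, 4)), rng.normal(size=20)
P = np.column_stack([predict(w, X_t) for w in experts])
alpha, *_ = np.linalg.lstsq(P, y_t, rcond=None)

def ensemble_predict(X):
    # Weighted combination of the low-fidelity experts.
    return np.column_stack([predict(w, X) for w in experts]) @ alpha
```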

The query at each iteration is simply the candidate obtained by running an evolution strategy on the ensemble performance predictor.
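The query step might look roughly like this simple (1+λ) evolution strategy sketch. The `surrogate` function is a hypothetical stand-in for the ensemble predictor, and the continuous architecture encoding, population size, and mutation scale are all assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate(x):
    # Stand-in for the ensemble performance predictor (higher is better);
    # a toy quadratic with its optimum at x = 0.5.
    return -np.sum((x - 0.5) ** 2)

def es_query(dim=4, pop=20, iters=50, sigma=0.1):
    # Simple (1+lambda) ES on the surrogate: mutate the incumbent,
    # keep the best offspring if it improves the predicted score.
    best = rng.uniform(size=dim)
    best_val = surrogate(best)
    for _ in range(iters):
        cand = best + sigma * rng.normal(size=(pop, dim))
        vals = np.array([surrogate(c) for c in cand])
        i = vals.argmax()
        if vals[i] > best_val:
            best, best_val = cand[i], vals[i]
    # The returned candidate is the next architecture to evaluate.
    return best

x_next = es_query()
```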

I am not sure how they integrate the target-task weight, but it seems they are NOT (?) doing so.

Their experiments are completely detached from the HPO context.