google-research / tuning_playbook

A playbook for systematically maximizing the performance of deep learning models.

question about the performance of tuning #47

Closed: hahahouomg closed this issue 5 months ago

hahahouomg commented 1 year ago

Hi, thanks for sharing this wonderful document.

I have two questions:

First, how would we know whether the model has already reached its best performance after trying different tuning methods? It seems impossible to try infinitely many hyperparameter settings. Is there a method to quantify this, so I know when to stop tuning?

Second, compared to AutoML methods like Bayesian optimization and hyperparameter evolution, is there any advantage to tuning the model ourselves using the knowledge in this document?

Thanks very much

varungodbole commented 1 year ago

For the first question, you might be interested in this section, although it doesn't entirely answer your question: https://github.com/google-research/tuning_playbook#how-many-trials-are-needed-to-get-good-results-with-quasi-random-search

Knowing when to stop tuning is generally a hard problem.
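
For concreteness, here is a minimal sketch of quasi-random search over two hyperparameters, assuming SciPy's `scipy.stats.qmc` module is available. The search ranges, the trial count, and the fake scoring function are illustrative placeholders, not recommendations from the playbook:

```python
import numpy as np
from scipy.stats import qmc


def train_and_eval(learning_rate, weight_decay):
    # Stand-in for a real training run: a smooth fake validation score
    # so the sketch runs end to end. Replace with actual training and
    # evaluation on your validation set.
    return -((np.log10(learning_rate) + 3) ** 2
             + (np.log10(weight_decay) + 4) ** 2)


# Scrambled Sobol points cover the search space more evenly than
# i.i.d. random samples, which is the point of quasi-random search.
sampler = qmc.Sobol(d=2, scramble=True, seed=0)
unit_points = sampler.random(n=16)  # 16 trials in [0, 1)^2; use a power of 2

# Map the unit square onto the search space in log10 space, since
# hyperparameters like the learning rate are scale-sensitive.
bounds_lo = [np.log10(1e-5), np.log10(1e-6)]  # lr, weight decay (assumed ranges)
bounds_hi = [np.log10(1e-1), np.log10(1e-2)]
log_points = qmc.scale(unit_points, bounds_lo, bounds_hi)

results = []
for log_lr, log_wd in log_points:
    score = train_and_eval(10.0 ** log_lr, 10.0 ** log_wd)
    results.append((score, 10.0 ** log_lr, 10.0 ** log_wd))

best = max(results)  # tuples compare by score first
print("best trial (score, lr, wd):", best)
```

Looking at how close the best few trials are to each other, and whether new trials keep improving on them, is one rough signal for when more tuning is unlikely to pay off.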

For your second question, you might find this section interesting; it discusses Bayesian optimization (BayesOpt): https://github.com/google-research/tuning_playbook#after-exploration-concludes
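
As a rough sketch of that exploitation phase, here is what handing the narrowed search space to a Bayesian optimization tool could look like, assuming the scikit-optimize package (any BayesOpt library works); the objective and the ranges are illustrative placeholders:

```python
import math

from skopt import gp_minimize
from skopt.space import Real


def objective(params):
    learning_rate, weight_decay = params
    # Stand-in for a real training run: a smooth fake validation loss
    # so the sketch runs end to end (gp_minimize minimizes its objective).
    return ((math.log10(learning_rate) + 3) ** 2
            + (math.log10(weight_decay) + 4) ** 2)


# Narrowed search space from the manual exploration phase (assumed ranges).
search_space = [
    Real(1e-4, 1e-2, prior="log-uniform", name="learning_rate"),
    Real(1e-5, 1e-3, prior="log-uniform", name="weight_decay"),
]

# Gaussian-process-based Bayesian optimization spends each new trial
# near the most promising observed configurations, which suits the
# exploitation phase after exploration concludes.
result = gp_minimize(objective, search_space, n_calls=25, random_state=0)
print("best hyperparameters:", result.x, "best loss:", result.fun)
```

The playbook's position is that manual, insight-driven exploration and automated search are complementary: the former helps you understand the problem and pick a sensible search space, and tools like the above then squeeze out the remaining performance within it.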