bkj opened this issue 5 years ago
I don't have the code in any repo, but the changes are trivial. The worker for the paramnet_surrogates example has an argument `sleep`. If you set that to true, the worker will sleep for that time. You can launch many workers in the script by setting `background=True` for them.
But beware: the runs will actually take the time shown in the plots. The run with a single worker doesn't sleep, as concurrency is not an issue there. It's an embarrassing solution, but I didn't find a better way to simulate this without the runs taking different times. You could let it sleep for only a fraction of the time (a tenth or so) to speed things up and still see the plots.
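To make the mechanism concrete, here is a minimal stdlib-only sketch of the idea, assuming the pattern described above. This is *not* the actual paramnet_surrogates worker or the HpBandSter `background=True` API; it just illustrates how sleeping for a (scaled) simulated runtime on several background threads reproduces the wall-clock behavior of multiple workers:

```python
import threading
import time

# Sleep for only a fraction of the simulated runtime, as suggested above,
# so the demonstration finishes quickly but relative timings are preserved.
SLEEP_SCALE = 0.01

def run_config(simulated_runtime, results, lock):
    # Simulate the evaluation cost without doing any real work.
    time.sleep(simulated_runtime * SLEEP_SCALE)
    with lock:
        results.append(simulated_runtime)

def run_workers(runtimes, n_workers):
    """Distribute the simulated runs over n_workers background threads."""
    results, lock = [], threading.Lock()
    queue = list(runtimes)

    def worker():
        while True:
            with lock:
                if not queue:
                    return
                rt = queue.pop()
            run_config(rt, results, lock)

    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(n_workers)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start, results

if __name__ == "__main__":
    runtimes = [10.0] * 8  # eight simulated runs of 10 "seconds" each
    t1, _ = run_workers(runtimes, n_workers=1)
    t4, _ = run_workers(runtimes, n_workers=4)
    print(f"1 worker: {t1:.2f}s wall clock, 4 workers: {t4:.2f}s wall clock")
```

With four threads the same set of simulated runs finishes in roughly a quarter of the wall-clock time, which is exactly the scaling effect the sleep trick is meant to expose in the plots.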
Hope that helps.
OK, thanks. Is there a good example that shows BOHB training an actual model that runs in something like a few minutes? The MNIST examples in the docs presumably take several hours.
(Something like tuning an XGBoost model on a small-medium dataset seems like it might be reasonable.)
Sorry for the late reply. The MNIST example is actually the basis for the analysis in example 6. Based on that, the run took about an hour on my laptop without any GPU acceleration, using 3 CPU cores. Depending on your hardware, it might finish significantly faster for you.
In the ICML paper, you have plots that show the performance of BOHB with a varying number of workers.
I see the code to replicate the single-threaded experiments in the `icml_2018` branch -- but is there code or an example that shows how we'd expect the system to scale with an increasing number of workers? Thanks!