marvin-hansen opened this issue 5 years ago
This is really good, I will implement it.
Thanks for the time & effort to dig deeper into this!
BTW, the Optuna optimizer seems to be faster and easier to use:
https://optuna.org/#key_features https://github.com/pfnet/optuna/tree/master/examples
Bayesian optimization can quickly become computationally expensive: just increase the population, the network size, or both. On the other hand, a lot of agent optimization can be done within the evolutionary strategy itself. So why not generate a population, select the "elite" agent (the one with the highest score), and use it as a blueprint to explore the parameter space further through mutation?
Example:
https://github.com/paraschopra/deepneuroevolution/blob/master/openai-gym-cartpole-neuroevolution.ipynb
The example above converges very quickly to a global optimum (in about 50 generations). More importantly, it requires far less computational power and thus allows very fast model re-generation.
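The elite-selection + mutation loop described above can be sketched in a few lines of plain Python. This is a toy sketch, not the linked notebook's code: here an "agent" is just a list of weights and the fitness function is an assumed stand-in (negative squared distance to a made-up target vector); in the CartPole example the fitness would instead come from rolling the agent out in the gym environment.

```python
import random

TARGET = [0.5, -1.2, 3.0]   # hypothetical stand-in for "the optimal parameters"
POP_SIZE = 50
MUTATION_STD = 0.1

def fitness(weights):
    # Higher is better: negative squared error to the target.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights):
    # Gaussian mutation around the elite's weights.
    return [w + random.gauss(0.0, MUTATION_STD) for w in weights]

def evolve(generations=50):
    random.seed(0)
    # Start from a random population.
    population = [[random.uniform(-4, 4) for _ in TARGET]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Select the single "elite" agent with the highest score ...
        elite = max(population, key=fitness)
        # ... keep it, and use it as the blueprint for the next generation.
        population = [elite] + [mutate(elite) for _ in range(POP_SIZE - 1)]
    return max(population, key=fitness)

best = evolve()
```

Because the elite is carried over unchanged each generation, the best score never regresses, and the mutation radius controls how aggressively the parameter space around the blueprint is explored.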