hill-a / stable-baselines

A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
http://stable-baselines.readthedocs.io/
MIT License
4.14k stars 723 forks

[Question] How can I initialize weights of MLP policy by some customized values? #1121

Closed zrz961203 closed 3 years ago

zrz961203 commented 3 years ago

My algorithm seems to be stuck in a local optimum. I wonder if it's possible to initialize the MLP policy with custom values, for example a constant 0.1 for some layers. Thanks!

Miffyli commented 3 years ago

The easiest way is probably to use get_parameters and load_parameters at the start of training to set the parameters to whatever you want.

PS: We recommend moving to stable-baselines3 for better support and easier-to-modify code :)