araffin / robotics-rl-srl

S-RL Toolbox: Reinforcement Learning (RL) and State Representation Learning (SRL) for Robotics
https://s-rl-toolbox.readthedocs.io
MIT License
611 stars 91 forks

[Feature Request] Tensorboard Support #32

Open araffin opened 6 years ago

araffin commented 6 years ago

As stable-baselines has now integrated tensorboard support, it would be cool to enable it in the toolbox repo (and to add it to the PyTorch RL algos: SAC, CMA-ES, ARS).

huiwenzhang commented 5 years ago

Hi,

To support tensorboard in stable-baselines, we need to define a tensorboard log location for the RL agent. In this repo, the arguments of the RL agent are defined by `train_kwargs`, see here: https://github.com/araffin/robotics-rl-srl/blob/1ab1bd366825f98f0282d05e32a3de0cbf7f0f9a/rl_baselines/train.py#L333

However, `train_kwargs` only accepts hyperparameters of the specific RL agent. Alternatively, we can set the log argument in the `train` method of the agent; taking PPO2 as an example: https://github.com/araffin/robotics-rl-srl/blob/1ab1bd366825f98f0282d05e32a3de0cbf7f0f9a/rl_baselines/rl_algorithm/ppo2.py#L59-L68

But it would be better to define the argument outside the specific agent, for example in the train file, so some small modifications would be needed there.
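The change suggested above could be sketched roughly as follows: add a CLI flag in the train file and merge it into the kwargs handed to the agent (the function and argument names here are assumptions for illustration, not the repo's actual API; stable-baselines models do accept a `tensorboard_log` constructor kwarg).

```python
# Hypothetical sketch: merging a tensorboard log dir (defined outside the
# agent, e.g. in train.py) into the kwargs passed to the RL agent.
import argparse

def build_train_kwargs(cli_args, algo_hyperparams):
    """Merge algorithm hyperparameters with the tensorboard log location."""
    train_kwargs = dict(algo_hyperparams)
    if cli_args.tensorboard_log:
        # stable-baselines models take `tensorboard_log` in their constructor
        train_kwargs["tensorboard_log"] = cli_args.tensorboard_log
    return train_kwargs

parser = argparse.ArgumentParser()
parser.add_argument("--tensorboard-log", type=str, default="",
                    help="Tensorboard log directory (empty disables logging)")

args = parser.parse_args(["--tensorboard-log", "logs/tb/"])
kwargs = build_train_kwargs(args, {"n_steps": 128, "ent_coef": 0.01})
```

This keeps the log location out of the individual agent classes, so each agent's `train` method only forwards whatever it receives.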