araffin / robotics-rl-srl

S-RL Toolbox: Reinforcement Learning (RL) and State Representation Learning (SRL) for Robotics
https://s-rl-toolbox.readthedocs.io
MIT License

VecNormalize + Bug fixes #11

Closed araffin closed 6 years ago

araffin commented 6 years ago
hill-a commented 6 years ago

Also, it might be an idea to roll our own vector stacking wrapper in utils.py and add it to createEnvs, as OpenAI's version is still broken (see the sketch below).

To do later, however.
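
For reference, here is a minimal sketch of what such a stacking wrapper could look like, assuming the Baselines `VecEnvWrapper` interface. The class name `CustomVecFrameStack` and the details below are illustrative only, not the code that was actually added to utils.py:

```python
# Hypothetical utils.py addition: a frame-stacking wrapper built on the
# baselines VecEnvWrapper interface (names and details are illustrative).
import numpy as np
from gym import spaces
from baselines.common.vec_env import VecEnvWrapper


class CustomVecFrameStack(VecEnvWrapper):
    """Stack the last `n_stack` observations along the last axis."""

    def __init__(self, venv, n_stack):
        self.n_stack = n_stack
        wrapped_obs_space = venv.observation_space
        low = np.repeat(wrapped_obs_space.low, n_stack, axis=-1)
        high = np.repeat(wrapped_obs_space.high, n_stack, axis=-1)
        self.stacked_obs = np.zeros((venv.num_envs,) + low.shape, dtype=low.dtype)
        observation_space = spaces.Box(low=low, high=high, dtype=wrapped_obs_space.dtype)
        super(CustomVecFrameStack, self).__init__(venv, observation_space=observation_space)

    def step_wait(self):
        obs, rewards, dones, infos = self.venv.step_wait()
        # Shift the buffer to make room for the newest observation
        self.stacked_obs = np.roll(self.stacked_obs, shift=-obs.shape[-1], axis=-1)
        for i, done in enumerate(dones):
            if done:
                # Clear the stack for environments that just finished an episode
                self.stacked_obs[i] = 0
        self.stacked_obs[..., -obs.shape[-1]:] = obs
        return self.stacked_obs, rewards, dones, infos

    def reset(self):
        obs = self.venv.reset()
        self.stacked_obs[...] = 0
        self.stacked_obs[..., -obs.shape[-1]:] = obs
        return self.stacked_obs
```

It could then be applied in createEnvs right after the vectorized env is built, e.g. `env = CustomVecFrameStack(env, n_stack=4)` (the call site and `n_stack` value are assumptions for illustration).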

araffin commented 6 years ago

@NataliaDiaz I removed the whole pytorch agent submodule and kept only the useful functions from it; we were not using it at all and it was creating duplicate code.

NataliaDiaz commented 6 years ago

so where is this duplicate code now that was removed? or why we need a new repo outside this one for the agents? if they are unused agents that work, they may be used later, so just create a folder with test_agents or similar.

araffin commented 6 years ago

> so where is this duplicate code now that was removed? or why we need a new repo outside this one for the agents? if they are unused agents that work, they may be used later, so just create a folder with test_agents or similar.

@NataliaDiaz we don't use them at all, and they complicate the code without any real need. Second, they are not as "standard" as OpenAI Baselines, so we are not sure they are bug-free. They are also implementations of the same algorithms that are already available in OpenAI Baselines.

I will certainly remove the external repo, which is no longer used as a submodule (I copied the two files we were still using into the main repo).

Finally, if we want to reuse them (for any reason), they are still available at https://github.com/ikostrikov/pytorch-a2c-ppo-acktr (and we have the git history for the enjoy code).