peasant98 opened 3 years ago
Hello, I am an RL researcher, and my team and I have recently implemented HIRO (Data-Efficient Hierarchical Reinforcement Learning with Off-Policy Correction) with PFRL. I'm wondering whether a PR for an HRL algorithm (which required some fairly large changes) would be welcome on this platform.

Thanks!

Hi, the developer team thinks it is possible to merge such a new-algorithm PR, and we would really appreciate the contribution! To gauge beforehand how easily a specific PR could be merged, could you let us know what your PR would look like, especially in the following aspects?