Closed DLPerf closed 2 years ago
@nicrusso7 Hi, could you please take a look at my issue as soon as possible?
Hi, the PPO code was originally forked from the pyBullet repo (https://github.com/bulletphysics/bullet3/tree/master/examples/pybullet/gym/pybullet_envs/minitaur/agents/ppo). More than happy to merge a PR that improves the performance :)
Following the PR thread.
Hello, I found a performance issue in the definition of `append` in rex_gym/agents/ppo/memory.py: `tf.stack` and `tf.gather` are evaluated repeatedly during program execution, resulting in reduced efficiency. I think they should be created before the loop (the `with` block) in `append`. Looking forward to your reply. By the way, I would be glad to open a PR to fix it if you are too busy.
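The pattern being described can be sketched abstractly as follows (a hypothetical illustration, not the actual `memory.py` code; NumPy's `stack`/`take` stand in for `tf.stack`/`tf.gather`, and the function and variable names are made up for this example):

```python
import numpy as np

def append_slow(buffers, rows, indices):
    # Anti-pattern: the stack/gather is rebuilt on every loop
    # iteration, even though its inputs never change inside the loop.
    out = []
    for i in indices:
        stacked = np.stack(buffers)              # recomputed each iteration
        out.append(np.take(stacked, i, axis=0)[rows])
    return out

def append_fast(buffers, rows, indices):
    # Fix: hoist the loop-invariant stack out of the loop,
    # so it is computed once.
    stacked = np.stack(buffers)                  # computed once, before the loop
    return [np.take(stacked, i, axis=0)[rows] for i in indices]
```

In TensorFlow 1.x graph construction (which the pyBullet PPO code uses), the cost of the anti-pattern is worse than redundant arithmetic: each call to `tf.stack`/`tf.gather` inside a Python loop adds new ops to the graph, so hoisting them out creates the ops only once.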