bheijden opened this issue 7 months ago
I believe there's ongoing discussion on this for CleanRL, though I've not caught up with the latest.
https://github.com/vwxyzjn/cleanrl/issues/198
My understanding is that properly handling this does not usually result in significant performance differences.
https://github.com/sail-sg/envpool/issues/194#issuecomment-1317171243
That being said, if you would be interested in doing a PR for this in a separate file (say, ppo_time_limits.py), that would be great!
Thanks, that clears things up. Wasn't sure if it was perhaps handled elsewhere.
Concerning the ablation: it looks like those benchmarks were done on Atari games, which, as far as I understand, aren't really affected by truncation, since episodes there simply terminate on their own. Truncation matters mostly for effectively endless tasks, which are common in robotics settings such as Ant or Cheetah. So I'd be cautious about drawing conclusions solely from Atari studies. In fact, there are simpler settings that absolutely require proper truncation handling to be solved at all, such as the infinite-horizon case discussed in Time Limits in RL (arXiv).
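To make the difference concrete, here is a minimal sketch (my own illustration, not the example from the paper) of the one-step value target; the only change proper truncation handling requires is that the bootstrap term is zeroed on real termination rather than on any episode end:

```python
import jax.numpy as jnp

def td_target(reward, next_value, terminated, gamma=0.99):
    # Bootstrap from V(s') unless the environment truly terminated.
    # `terminated` must NOT include time-limit truncations.
    return reward + gamma * jnp.where(terminated, 0.0, next_value)

# Common shortcut: done = terminated | truncated. Using `done` here also zeroes
# the bootstrap at every time limit, so in an infinite-horizon task the critic
# learns that all reward stops at the cutoff and underestimates the value.
```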
If I end up requiring truncation, I'll see if I can cook up a PR.
That's a good point! I think this could be worth doing in a separate file so people can see the differences. One significant downside, though, is the doubling of the observation size.
Hi,
The current PPO implementation does not seem to account for time limits. While the EpisodeWrapper from brax is used, which tracks a truncation flag (source) in the info dictionary to allow correct termination handling, this flag does not appear to be used anywhere in the implementation.

Related information:
Could this be an oversight, or am I missing a part of the implementation that addresses this?
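For what it's worth, here is a rough sketch of how the truncation flag from the EpisodeWrapper's info dict could feed into the advantage computation; the names, shapes, and function below are illustrative, not taken from the actual implementation:

```python
import jax
import jax.numpy as jnp

def compute_gae(rewards, values, next_values, dones, truncations,
                gamma=0.99, lam=0.95):
    # rewards, values, next_values, dones, truncations: arrays of shape [T, num_envs].
    # `dones` is 1.0 whenever an episode ended (termination or time limit);
    # `truncations` is 1.0 only when the end was due to the time limit
    # (e.g. the flag the EpisodeWrapper stores in the info dict).
    # Note: at truncated steps, `next_values` must be evaluated on the final
    # observation before the auto-reset, which is what forces the extra
    # observations to be carried through the rollout.
    terminations = dones * (1.0 - truncations)  # episode ends that were not time limits

    def scan_fn(gae, xs):
        reward, value, next_value, done, termination = xs
        # Bootstrap V(s') unless the episode truly terminated.
        delta = reward + gamma * next_value * (1.0 - termination) - value
        # Never propagate advantages across an episode boundary of any kind.
        gae = delta + gamma * lam * (1.0 - done) * gae
        return gae, gae

    _, advantages = jax.lax.scan(
        scan_fn,
        jnp.zeros_like(values[0]),
        (rewards, values, next_values, dones, terminations),
        reverse=True,
    )
    return advantages, advantages + values  # advantages and value targets
```

The only differences from the usual GAE loop are the termination mask in the delta term and the requirement to evaluate the value on the pre-reset observation at truncated steps.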