leggedrobotics / legged_gym

Isaac Gym Environments for Legged Robots

How are episode rewards calculated? #8

Closed · zita-ch closed this issue 2 years ago

zita-ch commented 2 years ago

I tried to add a reward term with a constant bias of 0.5, i.e., the per-step reward should always be at least 0.5, but the reported value is still close to zero at the first iteration.

EricVoll commented 2 years ago

@zita-ch The reward scale is multiplied by env.dt (default is 0.02). The per-step rewards are then accumulated in the env.episode_sums dictionary and printed when environments are reset.
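A minimal sketch of that flow, loosely following the structure of legged_gym's LeggedRobot (exact attribute names may differ between versions):

```python
import torch

class RewardBookkeepingSketch:
    def _prepare_reward_function(self):
        # Reward scales are pre-multiplied by the simulation dt (default 0.02 s),
        # so each per-step reward is effectively a rate integrated over one step.
        for name in list(self.reward_scales.keys()):
            self.reward_scales[name] *= self.dt
        # Running per-term, per-environment sums for the current episode.
        self.episode_sums = {
            name: torch.zeros(self.num_envs, device=self.device)
            for name in self.reward_scales
        }

    def compute_reward(self):
        self.rew_buf[:] = 0.0
        for name, scale in self.reward_scales.items():
            # Each _reward_<name>() is assumed to return a per-env tensor.
            rew = getattr(self, "_reward_" + name)() * scale
            self.rew_buf += rew
            self.episode_sums[name] += rew
```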

zita-ch commented 2 years ago

I got it. Thanks for your detailed explanation.

So first the episode reward is averaged over the environments being reset, and then that mean is divided by max_episode_length (in seconds, default 20). However, at the very beginning of training, most of the reset environments do not survive to the last step of the horizon, so the episode rewards in the first dozens of iterations are very small, on the order of reward * 0.02 / 20.
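For concreteness, the logging at reset looks roughly like the sketch below (names approximate). With a constant bias of 0.5 per step and dt = 0.02, an environment that survives only 1 s (50 steps) accumulates 0.5 * 0.02 * 50 = 0.5 and logs 0.5 / 20 = 0.025, while one that survives the full 20 s horizon logs exactly 0.5.

```python
# Sketch of the logging step at reset: episode sums of the terminated envs are
# averaged, then divided by the *maximum* episode length in seconds, not by the
# length each env actually survived.
for name in self.episode_sums.keys():
    self.extras["episode"]["rew_" + name] = (
        torch.mean(self.episode_sums[name][env_ids]) / self.max_episode_length_s
    )
    self.episode_sums[name][env_ids] = 0.0
```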
I believe this makes sense, but it can be a little misleading. Typically we do not average the episode reward over the full horizon length, especially when resets occur; although it does not affect the display of learning progress, sometimes we just want to see the value averaged over the actual episode length (which can be much smaller than max_episode_length). I would suggest that in the next update you add an explanation in code comments or in the TensorBoard documentation, or add another TensorBoard tab, e.g. along the lines of the sketch below.
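A hedged sketch of that alternative tab, assuming an episode_length_buf that counts elapsed steps per environment (as legged_gym maintains); the "rew_mean_" key is hypothetical:

```python
# Hypothetical extra logging: average each episode sum over the length the
# episode actually lasted, rather than over max_episode_length_s.
elapsed_s = self.episode_length_buf[env_ids].float() * self.dt
elapsed_s = torch.clamp(elapsed_s, min=self.dt)  # guard against zero-length episodes
for name in self.episode_sums.keys():
    self.extras["episode"]["rew_mean_" + name] = torch.mean(
        self.episode_sums[name][env_ids] / elapsed_s
    )
```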

Your work is quite inspiring and has made me stick with your DRL framework.
Best wishes ;)