nikhilbarhate99 / PPO-PyTorch

Minimal implementation of clipped objective Proximal Policy Optimization (PPO) in PyTorch
MIT License

Discounted Reward Calculation (Generalized Advantage Estimation) #38

Open artest08 opened 3 years ago

artest08 commented 3 years ago

I want to ask one more thing about the estimation of the discounted reward. The variable discounted_reward always starts at zero. However, if the episode has not ended, shouldn't it instead be the value estimate from the critic network?

In other words, I think the pseudo code for my suggestion is:

if is_terminal:
    discounted_reward = 0
else:
    discounted_reward = critic_network(final_state)

gianlucadest commented 3 years ago

You are completely right. If the episode didn't end, you use the critic network (or the critic head of your twin-headed actor-critic network) to approximate V(final_state).

nikhilbarhate99 commented 3 years ago

First, this repository does NOT use Generalized Advantage Estimation; it uses a Monte Carlo estimate to calculate the rewards-to-go (the reward variable in the code) and advantages = rewards_to_go - V(s_t).

The only time we will get an unfinished trajectory is at the end of the buffer. So an accurate version would be:

# Monte Carlo estimate of returns

rewards = []

if self.buffer.is_terminals[-1]:
    discounted_reward = 0
else:
    discounted_reward = self.policy_old.critic(self.buffer.states[-1]).item()

for reward, is_terminal in zip(reversed(self.buffer.rewards), reversed(self.buffer.is_terminals)):
    if is_terminal:
        discounted_reward = 0
    discounted_reward = reward + (self.gamma * discounted_reward)
    rewards.insert(0, discounted_reward)
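
For context, here is a minimal sketch of how the advantage mentioned above (advantages = rewards_to_go - V(s_t)) might then be formed from these returns. The tensor conversion, the optional normalization, and the old_states batch are assumptions for illustration, not the repository's exact update() code:

import torch

# turn the Monte Carlo returns into a tensor and (optionally) normalize them
returns = torch.tensor(rewards, dtype=torch.float32)
returns = (returns - returns.mean()) / (returns.std() + 1e-7)

# V(s_t) for every stored state; `old_states` is assumed to be the batched state tensor
with torch.no_grad():
    state_values = self.policy_old.critic(old_states).squeeze()

# advantage used by the clipped PPO objective: A_t = rewards_to_go - V(s_t)
advantages = returns - state_values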

Also, the rewards-to-go calculation introduced in issue #8 seems to be wrong. I am a little busy right now and will look into it later.

So the correct version, as used in other implementations, might just be:

# Monte Carlo estimate of returns

rewards = []

if self.buffer.is_terminals[-1]:
    discounted_reward = 0
else:
    discounted_reward = self.policy_old.critic(self.buffer.states[-1]).item()

for reward in reversed(self.buffer.rewards):
    discounted_reward = reward + (self.gamma * discounted_reward)
    rewards.insert(0, discounted_reward)

gianlucadest commented 3 years ago

Your adjusted implementation is fine. I use the same semantics for my A2C rollouts, where unfinished episodes are processed by calling critic(last_state). Otherwise, the code just works with finished episodes.
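
For reference, a generic, self-contained sketch of that rollout rule (a toy function, not code from this repo): bootstrap with the critic's value of the last state when the rollout was cut off, otherwise start the accumulation from zero.

def rollout_returns(rewards, episode_done, last_value, gamma=0.99):
    # rewards: rewards collected in one rollout
    # episode_done: True if the rollout ended in a terminal state
    # last_value: critic estimate V(last_state), used only when episode_done is False
    R = 0.0 if episode_done else last_value
    returns = []
    for r in reversed(rewards):
        R = r + gamma * R
        returns.insert(0, R)
    return returns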

Boltzmachine commented 2 years ago

This inaccuracy (maybe I should call it a bug) troubles me a lot! Thanks @nikhilbarhate99. Also, I think there is another issue.

If the loop exits with t >= max_ep_len, a False will be appended to buffer.is_terminals ([False, False, ..., False]). When the next episode begins, the buffer keeps appending that episode's steps without being cleared, if update was not called in between ([False, False, ..., False] (first episode) + [False, False, ..., True] (next episode)). So when we calculate the accumulated rewards, they are accumulated across the two episodes, which is not what we expect.
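
A toy, self-contained illustration of that leak (made-up numbers): because the flag at the episode-1 boundary is stored as False, the backward accumulation never resets there.

gamma = 0.99
rewards      = [1.0, 1.0, 1.0, 1.0, 1.0]           # 2 steps of episode 1 (cut off) + 3 steps of episode 2
is_terminals = [False, False, False, False, True]  # no True at the episode-1 boundary

returns, discounted_reward = [], 0
for r, done in zip(reversed(rewards), reversed(is_terminals)):
    if done:
        discounted_reward = 0
    discounted_reward = r + gamma * discounted_reward
    returns.insert(0, discounted_reward)

# returns[0] and returns[1] (episode-1 steps) now include discounted rewards from episode 2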

11chens commented 1 year ago

> If the loop exits with t >= max_ep_len, a False will be appended to buffer.is_terminals, and if update is not called before the next episode begins, the accumulated rewards will be computed across the two episodes, which is not what we expect.

I think we can follow the practice of the latest version of gym: add a boolean variable truncated that indicates whether the episode was still incomplete when t == max_ep_len. Then, in the discounted_reward loop, add:


if truncated:
    discounted_reward = self.policy_old.critic(self.buffer.states[-1]).item()
elif is_terminal:
    discounted_reward = 0
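
Putting this together, a hedged sketch of how the full loop might then look. The parallel is_truncateds list is a hypothetical addition to the buffer, and, as in the snippets above, the critic value of the state stored at the cut-off step is used as a stand-in for the value of the (unstored) next state:

rewards = []
discounted_reward = 0
for reward, is_terminal, is_truncated, state in zip(
        reversed(self.buffer.rewards),
        reversed(self.buffer.is_terminals),
        reversed(self.buffer.is_truncateds),   # hypothetical flag: True only where t hit max_ep_len
        reversed(self.buffer.states)):
    if is_truncated:
        # bootstrap from the critic instead of resetting to zero
        discounted_reward = self.policy_old.critic(state).item()
    elif is_terminal:
        discounted_reward = 0
    discounted_reward = reward + (self.gamma * discounted_reward)
    rewards.insert(0, discounted_reward)

With this, a time-limit cut-off bootstraps from the critic while a genuine terminal still resets the return to zero, so discounted rewards no longer leak across episode boundaries.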