philtabor / Youtube-Code-Repository

Repository for most of the code from my YouTube channel

[TensorFlow2] Critic Loss Calculation for actor_critic #41

Open srihari-humbarwadi opened 2 years ago

srihari-humbarwadi commented 2 years ago

If I understand correctly, the code in tensorflow2/actor_critic.py implements the One-step Actor-Critic (episodic) algorithm given on page 332 of RLbook2020 by Sutton & Barto (pseudocode shown in the image below).

[Image: pseudocode of the One-step Actor-Critic (episodic) algorithm from Sutton & Barto]

Here we can see that the critic parameters w are updated using only the gradient of the value function at the current state S, written as grad(V(S, w)) in the pseudocode. The update deliberately excludes the gradient of the value function at the next state S': there is no grad(V(S', w)) term in the update rule for w, i.e. the bootstrapped target involving V(S', w) is treated as a constant (a semi-gradient update).
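If I'm reading the pseudocode correctly, the critic part can be transcribed as

$$
\delta \leftarrow R + \gamma\, \hat{v}(S', \mathbf{w}) - \hat{v}(S, \mathbf{w}),
\qquad
\mathbf{w} \leftarrow \mathbf{w} + \alpha^{\mathbf{w}}\, \delta\, \nabla \hat{v}(S, \mathbf{w})
$$

(with $\hat{v}(S', \mathbf{w}) \doteq 0$ when S' is terminal), so $\delta$, and therefore $\hat{v}(S', \mathbf{w})$, is a constant with respect to w in the update.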

In the code linked below, placing state_value_, _ = self.actor_critic(state_) (L43) inside the GradientTape means grad(V(S', w)) also appears in the gradients applied to w, which contradicts the pseudocode shown above.

https://github.com/philtabor/Youtube-Code-Repository/blob/1ef76059bf55f7df9ccc09fce0e0bfb7c13e89bd/ReinforcementLearning/PolicyGradient/actor_critic/tensorflow2/actor_critic.py#L40-L45
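One possible fix, if my reading is correct, would be to block the gradient through V(S') so that only grad(V(S, w)) reaches the critic weights. Here is a minimal sketch (not the repository's exact code; the model, optimizer, and argument names are placeholders) using tf.stop_gradient; evaluating state_value_ outside the tape would have the same effect:

```python
import tensorflow as tf

# Minimal sketch of a semi-gradient critic update, assuming `actor_critic` is a
# tf.keras.Model that returns (state_value, action_probs). Names are illustrative,
# not the repository's exact ones.
def critic_update_sketch(actor_critic, optimizer, state, reward, state_, done, gamma=0.99):
    with tf.GradientTape() as tape:
        state_value, _ = actor_critic(state)        # V(S, w), tracked by the tape
        state_value_, _ = actor_critic(state_)      # V(S', w)
        # Block the gradient through V(S', w) so only grad(V(S, w)) is applied.
        state_value_ = tf.stop_gradient(state_value_)

        state_value = tf.squeeze(state_value)
        state_value_ = tf.squeeze(state_value_)

        # delta = R + gamma * V(S', w) * (1 - done) - V(S, w)
        delta = reward + gamma * state_value_ * (1.0 - float(done)) - state_value
        critic_loss = delta ** 2

    grads = tape.gradient(critic_loss, actor_critic.trainable_variables)
    optimizer.apply_gradients(zip(grads, actor_critic.trainable_variables))
    return delta
```

With this change the bootstrapped target R + gamma * V(S', w) behaves like the constant delta in the pseudocode, while the actor loss can still be computed inside the same tape.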

Please let me know if there are any gaps in my understanding!