If I understand correctly, the code in tensorflow2/actor_critic.py implements the One-step Actor-Critic (episodic) algorithm given on page 332 of RLbook2020 by Sutton and Barto (picture given below).
In that pseudocode, the critic parameters w are updated using only the gradient of the value function at the current state S, written as grad(V(S, w)). The gradient of the value function at the next state S' is not used: there is no grad(V(S', w)) term in the update rule for the critic parameters w.
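For reference, my transcription of the critic update from that pseudocode, written in the same notation as above, is:

```
delta <- R + gamma * V(S', w) - V(S, w)        (with V(S', w) taken as 0 if S' is terminal)
w     <- w + alpha_w * delta * grad(V(S, w))
```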
In the code linked below, including state_value_, _ = self.actor_critic(state_) (L43) inside the GradientTape would result in grad(V(S', w)) appearing in the update for w, which contradicts the pseudocode shown above.
https://github.com/philtabor/Youtube-Code-Repository/blob/1ef76059bf55f7df9ccc09fce0e0bfb7c13e89bd/ReinforcementLearning/PolicyGradient/actor_critic/tensorflow2/actor_critic.py#L40-L45
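To make the question concrete, here is a minimal sketch of how I would keep the critic update semi-gradient in TF2. This is not the repository's code; the function and argument names are my own, and the actor loss is omitted. The point is only that V(S', w) is evaluated outside the GradientTape (and additionally wrapped in tf.stop_gradient), so that only grad(V(S, w)) ends up in the update for w:

```python
import tensorflow as tf

def critic_semi_gradient_update(actor_critic, optimizer, state, reward, state_, done, gamma=0.99):
    """Hypothetical helper: one-step semi-gradient critic update.

    V(S', w) is treated as a constant, so only grad(V(S, w)) appears in the
    gradient, matching  w <- w + alpha_w * delta * grad(V(S, w)).
    """
    # Evaluate V(S', w) outside the tape so no gradient flows through it.
    state_value_, _ = actor_critic(state_)
    state_value_ = tf.squeeze(state_value_)

    with tf.GradientTape() as tape:
        state_value, _ = actor_critic(state)
        state_value = tf.squeeze(state_value)

        # TD error. tf.stop_gradient is redundant here (state_value_ was
        # already computed outside the tape) but makes the intent explicit.
        delta = reward + gamma * tf.stop_gradient(state_value_) * (1 - int(done)) - state_value

        # Minimizing delta^2 w.r.t. w gives a gradient proportional to
        # -delta * grad(V(S, w)), i.e. the semi-gradient update above.
        critic_loss = delta ** 2

    grads = tape.gradient(critic_loss, actor_critic.trainable_variables)
    optimizer.apply_gradients(zip(grads, actor_critic.trainable_variables))
```

If I'm reading things right, computing state_value_ inside the tape without a stop_gradient would instead make the loss depend on V(S', w) and add a grad(V(S', w)) term to the update, which is what prompted this question.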
Please let me know if there are any gaps in my understanding!