The same problem!
The same problem :)
I think you may need to do this in `calculate_actor_loss`:

```python
with torch.no_grad():
    qf1_pi = self.critic_local(state_batch)
    qf2_pi = self.critic_local_2(state_batch)
```

Because we already backpropagate the loss for `critic_local` and `critic_local_2` in `calculate_critic_losses`, `policy_loss` will raise an exception here otherwise. Is that right?
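For context, here is a minimal sketch of what `calculate_actor_loss` might look like with that change applied. The surrounding names (`self.actor_local`, `self.alpha`) and the discrete-SAC loss form are assumptions based on common SAC-Discrete implementations, not taken from this repo:

```python
import torch

def calculate_actor_loss(self, state_batch):
    # Actor outputs a probability distribution over discrete actions.
    action_probs = self.actor_local(state_batch)          # assumed actor attribute
    log_action_probs = torch.log(action_probs + 1e-8)    # small epsilon avoids log(0)

    # Evaluate both critics without building a graph: their loss was already
    # backpropagated in calculate_critic_losses, so backpropagating through
    # them again would hit the freed-graph / double-backward error.
    with torch.no_grad():
        qf1_pi = self.critic_local(state_batch)
        qf2_pi = self.critic_local_2(state_batch)
        min_qf_pi = torch.min(qf1_pi, qf2_pi)

    # Discrete-SAC policy objective: expectation over the action distribution.
    inside_term = self.alpha * log_action_probs - min_qf_pi
    policy_loss = (action_probs * inside_term).sum(dim=1).mean()
    return policy_loss
```

Detaching the critic outputs still lets gradients reach the actor through `action_probs`, which is all the policy update needs.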
Hi, I'm trying to run SAC Discrete and I keep getting the following error. Any thoughts?