Open zhan0903 opened 4 years ago
@zhan0903 thanks for trying it out! So there could be 2 reasons for this:
- `torch.no_grad()` wrappers where needed

Do you have a benchmarking setup to test these reasons?

Thanks!
Kashif
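For anyone following along, here is a minimal sketch of what the `torch.no_grad()` suggestion means in a SAC-style update. The networks and shapes below are hypothetical stand-ins, not the repo's actual models; the point is only that the Bellman target should be computed without building an autograd graph.

```python
import torch
import torch.nn as nn

# Hypothetical tiny critics standing in for the real SAC Q-networks.
critic = nn.Linear(4, 1)
target_critic = nn.Linear(4, 1)

obs = torch.randn(8, 4)
rewards = torch.randn(8, 1)
gamma = 0.99

# Without no_grad, the target computation would build an autograd graph
# that is never backpropagated through, wasting time and memory on every update.
with torch.no_grad():
    target_q = rewards + gamma * target_critic(obs)

# The target carries no gradient history, so the critic loss only
# backpropagates through the online network.
loss = ((critic(obs) - target_q) ** 2).mean()
loss.backward()

print(target_q.requires_grad)  # False: no graph was built for the target
```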
@kashif thanks for your response. The TF version does not use the GPU either. I will try the `torch.no_grad()` wrappers. I am testing your code in my experiments.
Thanks,
Han
thank you!
@kashif Hi, I used `torch.no_grad()` in the backpropagation process for SAC, but it didn't improve the speed. The TF version doesn't use the GPU, yet it is still faster than the PyTorch GPU version (the TF SAC takes 7000 seconds, while the PyTorch GPU version takes around 15000 seconds).
I've observed the same thing in the official Spinning Up PyTorch SAC code. For whatever reason, it's just slower, even when you're being very careful to only calculate quantities that are absolutely necessary. I haven't figured out why, yet! Hopefully will crack this eventually.
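To narrow down where the time goes, one option is to benchmark the update step in isolation, away from the environment loop. This is a hypothetical harness, not part of the repo: the network shape, batch size, and learning rate below are placeholders chosen only to make the measurement self-contained.

```python
import time
import torch
import torch.nn as nn

# Toy MLP standing in for a SAC actor/critic (sizes are illustrative).
net = nn.Sequential(nn.Linear(17, 256), nn.ReLU(), nn.Linear(256, 6))
opt = torch.optim.Adam(net.parameters(), lr=3e-4)
batch = torch.randn(256, 17)

# Warm up once so lazy initialization does not skew the measurement.
net(batch).sum().backward()
opt.step()
opt.zero_grad()

# Time a fixed number of forward/backward/step iterations.
start = time.perf_counter()
for _ in range(100):
    loss = net(batch).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
elapsed = time.perf_counter() - start

print(f"100 update steps took {elapsed:.3f}s")
```

Running the same harness on CPU and GPU (and comparing against an equivalent TF loop) would show whether the slowdown lives in the update itself or elsewhere, e.g. in data transfer or the sampling loop.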
Thanks @jachiam, I'll have a look too, perhaps after the ICML deadline...
Hi, I ran this PyTorch version of SAC on MuJoCo, and it took almost three times as long as the original TF version. Why does this happen? Is there any way to improve the speed?