kashif / firedup

Clone of OpenAI's Spinning Up in PyTorch
MIT License

Much slower than original spinningup tf version #8

Open zhan0903 opened 4 years ago

zhan0903 commented 4 years ago

Hi, I ran this PyTorch version of SAC on MuJoCo, and it took almost three times as long as the original TF version. Why does this happen? Is there any way to improve the speed?

kashif commented 4 years ago

@zhan0903 thanks for trying it out! So there could be 2 reasons for this:

  1. I am calculating gradients of the computational graph unnecessarily (that's where TF is better, since it only runs the part of the graph that is needed), and a solution might be to add torch.no_grad() wrappers where needed (see the sketch after this list)
  2. TF will use the GPU if you are running on a GPU machine, whereas currently I only run on the CPU
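
For (1), here is a minimal self-contained sketch of what I mean, with toy stand-in networks and batch tensors (none of these names come from this repo): wrapping the target computation in torch.no_grad() keeps autograd from building a graph for the target branch at all.

```python
import torch
import torch.nn as nn

# Toy stand-in target Q-networks and batch (hypothetical, for illustration only)
obs_dim, act_dim, batch = 4, 2, 32
q1_targ = nn.Linear(obs_dim + act_dim, 1)
q2_targ = nn.Linear(obs_dim + act_dim, 1)
gamma, alpha = 0.99, 0.2

next_obs = torch.randn(batch, obs_dim)
reward = torch.randn(batch, 1)
done = torch.zeros(batch, 1)
next_act = torch.randn(batch, act_dim)   # stand-in for a' ~ pi(.|s')
next_logp = torch.randn(batch, 1)        # stand-in for log pi(a'|s')

with torch.no_grad():                    # no graph is built inside this block
    sa = torch.cat([next_obs, next_act], dim=-1)
    q_targ = torch.min(q1_targ(sa), q2_targ(sa))
    backup = reward + gamma * (1 - done) * (q_targ - alpha * next_logp)

assert not backup.requires_grad          # safe to use as a fixed regression target
```

For (2), it would mostly be a matter of moving the networks and batches to the GPU, e.g. device = torch.device("cuda" if torch.cuda.is_available() else "cpu") and calling .to(device) on the modules and tensors.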

Do you have a benchmarking setup to test these reasons?
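
Even a rough wall-clock comparison would help. Something like the sketch below would do, where the run_sac stub is hypothetical and would be replaced by either implementation's actual training call:

```python
import time

def run_sac():
    # Hypothetical stand-in: replace with either implementation's training run
    time.sleep(0.1)

start = time.time()
run_sac()
print(f"wall-clock time: {time.time() - start:.1f} s")
```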

Thanks! Kashif

zhan0903 commented 4 years ago

@kashif thanks for your response. The TF version does not use the GPU either. I will try the torch.no_grad() wrappers and test your code in my experiments.

Thanks Han

kashif commented 4 years ago

thank you!

zhan0903 commented 4 years ago

@kashif Hi, I added torch.no_grad() to the backpropagation step of SAC, but it didn't improve the speed. The TF version doesn't use the GPU, yet it is still faster than the PyTorch GPU version (the TF SAC takes about 7000 seconds, while the PyTorch GPU version takes around 15000 seconds).
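
One way to narrow down where the time actually goes would be torch.autograd.profiler. Here is a self-contained sketch with a toy network and optimizer (not this repo's SAC update) just to show the setup:

```python
import torch
import torch.nn as nn

# Toy network and one update step (hypothetical), to demonstrate profiler usage
net = nn.Sequential(nn.Linear(8, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x, y = torch.randn(256, 8), torch.randn(256, 1)

with torch.autograd.profiler.profile(use_cuda=torch.cuda.is_available()) as prof:
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Per-operator breakdown, sorted by total CPU time
print(prof.key_averages().table(sort_by="cpu_time_total"))
```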

jachiam commented 4 years ago

I've observed the same thing in the official Spinning Up PyTorch SAC code. For whatever reason it's just slower, even when you're careful to compute only the quantities that are absolutely necessary. I haven't figured out why yet; hopefully we'll crack this eventually.

kashif commented 4 years ago

Thanks @jachiam, I'll have a look too, perhaps after the ICML deadline...