Open caozhenxiang-kouji opened 6 years ago
What version of pytorch do you use?
The latest one, I just downloaded it from Github several hours ago. I don't know the exact version.
Did you install it with anaconda or build from sources?
No, just python3 setup.py install
There are also warnings like this one; I'm not sure if it matters:
/home/user/RL/pytorch-a3c/test.py:44: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
state.unsqueeze(0), volatile=True), (hx, cx)))
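For anyone hitting the same warning: `volatile=True` was the pre-0.4 way to disable autograd during inference, and `torch.no_grad()` replaces it. A minimal sketch of the migration with a toy tensor (not the actual model call from test.py):

```python
import torch

x = torch.zeros(1, 4)  # toy input; stands in for state.unsqueeze(0)

# PyTorch <= 0.3 style: Variable(x, volatile=True) disabled autograd.
# PyTorch >= 0.4 style: run inference inside torch.no_grad() instead.
with torch.no_grad():
    y = x * 2  # no autograd graph is built for operations in this block

assert y.requires_grad is False
```

In test.py this means wrapping the model forward call in the `with torch.no_grad():` block and dropping the `volatile=True` argument.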
Try to use a stable release.
A lot of changes are coming in the most recent version (the one currently being developed on the master branch).
Which version do you use on this project?
0.3
I noticed that your network is different from the one in DeepMind's original paper. Does it work better?
I'm using the architecture from the OpenAI universe starter agent.
It works similarly, but it was easier to use because they provide all the hyperparameters.
I can't find a 0.3 version. Does 0.2 work as well?
Check the instructions on pytorch.org. There is a 0.3 release there.
I've installed 0.3, but the problem still exists. When I change no-shared to True, the program works just fine. Do you have any idea about this?
Have you tried training with no-shared=True? I tried it last night, but it's really hard to get it to converge.
Has anyone tried it with cuda()? I found the program gets stuck at x = F.elu(self.conv1(inputs))
@dragen1860 it's designed specifically to be efficient on CPU.
For GPU-optimized code, see http://github.com/ikostrikov/pytorch-a2c-ppo-acktr
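For context on why it's CPU-efficient: the training relies on Hogwild-style parameter sharing, where all worker processes update the same tensors in shared memory without locks. A minimal sketch with a toy model (not the repo's actual ActorCritic class):

```python
import torch.nn as nn

# toy stand-in for the repo's actor-critic network
model = nn.Linear(4, 2)

# share_memory() moves the parameters into shared memory so that every
# worker process updates the same underlying tensors lock-free
# (Hogwild style); this is what makes the CPU-only design efficient
model.share_memory()

assert all(p.is_shared() for p in model.parameters())
```

Moving the model to the GPU breaks this pattern, which is why a synchronous variant like A2C/PPO is the better fit for CUDA.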
I have the same problem. Ubuntu 16.04, Anaconda, PyTorch 0.3.1.
Has anyone solved this issue?
@dragen1860 I was having the same problem. Adding these lines in the main script solved my problem.
if __name__ == "__main__":
    mp.set_start_method("spawn")
    os.environ["OMP_NUM_THREADS"] = "1"  # make sure numpy uses only one thread for each process
    os.environ["CUDA_VISIBLE_DEVICES"] = ""  # make sure not to use the GPU
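A self-contained sketch of why this helps, using the standard library's multiprocessing (which torch.multiprocessing mirrors) and made-up toy workers rather than the repo's training loop:

```python
import multiprocessing as mp  # torch.multiprocessing has the same API
import os

def worker(rank, q):
    q.put(rank * rank)  # toy per-process work, for illustration only

if __name__ == "__main__":
    # set these before starting any worker process
    os.environ["OMP_NUM_THREADS"] = "1"      # one BLAS/OpenMP thread per process
    os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide GPUs from every process
    # "spawn" starts clean child processes instead of fork()ing the parent,
    # avoiding the hangs seen when CUDA/OpenMP state is inherited across fork
    ctx = mp.get_context("spawn")
    q = ctx.SimpleQueue()
    procs = [ctx.Process(target=worker, args=(r, q)) for r in range(2)]
    for p in procs:
        p.start()
    results = sorted(q.get() for _ in procs)
    for p in procs:
        p.join()
    assert results == [0, 1]
```

Note that everything must sit under the `if __name__ == "__main__":` guard, since spawned children re-import the main module.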
@dragen1860 I was having the same problem. Adding these lines in the main script solved my problem.
if __name__ == "__main__":
    mp.set_start_method("spawn")
    os.environ["OMP_NUM_THREADS"] = "1"  # make sure numpy uses only one thread for each process
    os.environ["CUDA_VISIBLE_DEVICES"] = ""  # make sure not to use the GPU
Two years later, your answer still helps! Thanks!
Can I run this code in Google Colab?
@dragen1860 I was having the same problem. Adding these lines in the main script solved my problem.
if __name__ == "__main__":
    mp.set_start_method("spawn")
    os.environ["OMP_NUM_THREADS"] = "1"  # make sure numpy uses only one thread for each process
    os.environ["CUDA_VISIBLE_DEVICES"] = ""  # make sure not to use the GPU
Thank you so much! For me, this only worked once I uninstalled the CUDA driver.
After value, logit, (hx, cx) = model((Variable(state.unsqueeze(0)), (hx, cx))) in train.py, the program hangs. Do you have any idea?