Closed yhcao6 closed 6 years ago
How are you setting it?
A good example of doing so is the command-line sample in the README:
python main.py --env PongDeterministic-v4 --workers 32 --gpu-ids 0 1 2 3 --amsgrad True
Still have the problem, don't know why. My torch version is 0.4.0a0.post2.
If I set the gpu_id to be the same for the train and test processes, then it is okay.
Is your repo current with the repo on GitHub? What are you running this on? I have never tried running on version 0.4.0a0.post2 yet. It may be a bug in pytorch.
Yeah, I just git cloned this repo. I think so too, maybe it is a pytorch bug.
Yeah, try a more stable version like 0.3 and see if the problem still persists.
When I run the program on multiple GPUs, that is, with gpu_ids set to [0, 1, 2, 3], it reports the error "Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one".
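For context, that error usually means a single forward/backward pass mixed tensors from two devices, e.g. the model's weights live on one GPU while the inputs (or the shared test model) live on another. A minimal sketch of the usual fix, pinning each worker's model and inputs to one device (the `worker` function and `model_cls` argument here are illustrative, not this repo's actual code):

```python
import torch
import torch.nn as nn

def worker(gpu_id, model_cls):
    # Pin this worker's model AND its inputs to a single device.
    # Creating the input on a different GPU than the weights is what
    # triggers the "tensors are located on different GPUs" error.
    device = torch.device(f"cuda:{gpu_id}" if torch.cuda.is_available() else "cpu")
    model = model_cls().to(device)
    x = torch.randn(1, 4, device=device)  # input created on the same device
    return model(x)
```

Each A3C process can then be handed its own gpu_id; the error only appears when tensors from two different ids meet in one op.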