yiminglin-ai closed this issue 6 years ago.
Hi,
Thanks for pointing this out. Forgot to port that part of the code :) It should be fixed now. Thanks a lot again.
You're welcome. I also found that if you've got multiple GPUs and set gpu_id to 1, the code will still take around 500 MB on GPU 0. I suppose this comes from the line
net.load_state_dict
which first loads the CUDA tensors to GPU 0 and then moves them to GPU 1. I tried several ways to free the 500 MB, but none worked. It may be a bug in PyTorch itself, but if you happen to know a solution, please let me know :).
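Not a definitive fix, but for reference, here is a minimal sketch of how the checkpoint could be loaded directly onto the target GPU via torch.load's map_location argument, so that the deserialised tensors never land on GPU 0. The checkpoint path and the `net` variable are placeholders, not taken from the repo:

```python
import torch

gpu_id = 1  # target GPU, as in the discussion above

# map_location receives each deserialised (CPU) storage plus its original
# location string and decides where it should live; here everything goes
# to `gpu_id`, so GPU 0 is never touched.
state_dict = torch.load("checkpoint.pth",  # placeholder path
                        map_location=lambda storage, loc: storage.cuda(gpu_id))

net.load_state_dict(state_dict)  # `net` stands in for the repo's model
```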
I had the same problem of the code allocating 500 MB on GPU 0. The workaround I have found for now is the following:
import os
import torch
os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
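To spell out the ordering that makes this work, here is a small self-contained sketch (the network is just a stand-in module, not the repo's model). CUDA_VISIBLE_DEVICES is read when the CUDA runtime is initialised, so it has to be set before the first CUDA call in the process; after that, the selected physical GPU shows up as cuda:0 inside the process, and nothing is allocated on the real GPU 0:

```python
import os
import torch
import torch.nn as nn

gpu_id = 1  # physical GPU to use; assumed to come from the training settings

# Must be set before the first CUDA call in this process
# (importing torch is fine, but no .cuda()/.to(device) before this point).
os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Stand-in network; inside this process "cuda:0" is physical GPU `gpu_id`.
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
net.to(device)
```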
Hi, I set the variable vis_net to 1 and ran train_online.py, but the following errors came out:
I think the following line should be put before the visualisation:
net.to(device) # PyTorch 0.4.0 style
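A hypothetical sketch of that ordering, with plot_network_weights standing in for whatever train_online.py actually calls when vis_net is 1:

```python
net.to(device)  # move the model to the target device first (PyTorch 0.4.0 style)

if vis_net == 1:
    # `plot_network_weights` is a placeholder name for the repo's visualisation
    # routine; the point is only that it runs after net.to(device).
    plot_network_weights(net)
```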
Nice work!