yinboc / DGP

Rethinking Knowledge Graph Propagation for Zero-Shot Learning, in CVPR 2019
MIT License
320 stars · 57 forks

About the GPU memory #16

Closed Hanzy1996 closed 4 years ago

Hanzy1996 commented 4 years ago

How much GPU memory do you use when training the dense GCN?

I am using a GTX 2080 Ti, which has 11 GB of memory, but training always raises an error: `RuntimeError: CUDA out of memory`.

Hanzy1996 commented 4 years ago

At the same time, I am running another job on GPU 0. I found that setting `CUDA_VISIBLE_DEVICES=1` has no effect when running `train_gcn_dense`; the code still runs on GPU 0. I have to set `--gpu` manually in the args.

yinboc commented 4 years ago

As I recall, it should work on a single GPU such as a 1080 Ti.

"--gpu" resets "CUDA_VISIBLE_DEVICES", therefore you need to use "--gpu" to choose the GPU. You can check the code for details.

Hanzy1996 commented 4 years ago

Thanks for your reply. Another job was occupying a large amount of memory on the 2080 Ti, and I failed to select the GPU with `CUDA_VISIBLE_DEVICES=1 python xxx.py`. I found that your `set_gpu` function sets `CUDA_VISIBLE_DEVICES` again, so it's better to just use `--gpu` as in your original code.

yinboc commented 4 years ago

Yes. Happy to hear you solved the issue.