xy-guo / LIGA-Stereo

Code for LIGA-Stereo Detector, ICCV'21
Apache License 2.0

GPU memory usage #3

Open revisitq opened 3 years ago

revisitq commented 3 years ago

The GPU memory usage reported in your paper is about 10 GB, but on my machine it is about 18 GB when I train the model. Is there some setting in this repo that differs from the paper?

revisitq commented 3 years ago

The validation memory usage is about 7 GB, and SECOND is not loaded during validation.

xy-guo commented 3 years ago

Could you try running distributed training using only 1 GPU? The cause might be that the model is loaded onto a single GPU multiple times.

xy-guo commented 3 years ago

Make sure you run the code using the script given in the README.

revisitq commented 3 years ago

> Make sure you run the code using the script given in the README.

Thanks for your reply. I have tried training with only 1 GPU using the command `CUDA_VISIBLE_DEVICES='1' ./scripts/dist_train.sh 1 dev configs/stereo/kitti_models/liga.3d-and-bev.yaml`, and the GPU memory usage is still the same. Here is the log: log_train.txt

xy-guo commented 3 years ago

If you train on multiple GPUs, is the GPU memory usage roughly the same for every GPU? My model was trained on a TITAN X, which has only 12 GB of memory. Maybe you can print out the real GPU memory consumption using PyTorch APIs; sometimes PyTorch will allocate more GPU memory than needed.
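For reference, a minimal sketch of that check using PyTorch's built-in memory statistics (exactly where to call it, e.g. once per epoch inside the training loop, is up to you):

```python
import torch

# Memory actually allocated for live tensors:
allocated = torch.cuda.memory_allocated() / 1024 ** 3
# Memory the caching allocator has reserved from the driver; nvidia-smi
# reports this (plus CUDA context overhead), so it is usually larger:
reserved = torch.cuda.memory_reserved() / 1024 ** 3
print(f"allocated: {allocated:.2f} GB, reserved: {reserved:.2f} GB")

# Full per-device breakdown:
print(torch.cuda.memory_summary())
```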

revisitq commented 3 years ago

> If you train on multiple GPUs, is the GPU memory usage roughly the same for every GPU? My model was trained on a TITAN X, which has only 12 GB of memory. Maybe you can print out the real GPU memory consumption using PyTorch APIs; sometimes PyTorch will allocate more GPU memory than needed.

Actually the memory allocated is about 10 GB, but I don't know why the GPU memory usage is about 18 GB.

revisitq commented 3 years ago

> If you train on multiple GPUs, is the GPU memory usage roughly the same for every GPU? My model was trained on a TITAN X, which has only 12 GB of memory. Maybe you can print out the real GPU memory consumption using PyTorch APIs; sometimes PyTorch will allocate more GPU memory than needed.

> Actually the memory allocated is about 10 GB, but I don't know why the GPU memory usage is about 18 GB.

When training on multiple GPUs, the GPU memory usage is the same for every GPU.

xy-guo commented 3 years ago

Maybe PyTorch pre-allocates GPU memory for future use, and this cache is not freed automatically. Potential workarounds include explicitly limiting GPU memory usage or calling torch.cuda.empty_cache() to free the cache.
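A minimal sketch of both workarounds, assuming PyTorch >= 1.8 (which added torch.cuda.set_per_process_memory_fraction); the 0.6 fraction is only an illustrative value:

```python
import torch

# Cap this process at ~60% of device 0's total memory (illustrative value);
# allocations beyond the cap raise an out-of-memory error instead of growing.
torch.cuda.set_per_process_memory_fraction(0.6, device=0)

# ... training step: forward / backward / optimizer.step() ...

# Release cached blocks that hold no live tensors back to the driver.
# Note: this does not free memory still referenced by tensors.
torch.cuda.empty_cache()
```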

revisitq commented 3 years ago

> torch.cuda.empty_cache()

Thanks for the help. I tried torch.cuda.empty_cache(), but it didn't work. I'm looking for another solution.

zcspike commented 1 year ago

Hi, may I ask: has the GPU OOM problem been solved?