revisitq opened this issue 3 years ago
The validation memory usage is about 7 GB, and the SECOND model is not loaded during validation.
Could you try running distributed training using only 1 GPU? The reason might be that the model is loaded onto a single GPU multiple times.
Make sure you run the code using the script given in the README.
Thanks for your reply. I have tried training with only 1 GPU using the command CUDA_VISIBLE_DEVICES='1' ./scripts/dist_train.sh 1 dev configs/stereo/kitti_models/liga.3d-and-bev.yaml, but the GPU memory usage is still the same. Here is the log:
log_train.txt
If you train on multiple GPUs, is the GPU memory usage roughly the same for every GPU? My model was trained on a TITAN X, which has only 12 GB of memory. Maybe you can print out the real GPU memory consumption using the PyTorch APIs; sometimes PyTorch allocates more GPU memory than needed.
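For reference, a minimal sketch (not from this repo; the helper name is only illustrative) of the PyTorch counters that separate the memory actually held by tensors from the memory reserved by the caching allocator, which is roughly what nvidia-smi reports:

```python
import torch

def log_gpu_memory(tag: str, device: int = 0) -> None:
    """Print tensor memory vs. memory reserved by PyTorch's caching allocator."""
    gib = 1024 ** 3
    allocated = torch.cuda.memory_allocated(device) / gib   # memory held by live tensors right now
    peak = torch.cuda.max_memory_allocated(device) / gib    # peak of live-tensor memory so far
    reserved = torch.cuda.memory_reserved(device) / gib     # memory cached by the allocator
    print(f"[{tag}] allocated={allocated:.2f} GB  peak={peak:.2f} GB  reserved={reserved:.2f} GB")

# e.g. call it after a forward/backward step:
# log_gpu_memory("after_backward")
```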
Actually, the memory allocated is about 10 GB, but I don't know why the GPU memory usage is about 18 GB.
When training on multiple GPUs, the GPU memory usage is the same for every GPU.
PyTorch may pre-allocate GPU memory for future use, and this cache is not freed automatically. Potential solutions include explicitly limiting GPU memory usage or calling torch.cuda.empty_cache() to free the cache.
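A rough sketch of both suggestions, using standard PyTorch calls (the fraction value below is only an illustrative assumption):

```python
import torch

# Cap how much of GPU 0 this process may allocate (available since PyTorch 1.8).
torch.cuda.set_per_process_memory_fraction(0.6, device=0)

# Release cached blocks held by the allocator; tensors still in use are unaffected,
# but nvidia-smi should then report less memory as occupied.
torch.cuda.empty_cache()
```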
Thanks for the help. I tried torch.cuda.empty_cache(), but it is not working. I am looking for another solution.
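For anyone hitting the same issue, one more knob worth trying (assuming a reasonably recent PyTorch) is the caching-allocator configuration; note it only reduces fragmentation-driven over-reservation, not the memory the model genuinely needs. A hedged sketch, with an example split size:

```python
import os

# Must be set before the first CUDA allocation, e.g. at the very top of the training script.
# The 128 MB split size is just an example value.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch  # imported after setting the allocator config

x = torch.randn(1024, 1024, device="cuda")
print(f"reserved: {torch.cuda.memory_reserved() / 1024 ** 2:.1f} MiB")
```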
Hello, may I ask whether the GPU OOM problem has been solved?
The GPU memory usage reported in your paper is about 10 GB, but on my machine it is about 18 GB when I train the model. Is there any setting in this repo that differs from the paper?