longcw / faster_rcnn_pytorch

Faster RCNN with PyTorch

Is 6G GPU memory enough for training? #42

Open squirrel233 opened 7 years ago

squirrel233 commented 7 years ago

I have 2 GPUs on my PC, each with 6 GB of memory. I can train rbg's py-faster-rcnn project on one of them, but when I run /faster_rcnn_pytorch/train.py from this project, it suddenly runs out of memory.

I referred to the FFRCNN project, which says:

For training the end-to-end version of Faster R-CNN with VGG16, 3G of GPU memory is sufficient (using CUDNN)

So I'm confused: how much memory do I need to run /faster_rcnn_pytorch/train.py? Or could this run on 2 GPUs in parallel?

Thanks.
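
On the second question, below is a minimal sketch of what two-GPU data parallelism looks like in PyTorch. The import path and constructor arguments are assumed from this repo's train.py; note that this codebase's forward pass takes a single image plus its numpy ground-truth boxes rather than a batched tensor, so the data loading would likely need rework before nn.DataParallel actually splits work across both 6 GB cards.

```python
import torch.nn as nn
from faster_rcnn.faster_rcnn import FasterRCNN  # import path assumed from this repo's train.py

# Hypothetical sketch: wrap the detector so forward passes are split across
# cuda:0 and cuda:1. nn.DataParallel only helps if forward() accepts batched
# tensors; this repo's FasterRCNN processes one image per call, so the
# training loop would need changes for this to take effect.
net = FasterRCNN(classes=('__background__', 'car', 'person'))  # class list is illustrative
net = nn.DataParallel(net, device_ids=[0, 1]).cuda()
```

Also worth noting: data parallelism does not reduce per-image activation memory, so if a single image already overflows one 6 GB card, DataParallel alone will not fix the out-of-memory error.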

bywbilly commented 7 years ago

I think 6 GB of memory is enough. When I trained by running train.py, it used about 4 GB of memory.
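
For anyone trying to reproduce that ~4 GB figure, here is a quick sketch for checking actual usage from inside the training loop. The torch.cuda counters below exist in PyTorch >= 0.4; on the older versions this repo was written for, watch nvidia-smi instead.

```python
import torch

def report_gpu_memory(device=0):
    # Current and peak tensor allocations on the given GPU, in megabytes.
    mb = 1024 ** 2
    print('allocated: %.1f MB' % (torch.cuda.memory_allocated(device) / mb))
    print('peak:      %.1f MB' % (torch.cuda.max_memory_allocated(device) / mb))

# e.g. call report_gpu_memory() every few hundred steps inside train.py
```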

jilner commented 6 years ago

> I think 6 GB of memory is enough. When I trained by running train.py, it used about 4 GB of memory.

Hello, I ran train.py with an 8 GB GPU, but it ran out of memory. I am using torch 0.4.1. What should I do to get training running, and which parameters can I tune? Thank you very much.
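
Two things worth checking on PyTorch 0.4.1 are sketched below. The config keys follow py-faster-rcnn conventions and are assumed to match this repo's faster_rcnn/fast_rcnn/config.py; the exact names and defaults may differ, so treat them as illustrative rather than verified.

```python
from faster_rcnn.fast_rcnn.config import cfg  # path assumed from this repo's layout

# 1) Shrink the inputs: smaller images and fewer sampled ROIs cut
#    activation memory roughly in proportion.
cfg.TRAIN.SCALES = (400,)       # shorter image side; commonly (600,) by default
cfg.TRAIN.MAX_SIZE = 800        # longer-side cap; commonly 1000 by default
cfg.TRAIN.RPN_BATCHSIZE = 128   # anchors sampled for the RPN loss; commonly 256
cfg.TRAIN.BATCH_SIZE = 64       # ROIs per image for the detection head; commonly 128

# 2) Pre-0.4 idioms can leak graphs on 0.4.1: `volatile=True` is ignored,
#    and accumulating a loss *tensor* keeps the whole autograd graph alive.
#    train_loss += loss.item()          # store a float, not the tensor
#    with torch.no_grad():              # wrap any evaluation forward passes
#        scores, boxes = net(im_data)   # hypothetical call, for illustration
```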