WangYueFt / dgcnn

MIT License
1.62k stars, 420 forks

Cuda memory #44

Open shersoni610 opened 4 years ago

shersoni610 commented 4 years ago

Hello,

I am trying to run pytorch example on 6GB Cuda card and I get the following message:

RuntimeError: CUDA out of memory. Tried to allocate 640.00 MiB (GPU 0; 5.94 GiB total capacity; 4.54 GiB already allocated; 415.44 MiB free; 143.32 MiB cached)

How can we run the examples on 6GB cards?

Thanks

zxczrx123 commented 4 years ago

@shersoni610 I had the same problem. My environment: Win10 (I changed some code so it runs on Windows), one 1080 Ti, Anaconda Python 3.6, CUDA 9.0, cuDNN 7.5, PyTorch 1.1.

I solved it by setting num_workers=0 in the DataLoader() calls in pytorch/main.py. I also tried a smaller training batch size, but in the end I kept 32 and it works.

shersoni610 commented 4 years ago

Hello,

I tried changing num_workers = 0, but I don't know how to change the batch size. The code still fails on a 6GB Titan card.

Thanks


nihil39 commented 4 years ago

Hi, @shersoni610 @zxczrx123

I'm having the same problem on a meager GT 1030: RuntimeError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 1.95 GiB total capacity; 947.23 MiB already allocated; 24.25 MiB free; 1.02 GiB reserved in total by PyTorch)

Changing the number of workers does not help. By the way, what's the point of setting it to zero?

Any help with changing the batch size?

Thank you

Racketycomic commented 3 years ago

Lowering the default value of the test_batch_size argument in main.py from 16 to 8 worked for me.