ytan101 closed this issue 2 years ago
Hi! I met the same CUDA out-of-memory problem with 4 Tesla V100s (16 GB): RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 1; 15.75 GiB total capacity; 4.74 GiB already allocated; 26.62 MiB free; 4.82 GiB reserved in total by PyTorch)
Could you please provide some suggestions on how to solve it? I have already tried reducing samples per GPU to 1.
@carry-all-coder Hi, I met the same CUDA out-of-memory problem. Have you solved it? Could you provide some suggestions? Thanks!
Hi, I tried to train the model on nuscenes-mini with 2 Tesla V100s, but I still get an out-of-memory error (referencing issue 34, where 16 GB is said to be enough). Is there any specific configuration I can tweak to work around this?
Thank you very much!
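For anyone else hitting this: a few memory-saving knobs that often help in practice. This is a hedged sketch, not the repo's actual config — the key names (`samples_per_gpu`, `fp16`, `with_cp`) assume an mmdetection/mmdet3d-style config system, so adjust them to whatever this repo actually uses:

```python
# Hypothetical mmdetection-style config fragment (a sketch, assuming
# mmdet/mmdet3d conventions; check the repo's own config files for the
# real key names).

data = dict(
    samples_per_gpu=1,   # smallest possible batch per GPU
    workers_per_gpu=2,   # dataloader workers don't use GPU memory, keep > 0
)

# Mixed-precision training roughly halves activation memory.
fp16 = dict(loss_scale=512.0)

# If the backbone supports it, activation checkpointing trades extra
# compute for much lower activation memory:
# model = dict(backbone=dict(with_cp=True))
```

Also note the error shows only 26.62 MiB free while 4.82 GiB is reserved by PyTorch, which can indicate fragmentation of the caching allocator; on recent PyTorch versions, setting the environment variable `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` before launching sometimes mitigates that.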