LambdaLabsML / examples

Deep Learning Examples

3090 and out of memory #48

Open dongyangli-del opened 1 year ago

dongyangli-del commented 1 year ago

RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 23.68 GiB total capacity; 20.57 GiB already allocated; 94.44 MiB free; 20.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

This is driving me crazy! Can you help me?

By the way, the batch size and the number of workers are both set to 1!
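(For reference, the error message itself points at allocator fragmentation. A minimal sketch of trying the suggested `PYTORCH_CUDA_ALLOC_CONF` option; the `128` value is just an example, not a tuned recommendation, and `train.py` is a placeholder for whatever script is being launched.)

```python
# Sketch: set the allocator option the error message suggests.
# Must be set before any CUDA allocation happens, so set it
# before importing torch (or export it in the shell instead).
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # example value

import torch  # import torch only after the env var is set

# Equivalent shell form:
#   PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python train.py
```

This only helps when reserved memory is much larger than allocated memory (i.e., fragmentation); if the model simply needs more than 24 GB, no allocator setting will fix it.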

dongyangli-del commented 1 year ago

Well, my system RAM is 48 GB, which is not exactly small.

XBD1998 commented 1 year ago

Have you solved the problem yet? I had the same problem.

AllenEdgarPoe commented 1 year ago

In my experience, you need at least 30 GB of VRAM. I had the same issue even after reducing the batch size and gradient settings to the minimum, so I switched to an A6000, and the run consumes more than 30 GB of VRAM.
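(Before switching hardware, two standard memory reducers worth trying are mixed precision and gradient accumulation. A minimal sketch; `model`, `optimizer`, `criterion`, and `loader` are hypothetical placeholders, not names from this repo.)

```python
# Sketch: mixed precision + gradient accumulation to cut VRAM use.
# `model`, `optimizer`, `criterion`, `loader` are placeholders.
import torch

scaler = torch.cuda.amp.GradScaler()  # loss scaling for fp16
accum_steps = 4  # effective batch = loader batch size * accum_steps

optimizer.zero_grad(set_to_none=True)
for step, (x, y) in enumerate(loader):
    x, y = x.cuda(), y.cuda()
    with torch.cuda.amp.autocast():  # half-precision activations
        loss = criterion(model(x), y) / accum_steps  # scale for accumulation
    scaler.scale(loss).backward()  # gradients accumulate across steps
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad(set_to_none=True)  # set_to_none frees grad memory
```

Mixed precision roughly halves activation memory, and accumulation lets a small per-step batch stand in for a larger effective one, though neither helps if the model weights and optimizer state alone exceed the card.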

RobinHan24 commented 1 year ago

How can I use multiple GPUs? I have 8 A10s, each with 24 GB of VRAM. Thanks a lot.
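(Not an answer from the maintainers, but a minimal DistributedDataParallel sketch for spreading work across the 8 A10s, launched with `torchrun --nproc_per_node=8 train_ddp.py`. `MyModel` and `my_dataset` are hypothetical placeholders. Note that plain DDP replicates the full model on every GPU, so it raises throughput but does not lower per-GPU memory; for that you'd need something like FSDP or DeepSpeed.)

```python
# Sketch: minimal multi-GPU training with DistributedDataParallel.
# Launch with: torchrun --nproc_per_node=8 train_ddp.py
# `MyModel` and `my_dataset` are hypothetical placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

dist.init_process_group("nccl")             # one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
torch.cuda.set_device(local_rank)

model = MyModel().cuda(local_rank)
model = DDP(model, device_ids=[local_rank])  # sync grads across ranks

sampler = DistributedSampler(my_dataset)     # shards data across ranks
loader = DataLoader(my_dataset, batch_size=1, sampler=sampler)
```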