Open wjd1009 opened 6 months ago
Me too. I ran this on an RTX 3090 with 24 GB and hit the same problem. After changing the batch size to 1 it finally trained successfully. So I have a question: is this really a lightweight model?
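For anyone who needs the same workaround: since LightM-UNet trains through the nnU-Net pipeline, the batch size typically comes from the plans file rather than a command-line flag. Below is a minimal sketch, assuming an nnU-Net-v2-style nnUNetPlans.json; the dataset path and exact JSON layout are assumptions and may differ in this repo.

```python
import json
from pathlib import Path

# Hypothetical path - adjust to your own nnUNet_preprocessed dataset folder.
plans_path = Path("nnUNet_preprocessed/Dataset001_Example/nnUNetPlans.json")

plans = json.loads(plans_path.read_text())

# Drop the batch size for the 3d_fullres configuration to fit a 16-24 GB GPU.
# (Assumes the v2-style layout: configurations -> <config name> -> batch_size.)
plans["configurations"]["3d_fullres"]["batch_size"] = 1

plans_path.write_text(json.dumps(plans, indent=2))
print("batch_size set to", plans["configurations"]["3d_fullres"]["batch_size"])
```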
Yes, I am also confused by this. My dataset is small, only 300 images, but running on a 16 GB card I still get a CUDA out-of-memory error.
I tried training on a 3080 and then a 3090. Both times I ran into memory issues. When I decrease the batch size from 2 to 1, I get the following error:
RuntimeError: The size of tensor a (4) must match the size of tensor b (5) at non-singleton dimension 2
Has anyone experienced the same?
32.2 GB of memory is all it needs!
it's not light
Hello, I am using my own dataset on a GPU with 16 GB of memory. If I set the batch size to 2, LightM-UNet runs out of CUDA memory. How should I deal with this?

2024-04-25 09:17:44.026392: unpacking dataset...
2024-04-25 09:17:44.497184: unpacking done...
2024-04-25 09:17:44.497184: do_dummy_2d_data_aug: False
2024-04-25 09:17:44.528434: Unable to plot network architecture:
2024-04-25 09:17:44.528434: No module named 'hiddenlayer'
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.00 GiB. GPU 0 has a total capacity of 16.00 GiB of which 3.72 GiB is free. Of the allocated memory 10.69 GiB is allocated by PyTorch, and 10.70 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
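Besides dropping the batch size to 1, the error message itself points at allocator fragmentation. Here is a minimal sketch of acting on that hint, assuming you launch training from a Python entry point; the 128 MB value is just a starting point to tune, not a recommendation from the LightM-UNet authors.

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before the first CUDA allocation,
# so set it before torch is imported (or export it in the shell instead).
# max_split_size_mb limits how large cached blocks can be split, which can
# reduce the fragmentation the OOM message complains about.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch  # noqa: E402

print(torch.cuda.get_device_name(0))
print("alloc conf:", os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

If you start training with the stock command instead, the equivalent is `export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` in the shell before launching it.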