hljWmh opened 8 months ago
We apologize that we have not been able to reproduce LightM-Unet on Windows for the time being. Could you please provide the error logs for the issue you are facing? This will help us pinpoint the problem.
Hello, with my own dataset on a 12 GB GPU and 16 GB of RAM, U-Mamba trains without a CUDA OOM error, but LightM-Unet runs out of CUDA memory. How should I handle this? Changing batch_size to 2 still OOMs.

```
Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if i == u.shape[2] - 1:
2024-03-31 23:22:30.047454: Unable to plot network architecture:
2024-03-31 23:22:30.069439: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 12.00 GiB total capacity; 4.17 GiB already allocated; 6.76 GiB free; 4.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
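For anyone hitting the same fragmentation hint: below is a minimal sketch (not from this thread) of setting the allocator option the error message points to, before CUDA is first used. The value 128 is an illustrative assumption to tune for your GPU.

```python
# Hedged sketch: configure the CUDA caching allocator before the first
# CUDA allocation, as the OOM message suggests. max_split_size_mb=128
# is only an assumed starting point.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # import after setting the variable so the allocator sees it

print(torch.cuda.is_available())
```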
Hello, I'm using my own dataset on a graphics card with 16 GB of memory. Even after changing the batch size to 2, LightM-Unet still hits CUDA OOM. How do I deal with this?

```
2024-04-25 09:17:44.026392: unpacking dataset...
2024-04-25 09:17:44.497184: unpacking done...
2024-04-25 09:17:44.497184: do_dummy_2d_data_aug: False
2024-04-25 09:17:44.528434: Unable to plot network architecture:
2024-04-25 09:17:44.528434: No module named 'hiddenlayer'
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.00 GiB. GPU 0 has a total capacity of 16.00 GiB of which 3.72 GiB is free. Of the allocated memory 10.69 GiB is allocated by PyTorch, and 10.70 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
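If reducing the batch size alone is not enough, shrinking the patch size in the preprocessed plans is another common nnU-Net-style mitigation. A minimal sketch, assuming the nnU-Net v2 plans layout this repo inherits; the dataset path, configuration name, and patch size below are illustrative assumptions, not values from this thread:

```python
# Hedged sketch: lower batch_size and patch_size in nnUNetPlans.json so the
# model fits on a 12-16 GB GPU. Path and values are hypothetical examples.
import json

plans_path = "nnUNet_preprocessed/Dataset001_Example/nnUNetPlans.json"  # hypothetical
with open(plans_path) as f:
    plans = json.load(f)

cfg = plans["configurations"]["3d_fullres"]
cfg["batch_size"] = 1                # smallest possible batch
cfg["patch_size"] = [64, 128, 128]   # smaller patch: less activation memory

with open(plans_path, "w") as f:
    json.dump(plans, f, indent=2)
```

Re-run training afterwards so the trainer picks up the modified plans.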
Hello, I'm using the BraTS2021 dataset on a graphics card with 16 GB of memory and hit the same problem. Have you dealt with it? Thanks!
I ran into this problem on my Windows machine as well.