szc19990412 / TransMIL

TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification
363 stars · 73 forks

CUDA out of memory #1

Closed zxcvbml closed 2 years ago

zxcvbml commented 3 years ago

Thank you for the code, it helps me a lot. Why does CUDA still run out of memory when I set batch_size=1, num_workers=0, and gpus=[0,1,2,3] (4 GPUs)? I have tried many times, even with 6 GPUs, and I have set multi_gpu_mode to ddp_spawn/ddp/dp.

If you have any advice, please tell me. Thank you very much.
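A likely explanation (not confirmed by the repo authors): the memory cost here is driven by the number of patches per slide (the bag size), not the batch size, and DDP/dp replicates the full model and bag on every GPU, so adding GPUs does not shrink any single replica. The sketch below uses assumed numbers (30,000 patches, 8 heads, fp32) to show how a single full self-attention map alone can exceed a typical GPU; TransMIL uses Nyström attention precisely to avoid this quadratic cost, but intermediate buffers still grow with bag size.

```python
# Back-of-envelope memory estimate for one full self-attention map.
# All values are illustrative assumptions, not taken from the TransMIL code.

def attention_matrix_bytes(n_patches: int, n_heads: int = 8,
                           bytes_per_elem: int = 4) -> int:
    """Bytes for an (n_heads, n_patches, n_patches) fp32 attention matrix."""
    return n_heads * n_patches * n_patches * bytes_per_elem

# A single whole-slide image can easily yield ~30,000 patches (assumed bag size).
n = 30_000
gib = attention_matrix_bytes(n) / 1024**3
print(f"{gib:.1f} GiB for one attention map")  # → 26.8 GiB, beyond a typical GPU
```

Since the whole bag must fit on one device, the usual mitigations are reducing the patch count per slide, mixed precision, or gradient checkpointing, rather than adding more GPUs.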

LXYTSOS commented 2 years ago

Same here: no matter how many GPUs are used, I still get an OOM on each GPU.