DearCaat / RRT-MIL

[CVPR 2024] Feature Re-Embedding: Towards Foundation Model-Level Performance in Computational Pathology

run multiple GPUs #3

Closed: sls-peanut closed this issue 8 months ago

sls-peanut commented 8 months ago

Hi, Doctor, how can I set it up to run on multiple GPUs?

DearCaat commented 8 months ago

Good question... In fact, the batch size for WSI classification is always 1, because each bag contains a different number of instances. I have not tried any kind of multi-host or multi-GPU training for WSI classification. Maybe torch.nn.DataParallel is an acceptable solution? I'm not quite sure, sorry; it may take some experimentation.
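For reference, here is a minimal toy sketch (not the RRT-MIL code) of what wrapping a model in torch.nn.DataParallel looks like; ToyMIL is a made-up attention-pooling head used only for illustration. Because the bag batch size is 1 and DataParallel splits along dim 0, the single bag still lands on one GPU, so this alone may not reduce memory.

```python
import torch
import torch.nn as nn

# Toy attention-MIL head, for illustration only (not the RRT-MIL model).
class ToyMIL(nn.Module):
    def __init__(self, feat_dim=1024, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):                       # x: [B, N, D] = [bags, instances, features]
        a = torch.softmax(self.attn(x), dim=1)  # attention weights over instances
        bag = (a * x).sum(dim=1)                # [B, D] pooled bag feature
        return self.head(bag)

model = ToyMIL()
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)              # replicates the model across visible GPUs
model = model.cuda()

bag = torch.randn(1, 10000, 1024).cuda()        # one WSI bag: [batch=1, instances, feat_dim]
logits = model(bag)
print(logits.shape)                             # torch.Size([1, 2])
```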

sls-peanut commented 8 months ago


Okay, thank you for your reply. I can run it on a Titan X with 12 GB of memory, but on a 3070 with 8 GB the memory is not enough:

Error: CUDA out of memory

DearCaat commented 8 months ago

You can also try increasing the value of region_num; more details can be found in this.
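To illustrate the intuition behind this (a standalone sketch, not the repo's implementation): splitting a bag into more regions shrinks the per-region attention score matrix from N x N to roughly (N/region_num) x (N/region_num), which is where the memory saving comes from.

```python
import torch

# Illustrative only: why a larger region_num lowers peak memory.
def regional_self_attention(x, region_num):
    # x: [N, D] instance features of one bag
    outputs = []
    for region in x.chunk(region_num, dim=0):            # region: [~N/region_num, D]
        scores = region @ region.t() / region.shape[-1] ** 0.5
        attn = torch.softmax(scores, dim=-1)              # small (n x n) matrix per region
        outputs.append(attn @ region)
    return torch.cat(outputs, dim=0)                      # [N, D]

x = torch.randn(10000, 512)
out = regional_self_attention(x, region_num=16)           # 16 regions of ~625 instances each
print(out.shape)                                          # torch.Size([10000, 512])
```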

sls-peanut commented 8 months ago


I can run it! Thank you again, future Dr. Tang, or future Yangtze River Scholar, or future academician...