Closed: akidway closed this issue 7 months ago
Thanks for attention!
The paper reports the memory usage and training time per epoch on an RTX 3090 GPU. This is the screenshot:
While each WSI contains tens of thousands of patches, the MIL paradigm typically operates on precomputed patch features (usually 1024- or 512-dimensional vectors) rather than on raw pixels. As a result, MIL methods are generally not very computationally expensive. MHIM performs equally well while bringing large efficiency gains, especially with Transformer-style baselines such as TransMIL. Details can be found in the paper.
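To make the feature-vs-pixel distinction concrete, here is a back-of-envelope memory estimate for a single bag. The patch count (~8,000) and patch size (512×512) come from the question below; the 1024-dimensional feature size is a typical value for common extractors and is an assumption here, not a figure from the paper.

```python
# Rough GPU memory for one bag (one WSI) in the MIL setting.
N_PATCHES = 8_000
FEAT_DIM = 1024          # per-patch feature vector (assumed typical value)
H = W = 512              # patch resolution, from the question
BYTES_FP32 = 4           # float32

# What a MIL model actually receives: precomputed patch features.
feature_bag_bytes = N_PATCHES * FEAT_DIM * BYTES_FP32
print(f"feature bag: {feature_bag_bytes / 2**20:.2f} MiB")   # ~31 MiB

# What it would cost to feed raw RGB pixels of the same bag instead.
pixel_bag_bytes = N_PATCHES * 3 * H * W * BYTES_FP32
print(f"raw pixels:  {pixel_bag_bytes / 2**30:.2f} GiB")     # ~23 GiB
```

The three-orders-of-magnitude gap is why a batch of "one bag" fits comfortably in 24 GB: the 8,000 images are reduced to feature vectors offline, before MIL training starts.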
Thanks a lot.
Hi @DearCaat, thank you for your nice work. In the paper, the batch size for training the Multiple Instance Learning (MIL) model is set to 1 bag, which means a single batch consists of approximately 8,000 images of size 512×512. I'm wondering whether this fits on a 24 GB RTX 3090 GPU. Could you please share the GPU memory usage during training of the MIL model?