Hi,
I was wondering how large slides were handled during inference and training. Was there any limit on the bag size to prevent OOMs? If so, can you clarify how the predictions from multiple bags were aggregated?
Thanks in advance.
During both training and testing, we used a batch size of 1, i.e., one whole-slide bag per batch. In addition, half-precision (FP16) training via the PyTorch Lightning framework lets TransMIL process roughly 20 times as many WSI features per slide, so a slide does not need to be split into multiple bags and no cross-bag aggregation is required.
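To illustrate the scheme above, here is a minimal sketch in plain PyTorch. The `SimpleMILHead` below is a hypothetical attention-pooling stand-in for TransMIL (the real model uses Transformer-based correlated MIL), and the autocast context approximates what Lightning's `precision=16` setting does on GPU (bfloat16 is used on CPU here, since FP16 autocast is GPU-only):

```python
import torch
from torch import nn

class SimpleMILHead(nn.Module):
    """Hypothetical attention-based MIL head standing in for TransMIL."""
    def __init__(self, dim=512, n_classes=2):
        super().__init__()
        self.attn = nn.Linear(dim, 1)
        self.cls = nn.Linear(dim, n_classes)

    def forward(self, bag):
        # bag: (1, N, dim) -- one whole slide as a single bag, batch size 1
        w = torch.softmax(self.attn(bag), dim=1)  # attention over all patches
        pooled = (w * bag).sum(dim=1)             # (1, dim) slide embedding
        return self.cls(pooled)                   # slide-level logits

model = SimpleMILHead()
bag = torch.randn(1, 10_000, 512)  # all patch features of one WSI in one bag

# Mixed precision: roughly halves activation memory, so much larger bags fit.
device_type = "cuda" if torch.cuda.is_available() else "cpu"
amp_dtype = torch.float16 if device_type == "cuda" else torch.bfloat16
with torch.autocast(device_type=device_type, dtype=amp_dtype):
    logits = model(bag)

print(logits.shape)  # one prediction per slide -- no aggregation step needed
```

Because the entire slide fits in one bag, the forward pass directly yields a single slide-level prediction, which is why no multi-bag aggregation rule is described in the paper.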