Is your feature request related to a problem? Please describe.
All main pipelines need to support multi-GPU. In particular, the following aspects need to be investigated and possibly updated to accommodate it:
- **Distributed sampling:** how do we distribute the data over multiple GPUs? Each WSI split across multiple GPUs, or each GPU its own WSI? The former is probably best.
- **Callbacks & writing:** do we want more writer processes? This also depends on the sampling strategy above.
- **Metrics and syncing:** make sure we all-gather results from multiple GPUs.
- **Inference:** its pipeline differs slightly, so it may need additional investigation.
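The "each WSI to multiple GPUs" strategy above amounts to sharding a single WSI's tile indices across ranks, which is the same interleaving pattern `torch.utils.data.DistributedSampler` uses. A minimal framework-free sketch (function name and signature are hypothetical, not part of the existing pipeline):

```python
def shard_tile_indices(num_tiles: int, rank: int, world_size: int) -> list[int]:
    """Shard one WSI's tile indices across ranks.

    Each GPU (rank) processes an interleaved subset of the WSI's tiles;
    the union over all ranks covers every tile exactly once. This is a
    sketch of the 'each WSI to multiple GPUs' sampling strategy.
    """
    return list(range(rank, num_tiles, world_size))


# Example: 7 tiles over 3 GPUs
# rank 0 -> [0, 3, 6], rank 1 -> [1, 4], rank 2 -> [2, 5]
```

Note that unlike `DistributedSampler` (which pads so every rank gets an equal number of samples), this sketch yields unequal shard sizes, so the metrics step would still need an all-gather that tolerates ragged per-rank results.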