Closed: kaczmarj closed this issue 1 year ago
we might get speedups by using multiple gpus. this would require two changes. first, change the device to

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```

to use all available gpus. currently we specify `"cuda:0"`, which uses only the first gpu.
second, we need to wrap the model in `nn.DataParallel`:

```python
model = nn.DataParallel(model)
```
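putting the two changes together, a minimal sketch; the `nn.Sequential` model and the random batch are placeholders for illustration, not the project's actual model:

```python
import torch
import torch.nn as nn

# use all available gpus instead of pinning to "cuda:0"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# placeholder model; the real pipeline would load its own weights here
model = nn.Sequential(nn.Linear(512, 2))

if torch.cuda.device_count() > 1:
    # DataParallel scatters each batch across the gpus along dim 0
    # and gathers the outputs back onto the primary device
    model = nn.DataParallel(model)
model = model.to(device)

# inference looks the same as before; the scatter/gather is automatic
x = torch.randn(64, 512, device=device)
with torch.no_grad():
    out = model(x)  # shape: (64, 2)
```

one caveat: `nn.DataParallel` replicates the model onto every gpu on each forward pass, so the speedup mostly shows up with large batches; `DistributedDataParallel` is the usual alternative when that overhead matters.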
a good test of this would be the TIL model, since it has the most patches per slide (the patch sizes are small).
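to check whether the extra gpus actually help, a rough timing harness like the one below would do; `model`, `loader`, and `device` are stand-ins for the pipeline's real objects:

```python
import time

import torch

def time_inference(model, loader, device):
    """Seconds for one full inference pass over the loader."""
    model.eval()
    with torch.no_grad():
        if device.type == "cuda":
            torch.cuda.synchronize()  # don't let pending kernels skew the start time
        start = time.perf_counter()
        for batch in loader:
            model(batch.to(device))
        if device.type == "cuda":
            torch.cuda.synchronize()  # wait for all gpus before stopping the clock
        return time.perf_counter() - start
```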