Leengit closed this pull request 1 month ago.
Together with Pull Request https://github.com/DigitalSlideArchive/ALBench/pull/63, this closes Issue #17. The two pull requests work best together but can be merged independently, in either order.
I just force-pushed to rebase to the current main branch.
We currently specify a default batch size in the XML. Is there a better way to compute what we could use by inspecting how much GPU memory we have?
> Is there a better way to compute what we could use by inspecting how much GPU memory we have?
Yes, we should try to achieve that. For the API, I can envision a few possibilities:
1. Compute an optimal `batchSize` and use it, regardless of what the user supplies for a `batchSize`. (Ideally also removing the ability of the user to supply a `batchSize`.)
2. Allow a special value of `batchSize` (e.g., `-1`) to specify that an optimal batch size should be computed and used.
3. Provide a function (in `ALBench` or `superpixel-classification`) that computes an optimal `batchSize`. The user then supplies the returned value in existing calls that allow a `batchSize`.

As for the implementation, that may depend upon `torch` vs. `tensorflow`, but hopefully not on the model of GPU. Hopefully it's not too hard; see the sketch below.
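As a rough illustration of what option 3 might look like on the `torch` side, here is a minimal sketch. The helper name `compute_optimal_batch_size`, the `safety_factor`, and the fallback values are all hypothetical, not part of ALBench; it also only measures a forward pass, so training workloads would need a forward+backward measurement instead.

```python
import torch

def compute_optimal_batch_size(model, sample, safety_factor=0.8, max_batch_size=4096):
    """Hypothetical helper: estimate a batch size from free GPU memory.

    Measures the peak memory of a single-sample forward pass, then scales
    that to the memory currently free on the device.  Activation memory is
    not perfectly linear in batch size, hence the safety factor.
    """
    if not torch.cuda.is_available():
        return 32  # arbitrary CPU fallback; tune for your workload

    device = torch.device("cuda")
    model = model.to(device)
    torch.cuda.reset_peak_memory_stats(device)
    with torch.no_grad():
        model(sample.unsqueeze(0).to(device))  # forward pass with a batch of one
    per_sample_bytes = torch.cuda.max_memory_allocated(device)

    free_bytes, _total_bytes = torch.cuda.mem_get_info(device)
    estimate = int(free_bytes * safety_factor) // max(per_sample_bytes, 1)
    return max(1, min(estimate, max_batch_size))
```

Option 2 would then reduce to a thin check: if the user passes `batchSize == -1`, call a helper like this and substitute the result.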
I've added Issue https://github.com/DigitalSlideArchive/superpixel-classification/issues/22 to track computation of an optimal batch size.
Put `torch`-based `model`s and `tensor`s on GPU when a GPU is available.

Note that the four git commits are different aspects of this pull request and could be reviewed separately.
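For reference, the device-placement pattern described above usually looks like the sketch below; the `nn.Linear` model and the random batch are placeholders standing in for the actual ALBench model and data, not the code in this pull request.

```python
import torch
from torch import nn

# Pick the device once, then move the model and every tensor to it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(64, 8).to(device)          # placeholder model
batch = torch.randn(32, 64, device=device)   # placeholder input tensor
output = model(batch)                        # both operands now live on `device`
print(output.device)                         # cuda:0 when a GPU is available, else cpu
```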