DigitalSlideArchive / superpixel-classification

A test CLI for classifying superpixels with arbitrary labels

Use torch device #21

Closed Leengit closed 1 month ago

Leengit commented 2 months ago

Put torch-based models and tensors on GPU when a GPU is available.
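As a minimal sketch of the device-selection pattern this PR describes (the model and tensor here are hypothetical, not the project's actual code), torch lets you pick a device once and move both the model and each batch onto it:

```python
import torch

# Select the GPU when one is available; otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical model and batch for illustration.
model = torch.nn.Linear(8, 2).to(device)
batch = torch.randn(4, 8, device=device)

logits = model(batch)  # computed on the selected device
```

The same `device` object can then be reused everywhere tensors are created, so the code runs unchanged on CPU-only machines.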

Note that the four git commits are different aspects of this pull request and could be reviewed separately.

Leengit commented 2 months ago

Together with Pull Request https://github.com/DigitalSlideArchive/ALBench/pull/63, this closes Issue #17. The two pull requests work best together but can be merged independently and in either order.

Leengit commented 1 month ago

I just force-pushed to rebase to the current main branch.

manthey commented 1 month ago

We currently specify a default batch size in the xml. Is there a better way to compute what we could use by inspecting how much GPU memory we have?

Leengit commented 1 month ago

Is there a better way to compute what we could use by inspecting how much GPU memory we have?

Yes, we should try to achieve that. For the API, I can envision a few possibilities:

  1. Always compute an "optimal" batchSize and use it, regardless of what the user supplies for a batchSize. (Ideally also removing the ability of the user to supply a batchSize.)
  2. The user passes a sentinel batchSize (e.g., -1) to indicate that an optimal batch size should be computed and used.
  3. Add a new function to some package (e.g., ALBench or superpixel-classification) that computes an optimal batchSize. User then supplies the returned value in existing calls that allow a batchSize.

As for the implementation, that may depend on torch vs. tensorflow, but hopefully not on the GPU model, and it should not be too hard.
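For the torch side, one rough sketch of such a computation (the helper name, per-sample footprint estimate, and safety factor are all assumptions, not anything from this repository) could query free GPU memory and divide by an estimated per-item cost:

```python
import torch

def suggest_batch_size(bytes_per_item: int, default: int = 128, safety: float = 0.8) -> int:
    """Hypothetical helper: estimate a batch size from free GPU memory.

    bytes_per_item is a rough per-sample footprint (inputs, activations,
    gradients).  On CPU-only machines, where memory is rarely the limiting
    factor, it simply returns `default`.
    """
    if not torch.cuda.is_available():
        return default
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    # Keep a safety margin, and never suggest a batch size below 1.
    return max(1, int(free_bytes * safety) // bytes_per_item)
```

In practice `bytes_per_item` is the hard part: it depends on the model architecture and input size, so any such helper would likely need per-model calibration or a trial-and-grow strategy.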

Leengit commented 1 month ago

I've added Issue https://github.com/DigitalSlideArchive/superpixel-classification/issues/22 to track computation of an optimal batch size.