TissueImageAnalytics / tiatoolbox

Computational Pathology Toolbox developed by TIA Centre, University of Warwick.
https://warwick.ac.uk/tia

⚡️ Add `torch.compile` to `PatchPredictor` #776

Closed: Abdol closed this pull request 3 months ago.

Abdol commented 5 months ago

This mini-PR adds torch.compile functionality to PatchPredictor.
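For context, a minimal sketch of the pattern being added, using a stand-in torchvision backbone rather than the actual `PatchPredictor` internals:

```python
import torch
import torchvision.models as models

# Stand-in backbone; PatchPredictor would wrap its own model in the same way.
model = models.resnet18(weights=None)
model = torch.compile(model)  # returns an OptimizedModule wrapping the original

with torch.inference_mode():
    # Compilation itself is deferred until the first forward pass.
    out = model(torch.randn(4, 3, 224, 224))
```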

measty commented 5 months ago

At the moment the model is compiled before it is sent to the GPU (if GPU is being used). I think at least some of what torch.compile does is device-aware, so it may be better to compile after it is sent to device. Have you tried testing if the ordering makes a difference?
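For concreteness, the two orderings under discussion look roughly like this (the toy model and device handling are illustrative, not the `PatchPredictor` code):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def make_model() -> torch.nn.Module:
    # Toy stand-in for the model held by PatchPredictor.
    return torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())

# Ordering currently in the PR: compile first, then move to the device.
model_a = torch.compile(make_model()).to(device)

# Suggested alternative: move to the device first, then compile.
model_b = torch.compile(make_model().to(device))
```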

shaneahmed commented 5 months ago

> At the moment the model is compiled before it is sent to the GPU (if GPU is being used). I think at least some of what torch.compile does is device-aware, so it may be better to compile after it is sent to device. Have you tried testing if the ordering makes a difference?

Agree, this should be checked.

shaneahmed commented 5 months ago

Please enable tests for this branch.

codecov[bot] commented 4 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 99.89%. Comparing base (b2f57ee) to head (150678b).

Additional details and impacted files

```diff
@@             Coverage Diff              @@
##   enhance-torch-compile    #776   +/-  ##
============================================
  Coverage     99.89%    99.89%
============================================
  Files            69        69
  Lines          8578      8589    +11
  Branches       1641      1642     +1
============================================
+ Hits           8569      8580    +11
  Misses            1         1
  Partials          8         8
```


Abdol commented 4 months ago

> At the moment the model is compiled before it is sent to the GPU (if GPU is being used). I think at least some of what torch.compile does is device-aware, so it may be better to compile after it is sent to device. Have you tried testing if the ordering makes a difference?

@measty I believe the model is compiled only when the forward function is called. See link 1 and link 2.
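A quick way to check this with a toy model is to time the first and second calls; the first call should be much slower because that is when TorchDynamo traces and compiles (timings will vary by machine):

```python
import time
import torch

model = torch.compile(torch.nn.Linear(16, 4))  # no compilation happens here

x = torch.randn(8, 16)

t0 = time.perf_counter()
model(x)  # first call: the graph is traced and compiled now
print(f"first call:  {time.perf_counter() - t0:.3f} s")

t0 = time.perf_counter()
model(x)  # later calls reuse the compiled graph
print(f"second call: {time.perf_counter() - t0:.3f} s")
```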

Abdol commented 4 months ago

@shaneahmed torch.compile is not compatible with Python 3.12 (see here). This has triggered an error when running CI with Python 3.12:

https://github.com/pytorch/pytorch/issues/120233

```
if sys.version_info >= (3, 12):
    raise RuntimeError("Dynamo is not supported on Python 3.12+")
E   RuntimeError: Dynamo is not supported on Python 3.12+
```

Should we disable torch.compile for this version in the PR?
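One possible shape for such a guard, sketched with an illustrative helper name rather than the PR's actual implementation:

```python
import sys
import torch

def maybe_compile(model: torch.nn.Module) -> torch.nn.Module:
    """Compile the model where supported, otherwise return it unchanged.

    Dynamo (the tracer behind torch.compile) does not support Python 3.12+
    at the time of writing, so fall back to eager mode there.
    """
    if sys.version_info >= (3, 12):
        return model
    return torch.compile(model)
```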