Abdol closed this 3 months ago
At the moment the model is compiled before it is sent to the GPU (if a GPU is being used). I think at least some of what torch.compile does is device-aware, so it may be better to compile after the model is sent to the device. Have you tested whether the ordering makes a difference?
Agree, this should be checked.
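A minimal sketch of the ordering being suggested (the `prepare_model` helper and its signature are illustrative, not part of the PR): transfer the model to its target device first, then compile, so that any device-aware compilation sees the final placement.

```python
def prepare_model(model, device):
    """Hypothetical helper illustrating the suggested ordering:
    move the model to its target device *before* compiling, so that
    torch.compile observes the final device placement."""
    import torch  # deferred import so this sketch loads even without torch installed

    model = model.to(device)      # 1. device transfer first
    model = torch.compile(model)  # 2. then compile on the target device
    return model
```

Benchmarking both orderings on the same workload would settle whether it matters in practice.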
Please enable tests for this branch.
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 99.89%. Comparing base (b2f57ee) to head (150678b).
@measty I believe the model is compiled only when the forward function is called. See link 1 and link 2.
@shaneahmed torch.compile is not compatible with Python 3.12 (see https://github.com/pytorch/pytorch/issues/120233). This triggered an error when running CI with Python 3.12:

```
if sys.version_info >= (3, 12):
    raise RuntimeError("Dynamo is not supported on Python 3.12+")
E RuntimeError: Dynamo is not supported on Python 3.12+
```

Should we disable torch.compile for this Python version in the PR?
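One way to disable it could look like the following sketch (the `can_use_torch_compile` helper name is my own, not from the PR): gate the torch.compile call on the interpreter version and fall back to the uncompiled model on Python 3.12+.

```python
import sys


def can_use_torch_compile(version_info=None):
    """Return True if torch.compile (Dynamo) is expected to work.

    Dynamo raises ``RuntimeError: Dynamo is not supported on Python 3.12+``
    (pytorch/pytorch#120233), so compilation should be skipped on 3.12 and
    newer. The ``version_info`` parameter exists only to make the check
    testable; it defaults to the running interpreter's version.
    """
    vi = sys.version_info if version_info is None else version_info
    return tuple(vi[:2]) < (3, 12)
```

The caller would then compile only when the helper returns True, leaving the model unchanged otherwise.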
This mini-PR adds torch.compile functionality to PatchPredictor.