jatkinson1000 closed this 2 months ago
Hi @ElliottKasoar after the work @jwallwork23 did there were some conflicts with your PR in #78.
I have done my best to rebase your work, but please could you take a look and see if everything seems in order to you?
@TomMelt You originally reviewed and approved this PR, but a quick re-review would be appreciated. Since @jwallwork23 restructured the order of functions in the files and added additional arguments in the same places as @ElliottKasoar, some of the merge conflicts got a little hairy, so I may have missed the odd thing!
Before we merge we need to:
I have added a note to the FAQ about the eval and no_grad settings. A detailed example can perhaps wait until these are used as part of #111, since for now they are the sensible defaults for running inference.
I will also move some of @ElliottKasoar's points in his original comment to separate issues for future consideration.
Squashing and merging shortly.
This is an updated version of #78, rebased onto main after the GPU changes by @jwallwork23.
@ElliottKasoar's comment on the original PR:
Resolves #73
Adds flags in all(?) functions that operate on tensors (tensor creation, model loading, forward) to optionally disable autograd, which should improve performance for inference.
Also adds a similar flag to set evaluation mode for the loaded model.
Relevant PyTorch concepts:

- Evaluation mode
- NoGradMode (i.e. `with torch.no_grad():`)
- InferenceMode
- Model freezing

(For a more general explanation of autograd/evaluation mode, see the PyTorch autograd mechanics documentation.)
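As a rough illustration of what these flags correspond to on the PyTorch side (this is plain PyTorch, not FTorch's Fortran API; the model and shapes here are placeholders):

```python
import torch

# A toy model standing in for a loaded TorchScript module.
model = torch.nn.Linear(4, 2)

# Evaluation mode: switches layers like Dropout/BatchNorm to inference
# behaviour. It does NOT disable gradient tracking by itself.
model.eval()

x = torch.randn(1, 4)

# no_grad: disables autograd recording, saving memory and compute
# when gradients are not needed (i.e. inference).
with torch.no_grad():
    y = model(x)

# inference_mode: a stricter, faster variant of no_grad.
with torch.inference_mode():
    y = model(x)
```

Outputs produced inside these contexts have `requires_grad=False`, which is why disabling autograd is the sensible default when only the forward pass is needed.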
Note: I've also removed the old, commented-out `torch_from_blob` function.