debayan-gh opened 4 years ago
Hi @jackm321, @jfix71,
I've raised a PR for this. Please advise whether we can add this constructor or if there is a better way to handle the above use case.
If the whole graph is lowerable, what's the difference between this and using fusion?
We're interested in supporting a solution that uses PyTorchModelLoader directly and doesn't require going through torch_glow or involve the JIT interpreter. This would allow a standalone application (for example, an image classifier) to load TorchScript files and compile them, assuming, of course, that all operations in the model are supported (or fail if not).
Such support exists for the other model loaders, ONNXModelLoader and Caffe2ModelLoader; for example, the standalone ONNX flow looks roughly like the sketch below. Adding the same capability to PyTorchModelLoader would put it on par with the existing ones.
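For reference, a minimal sketch of that existing standalone ONNX flow (the model file name is illustrative, and error handling and input binding are elided):

```cpp
#include "glow/ExecutionEngine/ExecutionEngine.h"
#include "glow/Graph/Graph.h"
#include "glow/Importer/ONNXModelLoader.h"

using namespace glow;

int main() {
  ExecutionEngine EE; // defaults to the Interpreter backend
  Module &mod = EE.getModule();
  Function *F = mod.createFunction("main");

  // Load the ONNX model straight from disk; no Python involved.
  // Input names/types are taken from the model itself here.
  ONNXModelLoader loader("model.onnx", {}, {}, *F);

  // Compile for the selected backend; running the function would use
  // PlaceholderBindings and EE.run() (elided for brevity).
  EE.compile(CompilationMode::Infer);
  return 0;
}
```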
The proposed changes might be a solution for the above, where we deal only with parameters and inputs and do not involve the JIT interpreter. This does not affect the current PyTorchModelLoader path for torch_glow.
@jackm321 can you take a look?
@jackm321
Is there a way to load TorchScript traced files from disk and compile them using the PyTorchModelLoader without using the Python torch_glow module? There used to be a PyTorchFileLoader as part of torch_glow, but it was removed in #4866.
Can we reintroduce this loadJITGraphForOnnxTraining(), preferably under a different name? I can raise a PR for this.
e.g. invocation (snippet based on the PyTorchFileLoader code):
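A minimal sketch of what such an invocation could look like, assuming a re-introduced entry point. The name loadJITGraph, its signature, and the input shape are hypothetical, adapted from the removed PyTorchFileLoader; the torch::jit::load and get_method calls are standard libtorch API.

```cpp
#include "glow/Graph/Graph.h"
#include "glow/Support/Error.h"
#include <torch/script.h>

// Sketch: load a traced TorchScript file and map it onto a Glow function
// without going through the JIT interpreter or the fusion pass.
glow::Error loadTorchScriptModel(const std::string &path, glow::Function &F) {
  // Standard libtorch API: deserialize the traced module from disk.
  auto module = torch::jit::load(path);
  std::shared_ptr<torch::jit::Graph> graph =
      module.get_method("forward").graph();

  // Example inputs so the loader can infer shapes and dtypes (illustrative).
  std::vector<torch::jit::IValue> inputs = {torch::ones({1, 3, 224, 224})};

  // Hypothetical entry point playing the role of the removed
  // loadJITGraphForOnnxTraining(); exact name and signature to be decided.
  RETURN_IF_ERR(PyTorchModelLoader::loadJITGraph(F, *graph, inputs));
  return glow::Error::success();
}
```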
This loader would try to load a fully supported graph and bail out if any op is not supported. That would let standalone C++ applications compile and run fully supported TorchScript models on a specific Glow backend, without the complexity of creating Glow fusion node(s) and while avoiding much of the torch_glow JIT execution path.
Thanks