pytorch / glow

Compiler for Neural Network hardware accelerators
Apache License 2.0

Loading saved torchscript model with PytorchModelLoader #5035

Open debayan-gh opened 4 years ago

debayan-gh commented 4 years ago

@jackm321

Is there a way to load TorchScript traced files from disk and compile them using the PyTorchModelLoader, without using the Python torch_glow module? There used to be a PytorchFileLoader as part of torch_glow, but it was removed in #4866.

Can we reintroduce this loadJITGraphForOnnxTraining(), preferably under a different name? I can raise a PR for this.

e.g. invocation (snippet based on the PytorchFileLoader code):

  // Lower the traced module's forward method to a standalone graph
  // plus its parameter tensors.
  auto method = module->get_method("forward");
  auto graphAndTensors =
      torch::jit::LowerGraph(*method.graph(), module->_ivalue());

  auto graph = graphAndTensors.first;
  // optimizations...

  // Load the lowered graph and its parameters directly into a Glow Function.
  PyTorchModelLoader::loadJITGraphWithParameters(
      F, *graph, inputs, graphAndTensors.second,
      inputPlaceholders, outputPlaceholders);

This loader will try to load a fully supported graph and will bail out if any of the ops is unsupported. This would let standalone C++ applications compile and run fully supported TorchScript models on a specific Glow backend, without the complexity of creating Glow fusion node(s) and while avoiding much of the torch_glow JIT execution path.
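To make the intent concrete, here is a sketch of how a standalone application might use the proposed entry point end to end. This assumes the loadJITGraphWithParameters signature from the snippet above (which is not currently exposed in Glow), and the file name, input vector, and backend choice are illustrative placeholders:

```cpp
#include <torch/script.h>

#include "glow/ExecutionEngine/ExecutionEngine.h"
#include "glow/Graph/Graph.h"

int main() {
  // Deserialize a TorchScript file produced by torch.jit.trace / torch.jit.save.
  torch::jit::script::Module module = torch::jit::load("traced_model.pt");

  glow::ExecutionEngine EE("Interpreter"); // or a hardware backend name
  glow::Function *F = EE.getModule().createFunction("forward");

  auto method = module.get_method("forward");
  auto graphAndTensors =
      torch::jit::LowerGraph(*method.graph(), module._ivalue());

  std::vector<torch::jit::IValue> inputs = /* example inputs */ {};
  std::vector<glow::Placeholder *> inputPlaceholders, outputPlaceholders;

  // Proposed API (hypothetical; name and signature taken from the snippet).
  // Fails if any op in the graph is unsupported by the chosen backend.
  PyTorchModelLoader::loadJITGraphWithParameters(
      *F, *graphAndTensors.first, inputs, graphAndTensors.second,
      inputPlaceholders, outputPlaceholders);

  EE.compile(glow::CompilationMode::Infer);
  // ... bind input tensors to inputPlaceholders and call EE.run(...)
}
```

No torch_glow Python module, fusion pass, or JIT interpreter is involved; the application either gets a fully compiled Glow function or an error.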

Thanks

debayan-gh commented 4 years ago

Hi @jackm321 , @jfix71 ,

Raised a PR for this. Please advise whether we can add this constructor, or if there is a better way to handle the above use case.

yinghai commented 4 years ago

If the whole graph is lowerable, what's the difference between this and using fusion?

debayan-gh commented 4 years ago

We're interested in supporting a solution that uses PyTorchModelLoader directly and doesn't require going through torch_glow or involve the JIT interpreter. This would allow a standalone application (for example, an image classifier) to load TorchScript files and compile them, assuming, of course, that all operations in the model are supported (and failing if not).

Such support exists for the other model loaders, ONNXModelLoader and Caffe2ModelLoader. Making it possible for PyTorchModelLoader would put it on par with the existing ones.
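For comparison, this is roughly what the existing standalone path looks like with ONNXModelLoader (a sketch based on Glow's example binaries; the file name, input name, and shape here are placeholders):

```cpp
#include "glow/ExecutionEngine/ExecutionEngine.h"
#include "glow/Importer/ONNXModelLoader.h"

using namespace glow;

int main() {
  ExecutionEngine EE("Interpreter");
  Module &mod = EE.getModule();
  Function *F = mod.createFunction("main");

  // Load an ONNX model from disk directly into a Glow Function,
  // with no Python or framework runtime in the loop.
  Type inputType(ElemKind::FloatTy, {1, 3, 224, 224});
  ONNXModelLoader loader("model.onnx", {"input"}, {&inputType}, *F);

  EE.compile(CompilationMode::Infer);
  // ... bind inputs and run
}
```

The proposal would give TorchScript files an equivalent direct-load path.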

The proposed changes might be a solution for the above: we deal only with parameters and inputs and do not involve the JIT interpreter. This does not affect the current PyTorchModelLoader path used by torch_glow.

yinghai commented 4 years ago

@jackm321 can you take a look?