paulbauriegel / tensorflow-tools

Python scripts for working with TensorFlow
MIT License
48 stars 7 forks

Converting from pytorch -> TF NHWC #3

Closed andreydung closed 4 years ago

andreydung commented 4 years ago

Thanks for the tool. I am wondering if you could provide an example of converting a PyTorch model (NCHW) to ONNX, then running it through your tool to get a TF NHWC model. Specifically, which version of onnx-tf did you use?

paulbauriegel commented 4 years ago

There is already far more detailed documentation available than I could provide here:

  1. Pytorch2ONNX: https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html
  2. Latest ONNX-tf form Github: https://github.com/onnx/onnx-tensorflow/blob/master/doc/API.md
  3. Then try this tool
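Steps 1 and 2 above can be sketched in code. This is a hedged illustration, not a recipe from this thread: it assumes torch, onnx, and onnx-tf are installed, and the function name and file paths are placeholders.

```python
# Sketch of the PyTorch -> ONNX -> TensorFlow path from the list above.
# Assumes torch, onnx, and onnx-tf are installed; names are placeholders.

def pytorch_to_tf_graph(model, dummy_input, onnx_path="model.onnx",
                        pb_path="model.pb"):
    import torch
    import onnx
    from onnx_tf.backend import prepare

    # Step 1: export the (NCHW) PyTorch model to ONNX.
    model.eval()
    torch.onnx.export(model, dummy_input, onnx_path)

    # Step 2: load the ONNX model and emit a TensorFlow graph.
    tf_rep = prepare(onnx.load(onnx_path))
    tf_rep.export_graph(pb_path)  # the graph is still NCHW at this point
    return pb_path
```

Step 3 (this repo's script, and any TFLite conversion) would then operate on the exported graph file.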

If you provide me with a specific error message I might be able to help you better.

andreydung commented 4 years ago

Thanks a lot for the reply! There are two concrete problems I ran into:

1) When running onnx-tf to convert from onnx model to a tensorflow graph (onnx_tf.backend.prepare), the generated graph is huge. For example, a convolutional block is converted to the following:

(screenshot: the converted graph)

Which version of onnx-tf did you use? Did you meet the same problem?

2) It seems that onnx_tf is trying to perform NHWC convolution on NCHW data. Currently the generated tflite accepts NCHW input. Is there any way to convert the model such that the input for PyTorch is NCHW, but the input for tflite is NHWC?

These problems do not relate to your library, but any advice is appreciated.

paulbauriegel commented 4 years ago

For 1.: I would actually need to reproduce that with one of my models; I cannot remember having that sort of problem. Two things I would do in your case:

  1. Try some other projects like https://github.com/nerox8664/pytorch2keras and then Keras to TFlite
  2. Ask that question in the onnx_tf project

For 2.: When I did that 2 months ago I used the then-current master of onnx_tf, and the results after converting to TFLite seemed valid. What you want to do, from my perspective, is to train in NCHW when using GPUs, because it's faster, and then convert the graph to NHWC if you want inference on CPU with TFLite. I have not found any other way than rebuilding the graph. Two things to keep in mind here:

Does that help you?
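The NCHW-to-NHWC layout change discussed above can be illustrated in plain Python. This is only a minimal sketch of the tensor transpose itself (the function name and shapes are mine); an actual conversion has to rewrite the whole graph, which is what rebuilding it amounts to.

```python
# Minimal illustration of the NCHW -> NHWC layout change, framework-free.
# A real conversion rewrites every layer; this only shows one tensor.

def nchw_to_nhwc(t):
    """Transpose a nested-list tensor from [N][C][H][W] to [N][H][W][C]."""
    n, c = len(t), len(t[0])
    h, w = len(t[0][0]), len(t[0][0][0])
    return [[[[t[i][k][j][l] for k in range(c)]  # channels move innermost
              for l in range(w)]
             for j in range(h)]
            for i in range(n)]

# A 1x2x2x3 tensor (N=1, C=2 channels, H=2, W=3)
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
y = nchw_to_nhwc(x)
assert y[0][0][0] == [1, 7]  # first pixel now holds both channel values
```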

Lvhhhh commented 4 years ago

> Thanks a lot for the reply! There are two concrete problems that I met:
>
> 1. When running onnx-tf to convert from onnx model to a tensorflow graph (onnx_tf.backend.prepare), the generated graph is huge. For example, a convolutional block is converted to the following:
>
> (screenshot: the converted graph)
>
> Which version of onnx-tf did you use? Did you meet the same problem?
>
> 2. It seems that onnx_tf is trying to perform NHWC convolution on NCHW data. Currently the generated tflite accepts NCHW input. Is there any way to convert the model such that input for pytorch is NCHW, but input for tflite is NHWC?
>
> These problems do not relate to your library, but any advice is appreciated.

Who told you that "currently the generated tflite accepts NCHW input"? Can you show me how to do that?

paulbauriegel commented 4 years ago

@Lvhhhh I'm not sure I understand your question correctly. But as far as I know, TFLite does not accept NCHW so far; that's why I wrote this script. You can, however, train the model in something other than PyTorch, or use a different inference framework such as OpenCV/OpenVINO.
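The OpenCV route mentioned above can be sketched as follows. This is a hedged example (it assumes opencv-python is installed, and the function name and paths are placeholders): OpenCV's DNN module runs ONNX models directly and itself uses NCHW blobs, so no layout conversion is needed.

```python
# Sketch of the OpenCV alternative: run the exported ONNX model directly
# with OpenCV's DNN module instead of converting to TFLite.
# Assumes opencv-python is installed; the file path is a placeholder.

def run_with_opencv(onnx_path, input_nchw):
    import cv2
    net = cv2.dnn.readNetFromONNX(onnx_path)
    net.setInput(input_nchw)  # OpenCV DNN blobs are NCHW, matching PyTorch
    return net.forward()
```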