onnx / onnx-tensorflow

Tensorflow Backend for ONNX

onnx (NCHW) >> tf.pb (NCHW) #912

dingguodong-826 opened this issue 3 years ago

dingguodong-826 commented 3 years ago

Thank you very much for your team's work. In the PyTorch >> ONNX >> tf.pb pipeline, the ONNX model is NCHW, but after converting ONNX to tf.pb, every convolutional layer is wrapped in transpose operations. The reason is that the converter needs to change NCHW into TensorFlow's default NHWC format. These transpose operations later caused problems when quantizing the network. Is there a way to keep the NCHW format when converting ONNX >> tf.pb?
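For reference, one quick way to confirm how many Transpose ops the converter inserted is to scan the exported graph. A minimal sketch, assuming the converted file is named model.pb (the filename is a placeholder):

```python
import tensorflow as tf

# Load the frozen GraphDef produced by onnx-tf (works on TF 1.x and 2.x).
graph_def = tf.compat.v1.GraphDef()
with open("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Count the layout-conversion Transpose nodes in the graph.
transposes = [n.name for n in graph_def.node if n.op == "Transpose"]
print(f"{len(transposes)} Transpose nodes found")
```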

chinhuang007 commented 3 years ago

Have you tried the device=CUDA option, as described in https://github.com/onnx/onnx-tensorflow/blob/master/doc/CLI.md? It will keep NCHW as much as possible, except where the TF API doesn't support NCHW.
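A minimal sketch of the programmatic equivalent of that CLI option, using the onnx_tf backend API (file names are placeholders):

```python
import onnx
from onnx_tf.backend import prepare

# Load the ONNX model exported from PyTorch (path is a placeholder).
model = onnx.load("model.onnx")

# device="CUDA" asks onnx-tf to keep NCHW wherever the TF API allows it.
tf_rep = prepare(model, device="CUDA")
tf_rep.export_graph("model.pb")
```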

dingguodong-826 commented 3 years ago

> Have you tried the device=CUDA option, as described in https://github.com/onnx/onnx-tensorflow/blob/master/doc/CLI.md? It will keep NCHW as much as possible, except where the TF API doesn't support NCHW.

Thank you for your reply. I have tried the method you mentioned. The execution command was: [screenshot of the command]. But since the TensorFlow version I need to use is 1.*, I cannot use versions above 2.0, and the following message appears when I run the conversion command: [screenshot of the error].

After the conversion, the tf.py_function nodes in the pb file prevented my model quantization from completing. While looking for a solution, I switched to a program to convert the model: [screenshot of the conversion script]. The conversion succeeded with that program, but it seems the conversion ran on the CPU, because the following message is displayed at the end of execution: [screenshot of the log].

When configuring the environment, I installed tensorflow-gpu 1.15, so I don't know why. The onnx model I want to convert is here: https://pan.baidu.com/s/117a5HAOu6Vk9zFL8H_eMOw (extraction code: 1234)
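As for the apparent CPU fallback, a quick sanity check is whether TF 1.15 actually sees the GPU at all. A minimal sketch, assuming tensorflow-gpu 1.15 is installed:

```python
import tensorflow as tf  # tensorflow-gpu 1.15 assumed

# Both calls exist in TF 1.x; an empty device name means TF is running on CPU.
print(tf.test.is_gpu_available())
print(tf.test.gpu_device_name())  # e.g. "/device:GPU:0", or "" if no GPU
```

If is_gpu_available() returns False, the usual cause is a CUDA/cuDNN version mismatch with the tensorflow-gpu 1.15 build rather than anything in onnx-tf.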