onnx / tensorflow-onnx

Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX
Apache License 2.0

Inputs/outputs for TensorFlow-Yolov3 #1120

Closed MuhammadAsadJaved closed 3 years ago

MuhammadAsadJaved commented 3 years ago

Hi, I have a TensorFlow-Yolov3 model and both .ckpt and .pb weights. I am trying to convert it to .onnx but failed; I am not sure about the model's inputs and outputs.

1. How can I check the inputs and outputs? The main problem is the inputs/outputs.
2. If it is difficult to determine the inputs/outputs, how can I convert the .ckpt to a saved_model, since that format does not require specifying them?
3. I have tried the inputs and outputs that I used while converting .ckpt to .pb for running the demo in my project, but they do not work.

This is the code I used to convert the .ckpt to .pb for the demo on my system; the converted .pb weights work without any problem.

[screenshot attached: ckpt-to-pb conversion code]

Now I have tried several input/output combinations like these:

python -m tf2onnx.convert --checkpoint ./modelIn/Pedestrian_yolov3_loss\=3.5331-nan.ckpt-29.meta  --output ./output/saved.onnx   --inputs  input_data:0,lwir_input_data:0  --outputs pred_sbbox:0, pred_mbbox:0, pred_lbbox:0
python -m tf2onnx.convert --checkpoint ./modelIn/  --output ./output/saved.onnx   --inputs  input/input_data:0, input/lwir_input_data:0  --outputs pred_sbbox/concat_2:0, pre_mbbox/concat_2:0, pred_lbbox/concat_2:0
MuhammadAsadJaved commented 3 years ago

Update: The issue is almost the same as #773. I was able to convert my model using:

python -m tf2onnx.convert --input modelInPb/saved_model.pb --inputs input/input_data:0[1,416,416,3], input/lwir_input_data:0[1,416,416,3]  --outputs pred_sbbox/concat_2:0,pred_sbbox/concat_2:0,pred_lbbox/concat_2:0 --output modelOut/model-verbos.onnx  --verbose --fold_const --opset 11

Now I have these questions:
1. The original frozenGraph.pb model is 492.9 MB and the converted .onnx model is only slightly smaller at 482.5 MB. Is that normal, or should it shrink more?
2. How can I use this converted onnx model for inference, or how can I convert it to a trt engine?

guschmue commented 3 years ago

For 1 - yes, in general the onnx graph will be a little smaller. It could be a lot smaller if we find identical constants that can be de-duped. For 2 - not sure I understand the question correctly, but there is a tutorial directory that shows how to run models end to end; ssd-mobilenet might be a good example that is similar to yolo. We don't have an example for trt and should maybe create one.
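In the meantime, the usual entry point for trt is TensorRT's bundled trtexec tool; as a rough sketch (this is TensorRT's tool, not ours, so check its docs, and the paths are just this thread's examples):

trtexec --onnx=modelOut/model.onnx --saveEngine=model.trt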

In your command line above, for --inputs and --outputs: make sure there are no spaces in the comma-separated lists. It's hard to find the inputs and outputs of a model from a frozen graph or checkpoint - that is just how tensorflow does it. Whenever possible use the saved-model format, since it has the inputs and outputs defined.
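If all you have is the frozen graph, one way to hunt for candidate input/output names is to load the GraphDef and list the placeholders and the nodes nothing else consumes. A minimal sketch (assuming TF 2.x with the compat.v1 API; the .pb path is just this thread's example):

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("modelInPb/saved_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Collect every node that feeds another node; "^name" entries are control deps.
consumed = set()
for node in graph_def.node:
    for inp in node.input:
        consumed.add(inp.lstrip("^").split(":")[0])

for node in graph_def.node:
    if node.op == "Placeholder":
        print("input candidate: %s:0" % node.name)
    elif node.name not in consumed and node.op != "Const":
        print("output candidate: %s:0" % node.name)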

TomWildenhain-Microsoft commented 3 years ago

Let us know if this solves your issue or if you have further questions.

MuhammadAsadJaved commented 3 years ago

Hi, there were holidays from 1 to 8 October. I will be back and will continue working on it from 9 October. I will update you soon. Thanks.


MuhammadAsadJaved commented 3 years ago

@TomWildenhain-Microsoft
I am able to convert .pb to .onnx on an Nvidia Titan V GPU, but on the GTX 1080Ti and the Xavier NX the process is killed. The killed message on the Xavier NX is attached. I am using the following command:

python3 -m tf2onnx.convert --input modelInPb/saved_model.pb --inputs input/input_data:0[1,416,416,3],input/lwir_input_data:0[1,416,416,3] --outputs pred_sbbox/concat_2:0,pred_sbbox/concat_2:0,pred_lbbox/concat_2:0 --output modelOut/model.onnx --opset 11

1. Are there any specific memory requirements for the conversion process?
2. If I convert my .pb model to onnx and then onnx to .trt on the Titan V, can I use the converted model on the Xavier NX, or does it need to be converted on the same platform (i.e. the Xavier NX)?

My ultimate goal is to use this converted model in real time on the Xavier NX.

[screenshot attached: Screenshot from 2020-10-09 15-28-44]

TomWildenhain-Microsoft commented 3 years ago

The onnx format is device-independent, so it does not matter which device you convert on. The converter itself does not use the GPU. The converter loads the entire model into memory during conversion, so yes, the memory requirements can be significant (though if your page file is sufficiently large you should be fine).

MuhammadAsadJaved commented 3 years ago

@TomWildenhain-Microsoft OK, got it. I have converted .pb to .onnx on another system. Now I am trying to convert the .onnx to a .trt engine, but I am getting the errors attached below.

Is there any other way to convert .onnx to .trt? How can I make sure that my converted .onnx does not have any problems? How can I verify it?

By the way, I have attached my .pb and .onnx models here:

https://drive.google.com/drive/folders/1uoCqNCMwNvrgW6TQ3Ox-3w_GM7Q8div5?usp=sharing

[screenshot attached: onnx-to-trt conversion errors]

TomWildenhain-Microsoft commented 3 years ago

I am using the following command:

python3 -m tf2onnx.convert --input modelInPb/saved_model.pb --inputs input/input_data:0[1,416,416,3],input/lwir_input_data:0[1,416,416,3] --outputs pred_sbbox/concat_2:0,pred_sbbox/concat_2:0,pred_lbbox/concat_2:0 --output modelOut/model.onnx --opset 11

For outputs you included pred_sbbox/concat_2:0 twice. Did you mean pred_mbbox?

TomWildenhain-Microsoft commented 3 years ago

I can confirm that the models on Google Drive converted successfully and produce consistent results: I pushed some test data through them and the outputs match. The conversion from onnx to trt is part of a different tool that we don't maintain, and I don't know of any other way to do it.
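If you want to sanity-check a converted model yourself, a minimal sketch using the onnx package (pip install onnx; the model path is just this thread's example):

import onnx

model = onnx.load("modelOut/model.onnx")
onnx.checker.check_model(model)  # raises if the graph is structurally invalid
# Note: depending on the exporter, graph.input may also list initializers.
print("inputs:", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])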

MuhammadAsadJaved commented 3 years ago

I am using the following command:

python3 -m tf2onnx.convert --input modelInPb/saved_model.pb --inputs input/input_data:0[1,416,416,3],input/lwir_input_data:0[1,416,416,3] --outputs pred_sbbox/concat_2:0,pred_sbbox/concat_2:0,pred_lbbox/concat_2:0 --output modelOut/model.onnx --opset 11

For outputs you included pred_sbbox/concat_2:0 twice. Did you mean pred_mbbox?

Yes, that was a mistake; it should be pred_mbbox.
Since you say the model converted successfully, how can I use this .onnx for inference? Is there any example for yolov3.onnx?

TomWildenhain-Microsoft commented 3 years ago

You can run the model using onnxruntime (https://github.com/Microsoft/onnxruntime):

pip install onnxruntime

import numpy as np
import onnxruntime as rt

# Map each input tensor name to a numpy array of the right shape/dtype;
# the "..." and some_shape are placeholders to fill in for your model.
inputs = { "inp_1:0": np.zeros(some_shape), ... }
output_names = ["Identity:0", ...]
m = rt.InferenceSession("path/to/model.onnx")
results = m.run(output_names, inputs)
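Adapted to the model in this thread, a sketch (assuming the input/output names from your conversion command and float32 inputs; real inputs would be preprocessed image tensors, not zeros):

import numpy as np
import onnxruntime as rt

# Dummy inputs matching the [1,416,416,3] shapes from the convert command.
inputs = {
    "input/input_data:0": np.zeros((1, 416, 416, 3), dtype=np.float32),
    "input/lwir_input_data:0": np.zeros((1, 416, 416, 3), dtype=np.float32),
}
output_names = ["pred_sbbox/concat_2:0", "pred_mbbox/concat_2:0", "pred_lbbox/concat_2:0"]
m = rt.InferenceSession("modelOut/model.onnx")
pred_sbbox, pred_mbbox, pred_lbbox = m.run(output_names, inputs)
print(pred_sbbox.shape, pred_mbbox.shape, pred_lbbox.shape)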

Let me know if this resolves your issue.

MuhammadAsadJaved commented 3 years ago

You can run the model using onnxruntime (https://github.com/Microsoft/onnxruntime):

pip install onnxruntime

import numpy as np
import onnxruntime as rt

# Map each input tensor name to a numpy array of the right shape/dtype;
# the "..." and some_shape are placeholders to fill in for your model.
inputs = { "inp_1:0": np.zeros(some_shape), ... }
output_names = ["Identity:0", ...]
m = rt.InferenceSession("path/to/model.onnx")
results = m.run(output_names, inputs)

Let me know if this resolves your issue.

Hi, sorry, I do not understand this example. This is a Yolov3.onnx model; is there any end-to-end example to verify the converted .onnx model, or can you give a clearer example, please? Here I have attached the original .pb weights and the converted .onnx weights:

https://drive.google.com/drive/folders/1uoCqNCMwNvrgW6TQ3Ox-3w_GM7Q8div5?usp=sharing