jkjung-avt / tensorrt_demos

TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet
https://jkjung-avt.github.io/
MIT License

Convert SSD model to ONNX instead of UFF #566

Closed: wadhwasahil closed this issue 2 years ago

wadhwasahil commented 2 years ago

I am using your code to convert my trained SSD MobileNet V2 model to the TensorRT format. You use the UFF format here: https://github.com/jkjung-avt/tensorrt_demos/blob/a061e44a82e1ca097f57e5a32f20daf5bebe7ade/ssd/build_engine.py#L282

However, I want to use ONNX instead of UFF. Once the graph is generated, how can I convert it to ONNX instead of UFF?

Thanks

jkjung-avt commented 2 years ago

Please refer to the official sample by NVIDIA: https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api
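For reference, a minimal sketch of a generic frozen-graph-to-ONNX conversion using the tf2onnx Python API (this is not the method from the NVIDIA sample above, which also adapts the post-processing so the result parses in TensorRT). The file name and the tensor names (`image_tensor:0`, `detection_boxes:0`, etc.) are the usual TF Object Detection API ones and are assumptions; adjust them for your model:

```python
# A minimal sketch, assuming tf2onnx >= 1.9 and a *frozen* graph
# (frozen_inference_graph.pb). Tensor names below are the standard
# TF Object Detection API ones; adjust them for your model.
import tensorflow as tf
import tf2onnx

with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["image_tensor:0"],
    output_names=["detection_boxes:0", "detection_scores:0",
                  "detection_classes:0", "num_detections:0"],
    opset=11,
    output_path="ssd_mobilenet_v2.onnx",   # hypothetical output file name
)
```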

wadhwasahil commented 2 years ago

The thing is, I have used `tf.train.write_graph(dynamic_graph.as_graph_def(), ".", "test.pb", as_text=False)` to get a .pb file. However, I am unable to load that .pb file. Is this the correct way?
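For reference, a minimal sketch of loading such a frozen .pb for inspection, using the TF1-compatible API (works under TF 2.x via `tf.compat.v1`); the file name `test.pb` matches the `write_graph` call above:

```python
# A minimal sketch: load a frozen GraphDef (.pb) and list a few node names
# as a sanity check that the file is a readable, frozen graph.
import tensorflow as tf

def load_frozen_graph(pb_path):
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.compat.v1.import_graph_def(graph_def, name="")
    return graph

graph = load_frozen_graph("test.pb")
print([op.name for op in graph.get_operations()][:10])
```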

jkjung-avt commented 2 years ago

I recommend using export_inference_graph.py to export your custom-trained model to a .pb file. The resulting file will be named "frozen_inference_graph.pb".
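For reference, a typical invocation of export_inference_graph.py from the TF1 Object Detection API, sketched here (the config path, checkpoint prefix, and output directory are hypothetical placeholders to adjust for your training run):

```python
# A minimal sketch, assuming the TF1 Object Detection API is on your PYTHONPATH
# and that you have a pipeline config plus a training checkpoint.
# All paths below are hypothetical placeholders.
import subprocess

subprocess.run(
    [
        "python", "object_detection/export_inference_graph.py",
        "--input_type", "image_tensor",
        "--pipeline_config_path", "pipeline.config",        # your training config
        "--trained_checkpoint_prefix", "model.ckpt-50000",   # your checkpoint prefix
        "--output_directory", "exported_model",              # will contain frozen_inference_graph.pb
    ],
    check=True,
)
```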

wadhwasahil commented 2 years ago

Actually, I am already using the frozen graph before running your code here: https://github.com/jkjung-avt/tensorrt_demos/blob/a061e44a82e1ca097f57e5a32f20daf5bebe7ade/ssd/build_engine.py

I am having some issues with the UFF method because I want explicit batch, and UFF doesn't support that. Hence I am trying to see if I can generate an intermediate ONNX representation instead.
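For reference, once an ONNX file is available, a minimal sketch of building an explicit-batch TensorRT engine from it (assuming TensorRT 7/8-era Python bindings; the ONNX and engine file names are placeholders):

```python
# A minimal sketch: parse an ONNX model into an explicit-batch network and
# build an engine. Some calls (max_workspace_size, build_engine) are
# deprecated in newer TensorRT releases but present in the 7/8-era API.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(EXPLICIT_BATCH) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser, \
     builder.create_builder_config() as config:
    config.max_workspace_size = 1 << 30  # 1 GB
    with open("ssd_mobilenet_v2.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse the ONNX model")
    engine = builder.build_engine(network, config)
    with open("ssd_mobilenet_v2.engine", "wb") as f:
        f.write(engine.serialize())
```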