jkjung-avt / tensorrt_demos

TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet
https://jkjung-avt.github.io/
MIT License

Error in converting tensorflow inference model to tensorRT model for ssd-mobilenet-v2 #126

Closed PiyalGeorge closed 4 years ago

PiyalGeorge commented 4 years ago

Hi @jkjung-avt,

I have some doubts about running ssd-mobilenet-v2 on a Jetson Nano board. Please help me clarify the following.

I'm trying to implement TensorFlow-based object detection of two small objects, a bottle and a cup, on a Jetson Nano with real-time camera input. As I understand it, ssd-mobilenet-v2 gives the best balance between accuracy and real-time FPS. Is ssd-mobilenet-v2 the best choice for this, or are there other options for small objects that offer real-time FPS and good accuracy?

I'm using TensorRT 6 and TensorFlow 1.15 on the Jetson Nano. The model was also trained with TensorFlow 1.15 on my development machine, and the architecture is ssd-mobilenet-v2.

Using your repo (thanks for this repo), I ran the pre-built ssd-mobilenet-v2 TensorRT model and got output at almost real-time FPS. I want to do the same with my custom-trained model, so I tried to convert my custom TensorFlow inference graph to a TensorRT model with this command:

python3 build_engine.py ssd_mobilenet_v2_coco

but I got this error:

AttributeError: 'NoneType' object has no attribute 'serialize'
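For anyone reading later: this error means the engine build returned None, and the script then called .serialize() on it. A minimal guarded sketch of the TensorRT 6-era UFF build flow (names such as 'Input' and 'MarkOutput_0' follow the repo's convention and may differ in your script):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

def build_engine(uff_path, input_name='Input', input_dims=(3, 300, 300)):
    """Sketch of a guarded UFF -> TensorRT build (TensorRT 6 Python API)."""
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.UffParser() as parser:
        builder.max_workspace_size = 1 << 28
        parser.register_input(input_name, input_dims)
        parser.register_output('MarkOutput_0')
        parser.parse(uff_path, network)
        engine = builder.build_cuda_engine(network)
        # build_cuda_engine() returns None when the build fails (e.g. the
        # parser rejected the graph); this unchecked None is what triggers
        # the 'NoneType' AttributeError above.
        if engine is None:
            raise RuntimeError('engine build failed; check parser output')
        return engine.serialize()
```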

I saw your reply on this same issue: https://github.com/jkjung-avt/tensorrt_demos/issues/80

But that didn't help me, and I'm confused about which TensorFlow version to use on the development machine and on the board. I noticed your code was changed recently, and I also looked at the old code in the 'ssd_mobilenet_v3' branch, but I still couldn't figure it out. Can you describe how you did this conversion and where you made modifications?

Also, it would be great if you could tell me how you avoided this error. Which version of TensorFlow did you use to run the training and create the inference graph? And which version of TensorFlow did you use to convert the inference graph to a TensorRT model (I mean the TensorFlow version on your board)?

Kindly help me solve this.

jkjung-avt commented 4 years ago

This article summarizes most of the problems you might encounter when optimizing a custom-trained TensorFlow SSD model, through UFF, into a TensorRT engine. Please have a look and see if it helps resolve your problem.

https://www.minds.ai/post/deploying-ssd-mobilenet-v2-on-the-nvidia-jetson-and-nano-platforms
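In most cases the fix boils down to registering the custom model in ssd/build_engine.py with the right class count and NMS input order. A hedged sketch of such an entry (the keys follow the MODEL_SPECS pattern used in the repo at the time and may differ between revisions):

```python
# Hypothetical addition to MODEL_SPECS in ssd/build_engine.py for a
# custom 2-class (bottle, cup) ssd_mobilenet_v2 model.
MODEL_SPECS['ssd_mobilenet_v2_custom'] = {
    'input_pb': 'ssd_mobilenet_v2_custom.pb',     # exported frozen graph
    'tmp_uff': 'tmp_v2_custom.uff',
    'output_bin': 'TRT_ssd_mobilenet_v2_custom.bin',
    'num_classes': 3,            # 2 object classes + 1 background class
    'min_size': 0.2,
    'max_size': 0.95,
    'input_order': [1, 0, 2],    # order of the NMS plugin inputs; a wrong
                                 # order is a common cause of failed builds
}
```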

PiyalGeorge commented 4 years ago

@jkjung-avt, thanks a lot, that resolved my issue.

I would like to ask two more questions:

  1. Which model is best for detecting small objects on the Jetson Nano in real time?
  2. How can I do RTSP streaming on the Jetson Nano in real time? (Is it possible?)

jkjung-avt commented 4 years ago
  1. I don't have a good answer; it depends on your application. Faster R-CNN-style models are generally better at detecting small objects, but they are much slower than SSD. If you'd like to do some research, I'd suggest spending some time studying EfficientDet and YOLOv4.

  2. It is certainly possible. You could refer to my Tegra Camera Recorder blog post, and try modifying the VideoWriter GStreamer string to use an RTSP sink instead; a hedged sketch follows below.
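A minimal sketch of that idea, assuming OpenCV built with GStreamer support, the Jetson's omxh264enc hardware encoder, and the rtspclientsink element (from the gst-rtsp-server plugins) publishing to an RTSP server already listening at the given URL:

```python
import cv2

WIDTH, HEIGHT, FPS = 1280, 720, 30

# appsrc receives BGR frames from VideoWriter; videoconvert feeds the
# hardware H.264 encoder; rtspclientsink publishes the stream to an
# RTSP server (e.g. rtsp-simple-server) at the given location.
gst_out = ('appsrc ! videoconvert ! omxh264enc ! h264parse ! '
           'rtspclientsink location=rtsp://127.0.0.1:8554/live')
writer = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0,
                         float(FPS), (WIDTH, HEIGHT))
if not writer.isOpened():
    raise RuntimeError('failed to open GStreamer output pipeline')

cap = cv2.VideoCapture(0)  # or wherever your detection frames come from
while True:
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(cv2.resize(frame, (WIDTH, HEIGHT)))

cap.release()
writer.release()
```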