NVIDIA / object-detection-tensorrt-example

Running object detection on a webcam feed using TensorRT on NVIDIA GPUs in Python.

detect_objects_webcam.py can't run #7


GraceKafuu commented 5 years ago

TensorRT inference engine settings:

```
[TensorRT] ERROR: UffParser: Validator error: concat_box_loc: Unsupported operation _FlattenConcat_TRT
Building TensorRT engine. This may take few minutes.
[TensorRT] ERROR: Network must have at least one output
Engine: None
Traceback (most recent call last):
  File "detect_objects_webcam.py", line 190, in <module>
    main()
  File "detect_objects_webcam.py", line 157, in main
    batch_size=args.max_batch_size)
  File "/home/nvidia/object-detection-tensorrt-example-master/SSD_Model/utils/inference.py", line 116, in __init__
    engine_utils.save_engine(self.trt_engine, trt_engine_path)
  File "/home/nvidia/object-detection-tensorrt-example-master/SSD_Model/utils/engine.py", line 91, in save_engine
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
```
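
The root cause is the first error: the UFF parser rejects the `_FlattenConcat_TRT` op, the network ends up with no outputs, and the engine build therefore returns `None`; the `AttributeError` is only the downstream symptom. A guard like this sketch surfaces that directly (the function mirrors the shape of the repo's `save_engine` helper, not its exact code):

```python
def save_engine(engine, path):
    # The builder returns None when the network is invalid, e.g. after
    # the UFF parser rejected _FlattenConcat_TRT above.
    if engine is None:
        raise RuntimeError("Engine build failed; check the parser errors above.")
    # engine.serialize() returns a host-memory buffer we can write out.
    with open(path, "wb") as f:
        f.write(engine.serialize())
```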

TheAeroes commented 5 years ago

I'm assuming you are not running through the Docker container. To run without it, you need to compile the flattenConcat plugin from the TensorRT open-source repo.
Link to the repo: https://github.com/NVIDIA/TensorRT
Link to the plugin: https://github.com/NVIDIA/TensorRT/tree/release/5.1/plugin/flattenConcat
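
Once compiled, the plugin has to be loaded into the Python process before the UFF parser runs. A minimal sketch of the usual pattern (the `.so` path is an assumption; point it at your own build output):

```python
import ctypes
import tensorrt as trt

# Load the library containing the compiled FlattenConcat plugin so the
# _FlattenConcat_TRT op can be resolved during UFF parsing. The path is
# hypothetical -- use wherever your TensorRT OSS build put it.
ctypes.CDLL("/path/to/TensorRT/build/out/libnvinfer_plugin.so")

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# Register every plugin in the loaded libraries with the global plugin
# registry ("" = default plugin namespace).
trt.init_libnvinfer_plugins(TRT_LOGGER, "")
```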

An easier, more reliable way to get it running without Docker is to follow the commands in the Dockerfile (in this repo) to set up a similar environment on your system. For your error, you probably want to focus on the TensorRT part of the Dockerfile.

Just make sure you have the same version of CMake (3.14.4) and that CMake can see your CUDA compiler.
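
To check that the environment is wired up before building the engine, you can list what the plugin registry actually contains. A rough sketch, assuming the plugin library from the step above has already been loaded:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# The UFF SSD model needs the FlattenConcat creator to show up here;
# "FlattenConcat_TRT" is the name the OSS plugin registers itself under.
names = [c.name for c in trt.get_plugin_registry().plugin_creator_list]
print(names)
assert "FlattenConcat_TRT" in names, "flattenConcat plugin not registered"
```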