NVIDIA-AI-IOT / yolo_deepstream

yolo model qat and deploy with deepstream&tensorrt
Apache License 2.0

[Solution] TensorRT 8.0.1 engine for YOLOv4 in standalone mode #21

Open · frenky-strasak opened this issue 3 years ago

frenky-strasak commented 3 years ago

To compile the YOLOv4 sample with batched NMS against TensorRT 8.0.1, I had to make these changes:

  1. Comment out `builder->allowGPUFallback(true);` in SampleYolo.cpp. According to the docs, `allowGPUFallback` was removed in TensorRT 8.0.1. If I understand correctly, I no longer need to care about it in TensorRT 8.0.1. Am I right?
  2. Comment out all `MYELIN_LIB` and `ENABLE_MYELIN` entries in Makefile.config to prevent the error `/usr/bin/ld: cannot find -lmyelin`. It seems to me that TensorRT 8.0.1 no longer ships a separate Myelin library, but I could not find anything about it. Do you have any idea?
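
For step 1: rather than just deleting the call, TensorRT 8 exposes an equivalent flag on `IBuilderConfig` (it only matters when layers are scheduled on DLA, which is why it is safe to drop on a pure-GPU build). A minimal sketch, assuming the sample already creates a builder config; it needs the TensorRT 8.x headers and libraries to compile:

```cpp
// Sketch only: requires TensorRT 8.x to build.
// TensorRT 8 removed IBuilder::allowGPUFallback(); the replacement is a
// flag on IBuilderConfig. It is only meaningful when DLA is in use,
// letting unsupported layers fall back to the GPU.
#include "NvInfer.h"

void enableGpuFallback(nvinfer1::IBuilderConfig* config)
{
    // Equivalent of the old builder->allowGPUFallback(true):
    config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);
}
```

The hypothetical `enableGpuFallback` helper is just for illustration; in SampleYolo.cpp the `setFlag` call would go wherever the builder config is populated.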

After these two steps it compiles and works. Good luck!
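
The Makefile.config change in step 2 amounts to commenting out the Myelin lines. A sketch of what that looks like (the exact assignments may differ between releases; only the variable names `MYELIN_LIB` and `ENABLE_MYELIN` come from the sample):

```makefile
# Makefile.config (sketch): Myelin appears to have been folded into the
# core TensorRT 8 libraries, so the standalone -lmyelin link flag no
# longer resolves at link time.
# ENABLE_MYELIN := 1
# MYELIN_LIB := -lmyelin
```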