Closed — lpkoh closed this issue 2 years ago
@lpkoh The TensorRT YOLO engines built with this repo would contain the "yolo_layer" plugin. So you'll have to call trtexec with something like the following.
$ ./trtexec --loadEngine=yolo/yolov4-tiny.trt --plugins=plugins/libyolo_layer.so
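For a timing run rather than a single load test, the same command can be extended with trtexec's standard benchmarking options. This is only a sketch of a typical invocation (the engine and plugin paths are the ones used above; adjust them to your layout), not output from this repo:

```shell
# Load the engine together with the yolo_layer plugin library,
# then benchmark it: 200 ms warm-up, averaging over 100 runs.
./trtexec \
    --loadEngine=yolo/yolov4-tiny.trt \
    --plugins=plugins/libyolo_layer.so \
    --warmUp=200 \
    --avgRuns=100
```

Without `--plugins`, trtexec cannot deserialize an engine that references the custom "yolo_layer" plugin, which is why the plain `--loadEngine=...` call fails.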
Hi, I am trying to test the speed of the trt models.
I ran yolo_to_onnx.py and onnx_to_tensorrt.py and successfully ran eval_yolo.py and saw the fps and bounding boxes.
However, when I ran ./trtexec --loadEngine= plus other settings for the inference phase (reference: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#trtexec-serialized-timing-cache) to obtain a different FPS estimate, I got "Error opening engine file".
Likewise, when I convert darknet models to ONNX using Tianxiaomo's repository and then convert those to TRT engines using ./trtexec, the resulting models are not usable in this repo.
Could you clarify this issue?
As a separate clarification: does the conversion in onnx_to_tensorrt.py use FP16 by default?