laugh12321 / TensorRT-YOLO

🚀 Your YOLO Deployment Powerhouse. With the synergy of TensorRT Plugins, CUDA Kernels, and CUDA Graphs, experience lightning-fast inference speeds.
https://github.com/laugh12321/TensorRT-YOLO

[Help]: trtexec export tensorrt model failed #13

Closed: fungtion closed this issue 5 months ago

fungtion commented 5 months ago

When running `trtexec --onnx=model.onnx --saveEngine=model.engine --fp16`, engine creation failed because TensorRT could not find any implementation for node `/model/model/model.0/conv/Conv`.

[screenshot: trtexec error output]
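One way to get more detail than the screenshot shows is to rebuild the engine through the TensorRT Python API with verbose logging. A minimal sketch, assuming TensorRT 8.x and the same `model.onnx` (file paths are illustrative); it also raises the builder workspace limit, since insufficient workspace is a common cause of "could not find any implementation":

```python
import tensorrt as trt

# VERBOSE logging surfaces which tactics were tried (and rejected)
# for the failing Conv node, unlike trtexec's default output.
logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
# Give the tactic search more room to work with (2 GiB here;
# adjust to what your GPU can spare).
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 2 << 30)
config.set_flag(trt.BuilderFlag.FP16)

engine_bytes = builder.build_serialized_network(network, config)
if engine_bytes is None:
    raise SystemExit("Engine build failed; see VERBOSE log above")

with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```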

laugh12321 commented 5 months ago

Was model.onnx exported using export.py? Which version of TensorRT are you using? Could you provide the model and the version details for testing?

fungtion commented 5 months ago

Sorry for the late reply. I'm using TensorRT 8.6.1.6 + CUDA 11.8 + cuDNN 8.8 to export yolov9-c-converted.onnx to a TensorRT engine. I tested this configuration on a 4090 and a P4 and it worked fine, but when I used the same configuration on a Titan X, it reported this error.
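Since the failure appears only on the Titan X, one thing worth checking is whether that card's TensorRT platform reports fast native FP16 at all (Maxwell/Pascal-era Titan X cards often don't). A quick probe, assuming the TensorRT 8.x Python API:

```python
import tensorrt as trt

# If the platform reports no fast FP16, TensorRT may have no FP16
# kernel with which to implement some layers, which can surface as
# "could not find any implementation" during the build.
logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
print("fast FP16 kernels available:", builder.platform_has_fast_fp16)
print("fast INT8 kernels available:", builder.platform_has_fast_int8)
```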

laugh12321 commented 5 months ago

If you're encountering export failures only on certain devices, it is likely an issue within TensorRT itself. You may want to refer to https://github.com/NVIDIA/TensorRT/issues/3640 for more information and potential solutions.
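Pending a fix upstream, one hedged workaround is to request FP16 only when the platform actually supports it, falling back to FP32 on older cards; a sketch under the same TensorRT 8.x assumption:

```python
import tensorrt as trt

# Fall back to FP32 on devices without fast FP16 so the build can
# succeed while the device-specific failure is investigated.
logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
config = builder.create_builder_config()
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)
else:
    print("No fast FP16 on this GPU; building in FP32 instead")
```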

fungtion commented 5 months ago

Thanks, I will check that.