Closed qazi0 closed 3 years ago
Platform & OS:
Linux OMEN-by-HP-15-dc1xxx 5.4.0-72-generic #80~18.04.1-Ubuntu SMP Mon Apr 12 23:26:25 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
CUDA 11.0 (NVIDIA Driver 450.119.03)
TensorRT: 7.2.2.3
PyTorch: 1.7.1+cu110
PyCUDA: 2019.1.2
The problem you've described is the same as this one: https://github.com/jkjung-avt/tensorrt_demos/issues/334
I have just updated the code in this repo to better support custom models. The "yolo_to_onnx.py" code will figure out the output conv layers automatically, so you don't need to modify the source code anymore.
Please just `git pull` the latest code and try again.
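For anyone curious how the output conv layers can be detected from the cfg alone: a minimal sketch of the idea (this is an illustration, not the actual `yolo_to_onnx.py` code, and the cfg snippet is a hypothetical stub). Each `[yolo]` section in a Darknet cfg is immediately preceded by the `[convolutional]` layer that produces its raw predictions, so scanning the section headers is enough to locate all output convs, however many YOLO layers the model has.

```python
# Sketch: auto-detect the [convolutional] sections that directly precede
# [yolo] sections in a Darknet .cfg -- these are the ONNX output layers.
# The cfg text below is a minimal made-up example, not a real model file.

def find_output_convs(cfg_text):
    """Return section indices of [convolutional] layers that
    immediately precede a [yolo] section."""
    sections = []
    for line in cfg_text.splitlines():
        line = line.strip()
        if line.startswith('[') and line.endswith(']'):
            sections.append(line[1:-1])
    outputs = []
    for i, name in enumerate(sections):
        if name == 'yolo' and i > 0 and sections[i - 1] == 'convolutional':
            outputs.append(i - 1)
    return outputs

cfg = """
[net]
[convolutional]
[yolo]
[route]
[convolutional]
[yolo]
[route]
[convolutional]
[yolo]
"""
print(find_output_convs(cfg))  # -> [1, 4, 7]: three output convs, three YOLO layers
```

Because the detection is driven by the cfg itself, a model with 2, 3, or more YOLO layers needs no source-code changes.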
The latest code fixed it! Thank you :)
Hi @jkjung-avt, I've noticed that the `onnx_to_tensorrt.py` script crashes with a segmentation fault if I try to build an engine from an ONNX model (and thus a cfg) that contains more than 2 YOLO layers. The model whose engine I wish to build in TensorRT has 3 YOLO layers. Is this a limitation of this repository? If so, do you have any ideas on how to work around it? My model performs really poorly with 2 YOLO layers, and I really need to build it with 3.
I've attached my config file that I'm using:
yolov4-tiny-aider-416.zip
I can provide you with the trained weights too. Please let me know if this is possible with this repo.
Sincere regards, Siraj