Closed EyGy closed 2 years ago
The onnx model produced by my "yolo_to_onnx.py" does not contain the "yolo" layers (there is no "yolo" operator in ONNX). Instead, the "168_convolutional_lgx", "185_convolutional_lgx" and "202_convolutional_lgx" are the convolutional layers right before the "yolo" layers.
On the other hand, my "onnx_to_tensorrt.py" adds yolo_layer plugins to the network before generating the TensorRT engine. The code is here: https://github.com/jkjung-avt/tensorrt_demos/blob/f53b5ae9b004489463a407d8e9b230f39230d051/yolo/onnx_to_tensorrt.py#L123
If you need to do inference with the onnx model, you'll have to implement postprocessing code that does what the "yolo" layers would have done. You might refer to my old "yolo.py" implementation here: https://github.com/jkjung-avt/tensorrt_demos/blob/e136d0c8459fe8e94c41cec8d43a7e7499656950/utils/yolo.py
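For reference, the decoding that one "yolo" layer performs on a raw convolutional output can be sketched in NumPy roughly like this. This is a minimal YOLOv3-style decode under stated assumptions: the anchor values and input size below are illustrative, and yolov4-family configs additionally apply a `scale_x_y` factor that this sketch omits.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolo_output(conv_out, anchors, num_classes, input_size):
    """Decode one raw conv output map into detection candidates.

    conv_out: array of shape (num_anchors*(5+num_classes), H, W)
    anchors:  list of (w, h) pairs in pixels for this output scale
    Returns an array of shape (H*W*num_anchors, 5+num_classes) with
    columns [cx, cy, w, h, objectness, class scores...] in pixels.
    """
    num_anchors = len(anchors)
    _, grid_h, grid_w = conv_out.shape
    # (A, 5+C, H, W) -> (H, W, A, 5+C)
    out = conv_out.reshape(num_anchors, 5 + num_classes, grid_h, grid_w)
    out = out.transpose(2, 3, 0, 1)

    # Grid-cell offsets for the box centers
    xs, ys = np.meshgrid(np.arange(grid_w), np.arange(grid_h))
    xs = xs[..., None]  # (H, W, 1), broadcasts over anchors
    ys = ys[..., None]
    stride_x = input_size / grid_w
    stride_y = input_size / grid_h

    anchors = np.asarray(anchors, dtype=np.float32)  # (A, 2)
    boxes = np.empty_like(out)
    boxes[..., 0] = (sigmoid(out[..., 0]) + xs) * stride_x  # center x
    boxes[..., 1] = (sigmoid(out[..., 1]) + ys) * stride_y  # center y
    boxes[..., 2] = np.exp(out[..., 2]) * anchors[:, 0]     # width
    boxes[..., 3] = np.exp(out[..., 3]) * anchors[:, 1]     # height
    boxes[..., 4:] = sigmoid(out[..., 4:])                  # obj + cls
    return boxes.reshape(-1, 5 + num_classes)
```

After decoding all three scales this way, you would apply a confidence threshold and NMS to get the final detections.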
Thank you for your fast answer and detailed explanation! I figured it is easier in my case to do the CPU inference directly on the darknet files with OpenCV (instead of using onnx).
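In case it helps anyone going the same route, the OpenCV DNN approach can be sketched like this. The file names, input size, and thresholds below are placeholders, not values from this repo:

```python
import numpy as np

def filter_detections(layer_outputs, conf_threshold=0.25):
    """Keep boxes whose best class score exceeds the threshold.

    layer_outputs: list of arrays shaped (num_boxes, 5+num_classes),
    as cv2.dnn returns for Darknet YOLO models (cx, cy, w, h are
    normalized to [0, 1]). Boxes are converted to top-left format
    as expected by cv2.dnn.NMSBoxes.
    """
    boxes, confidences, class_ids = [], [], []
    for output in layer_outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > conf_threshold:
                cx, cy, w, h = det[:4]
                boxes.append([float(cx - w / 2), float(cy - h / 2),
                              float(w), float(h)])
                confidences.append(confidence)
                class_ids.append(class_id)
    return boxes, confidences, class_ids

if __name__ == "__main__":
    import cv2
    # Placeholder file names -- substitute your own cfg/weights/image
    net = cv2.dnn.readNetFromDarknet("yolov4x-mish.cfg",
                                     "yolov4x-mish.weights")
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
    img = cv2.imread("test.jpg")
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (640, 640),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outs = net.forward(net.getUnconnectedOutLayersNames())
    boxes, confs, ids = filter_detections(outs)
    keep = cv2.dnn.NMSBoxes(boxes, confs, 0.25, 0.45)
```

Note that OpenCV's Darknet importer handles the [yolo] layers itself, so the outputs here are already decoded to normalized box coordinates, unlike the raw conv outputs in the onnx model.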
It is kind of strange, though, that MS just seems not to care about yolo, and in general onnx is not really reliable when it comes to supporting current SOTA methods.
Anyway, I really appreciate you answering so fast, and thanks again for this repo, which is more than helpful for me! :)
Hi @jkjung-avt, first of all thank you very much for this repo! I really appreciate all the great work you have put into this.
Something I can't really wrap my head around is that after using the yolo_to_onnx script for my own custom-trained yolov4x-mish, I get different output layers in my onnx model.
I expected the output node to look like this:
And instead I got an output node that looks like this:
Further conversion to .trt seems to work fine, but I also need the onnx model for my use case. Do you have any idea how I can get the onnx model output in the desired format (see first image)? Maybe all I need to do is add a final concat layer? Any help would be greatly appreciated!
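For what it's worth, a single Concat alone wouldn't suffice, since the three conv outputs have different spatial sizes; each would first need a Reshape/Transpose to flatten its grid into a box axis. Here is a NumPy sketch of the shape flow, assuming 3 anchors per scale and illustrative grid sizes; the same sequence could be expressed as ONNX Reshape/Transpose/Concat nodes appended to the graph (the raw values would still lack the sigmoid/anchor decoding the "yolo" layers apply):

```python
import numpy as np

def merge_yolo_outputs(conv_outs, num_anchors=3):
    """Flatten each raw (1, A*(5+C), H, W) conv map into
    (1, A*H*W, 5+C) and concatenate along the box axis, yielding a
    single (1, total_boxes, 5+C) output tensor."""
    merged = []
    for out in conv_outs:
        n, ch, h, w = out.shape
        per_anchor = ch // num_anchors        # 5 + num_classes
        x = out.reshape(n, num_anchors, per_anchor, h * w)
        x = x.transpose(0, 1, 3, 2)           # (1, A, H*W, 5+C)
        merged.append(x.reshape(n, num_anchors * h * w, per_anchor))
    return np.concatenate(merged, axis=1)
```

The grid sizes in practice depend on the network input resolution (input // 32, // 16, // 8), so for a 640x640 input they would be 20, 40, and 80.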