Linaom1214 / TensorRT-For-YOLO-Series

tensorrt for yolo series (YOLOv10,YOLOv9,YOLOv8,YOLOv7,YOLOv6,YOLOX,YOLOv5), nms plugin support

Failed inference of custom yolov7 model due to broadcast of input array #137

Open obidare-folu opened 3 months ago

obidare-folu commented 3 months ago

Hello. I trained a custom yolov7 model with 3 classes and converted it to ONNX using https://github.com/WongKinYiu/yolov7/blob/main/export.py on a Jetson Xavier device with the flags `--img-size 512 512 --batch 1`. I then cloned this repository and built a TensorRT engine with its export.py:

`python3 export.py -o ../models/yolov7/runs/train/fold1/weights/onnx_trial.onnx -e ../models/yolov7/runs/train/fold1/weights/onnx_trial.trt`

These are the first and last few logs from running the command:

[screenshot: export logs]

[screenshot: export logs, continued]

I then tried to run both video and image inference with

`python3 trt.py -e ../models/yolov7/runs/train/fold1/weights/onnx_trial.trt -v 0`

but neither worked. Here is the output:

[screenshot: broadcast-error traceback]

I have changed `self.n_classes` to 3 in both trt.py and utils.py, and I also tried `--calib_batch_size 1`, but neither helped.
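A quick way to see where a "could not broadcast input array" error like this comes from is to compare the output shape the post-processing code expects against what the engine actually produces. The sketch below is illustrative only (the class count and image size come from this issue; the anchor count of 3 per cell and strides 8/16/32 are standard yolov7 assumptions, not something confirmed in the thread):

```python
import numpy as np

# Hypothetical values matching this issue: 512x512 input, 3 classes.
img_size = 512
num_classes = 3
anchors_per_cell = 3       # standard yolov7 assumption
strides = (8, 16, 32)      # standard yolov7 detection strides

# With the decode ("grid") step baked into the ONNX graph, the model emits one
# flat tensor of shape (batch, total_predictions, 5 + num_classes).
num_preds = sum(anchors_per_cell * (img_size // s) ** 2 for s in strides)
decoded = np.zeros((1, num_preds, 5 + num_classes), dtype=np.float32)
print(decoded.shape)  # (1, 16128, 8)

# Without the decode step, export instead produces raw per-scale head tensors
# like these; copying them into a buffer sized for `decoded` is what raises
# the broadcast error reported in this issue.
raw_heads = [
    np.zeros((1, anchors_per_cell, img_size // s, img_size // s, 5 + num_classes),
             dtype=np.float32)
    for s in strides
]
print([h.shape for h in raw_heads])
```

If the engine's output binding shape looks like the per-scale tensors rather than the flat decoded one, the ONNX export is the problem, not trt.py.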

Linaom1214 commented 3 months ago

obidare-folu

The ONNX model must include the decode result: re-export it with the decode step included before building the TensorRT engine.
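A hedged sketch of that re-export, assuming yolov7's export.py supports a `--grid` flag that bakes the decode step into the graph (check the flags against your checkout; the weight path here is hypothetical):

```shell
# In the WongKinYiu/yolov7 checkout: re-export the .pt weights with the
# decode ("grid") step included in the ONNX graph.
python export.py --weights runs/train/fold1/weights/best.pt \
    --grid --simplify --img-size 512 512 --batch-size 1

# Then, in this repository, rebuild the TensorRT engine from the new ONNX
# (same command as in the issue):
python3 export.py -o ../models/yolov7/runs/train/fold1/weights/onnx_trial.onnx \
    -e ../models/yolov7/runs/train/fold1/weights/onnx_trial.trt
```

Note the two export.py scripts are different: the first lives in the yolov7 training repo and produces the ONNX, the second lives in this repo and produces the engine.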