ayazhassan opened this issue 1 year ago
Hi, you should check the return value of `self.infer(img)`; it should be unpacked like: `value_a, value_b = self.infer(img)`
But I used the same TensorRT inference script as provided on GitHub, and it requires 4 outputs from `self.infer(...)`. I noticed that there is a difference between the ONNX models created on two different machines (maybe due to a difference in ONNX Runtime versions). PFA the model differences. Can you please elaborate on this? I am using exactly the same YOLO model and export file.
[image: image.png]

Regards,
Dr. Ayaz ul Hassan Khan
Ph.D. Computer Science and Engineering
Assistant Professor, Computer Engineering Department, College of Computing and Mathematics
King Fahd University of Petroleum and Minerals, Dhahran, Kingdom of Saudi Arabia
My Personal Website: https://sites.google.com/site/ayazresearch/
Hi @ayazhassan ,

In the Colab example, the ONNX model is exported with the `--include-nms` flag, so there is an `EfficientNMS_TRT` layer at the end of the model, which makes the number of outputs equal to 4: `num_dets`, `det_boxes`, `det_scores`, and `det_classes`.

![image](https://user-images.githubusercontent.com/34006713/208582713-63de7a2e-1684-4758-9a84-d12462b5e8b0.png)

If you export the ONNX model without `--include-nms` (or you did but the export failed), there will be only 2 outputs: `boxes` and `classes`.

![image](https://user-images.githubusercontent.com/34006713/208583562-eb83893f-2f9a-4fa0-9edb-ce8fb106c52f.png)

You should check whether the ONNX export succeeded by reading the export log line by line. (In my case, it failed due to an import error that occurred while trying to import `onnx_graphsurgeon` in `utils/add_nms.py`.)

Let me know if this helps, thanks very much!
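The 4-output vs. 2-output distinction above can be guarded at the unpack step. This is a hypothetical sketch, not the repo's actual script: `outputs` stands for whatever `self.infer(img)` returns, and the output names follow the export conventions described in the thread.

```python
# Hypothetical sketch: guard the unpack step so a 2-output engine fails with
# a clear message instead of a bare "not enough values to unpack" error.

def unpack_outputs(outputs):
    """Unpack engine outputs, distinguishing NMS and non-NMS exports."""
    if len(outputs) == 4:
        # Exported with --include-nms: EfficientNMS_TRT appends 4 outputs.
        num_dets, det_boxes, det_scores, det_classes = outputs
        return num_dets, det_boxes, det_scores, det_classes
    if len(outputs) == 2:
        # Exported without --include-nms: raw boxes/classes only.
        raise RuntimeError(
            "Engine has 2 outputs (boxes, classes); re-export the ONNX model "
            "with --include-nms, or run NMS on the host instead."
        )
    raise ValueError(f"Unexpected number of outputs: {len(outputs)}")
```

With a 4-output engine this returns the tuple unchanged; with a 2-output engine it tells you which re-export step was missed.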
Thank you for the clarification.
I am getting the following error while running the TensorRT engine on one machine, while it works on another machine. Can you please tell me what the issue could be?
```
Traceback (most recent call last):
  File "/home/dinahaamed/Projects/ONNX/trt-engine-w4.py", line 323, in <module>
    origin_img, t = pred.queue_inference()
  File "/home/dinahaamed/Projects/ONNX/trt-engine-w4.py", line 86, in queue_inference
    num, final_boxes, final_scores, final_cls_inds = self.infer(img)
ValueError: not enough values to unpack (expected 4, got 2)
```
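This traceback is exactly what a 2-output engine produces when the script expects 4 outputs. A minimal illustrative stub (not the actual script) reproduces it:

```python
# Illustrative stub: an engine exported without --include-nms returns only
# (boxes, classes), so unpacking into 4 names raises this exact ValueError.

def infer_stub(img):
    return ("boxes", "classes")  # 2 outputs, as on the failing machine

try:
    num, final_boxes, final_scores, final_cls_inds = infer_stub(None)
except ValueError as err:
    print(err)  # not enough values to unpack (expected 4, got 2)
```

So the difference between the two machines is almost certainly the export path (with vs. without `--include-nms`), not the inference script itself.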