violet17 / yolov5_demo

OpenVINO demo & Convert to OpenVINO IR == a complete and detailed PyTorch-to-OpenVINO conversion walkthrough, come in and take a look
Apache License 2.0

too many values to unpack #9

Open kuonumber opened 3 years ago

kuonumber commented 3 years ago

[ INFO ] Creating Inference Engine...
[ INFO ] Loading network: yolov5s_v3.xml
[ INFO ] Preparing inputs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference...
To close the application, press 'CTRL+C' here or switch to the output window and press ESC key
To switch between sync/async modes, press TAB key in the output window
[ INFO ] Layer 412 parameters:
[ INFO ]     classes : 80
[ INFO ]     num     : 3
[ INFO ]     coords  : 4
[ INFO ]     anchors : [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0]
/usr/lib/python3/dist-packages/apport/report.py:13: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import fnmatch, glob, traceback, errno, sys, atexit, locale, imp, stat
Traceback (most recent call last):
  File "yolov5_demo_OV2021.3.py", line 412, in <module>
    sys.exit(main() or 0)
  File "yolov5_demo_OV2021.3.py", line 329, in main
    objects += parse_yolo_region(out_blob.buffer, in_frame.shape[2:],
  File "yolov5_demo_OV2021.3.py", line 162, in parse_yolo_region
    out_blob_n, out_blob_c, out_blob_h, out_blob_w = blob.shape
ValueError: too many values to unpack (expected 4)

Hi, have you ever faced this problem?
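For context, the failing line unpacks four values, but the exported model hands back a five-dimensional output blob. A minimal sketch of the same failure (the shape here is illustrative, not taken from the log above):

```python
import numpy as np

# Stand-in for out_blob.buffer; the trailing 85 (= 4 box coords + objectness + 80 classes)
# is only an example shape, not the one from the log above.
blob = np.zeros((1, 3, 40, 40, 85), dtype=np.float32)

# Mirrors line 162 of yolov5_demo_OV2021.3.py and raises
# ValueError: too many values to unpack (expected 4)
out_blob_n, out_blob_c, out_blob_h, out_blob_w = blob.shape
```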

kuonumber commented 3 years ago

In order to reuse your code, I used the requirements.txt from yolov5 tag v3.0 and changed opset_version to 10. Can you give me some suggestions? Thanks.
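For reference, a self-contained sketch of that export step with opset 10; the placeholder module below only stands in for the real YOLOv5 model that yolov5's export.py would load:

```python
import torch

# Placeholder standing in for the loaded YOLOv5 model; a real run uses yolov5's export.py.
model = torch.nn.Conv2d(3, 255, kernel_size=1)
model.eval()
img = torch.zeros(1, 3, 640, 640)   # dummy input of the shape export.py uses

torch.onnx.export(
    model, img, "yolov5s.onnx",
    opset_version=10,               # the value changed from the script's default opset
    input_names=["images"],
    output_names=["output"],
)
```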

kuonumber commented 3 years ago

Btw, I used OpenVINO from https://github.com/openvinotoolkit/openvino.git, branch 2021.3.

violet17 commented 3 years ago

@kuonumber Hi, sorry for the late response. Have you solved this problem? Can you print blob.shape?

aoi127 commented 3 years ago

> @kuonumber Hi, sorry for the late response. Have you solved this problem? Can you print blob.shape?

I ran into the same problem, and blob.shape = (1, 3, 40, 40, 6). Could you please help me? Thanks a lot.
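For what it's worth, (1, 3, 40, 40, 6) looks like the Detect head's (batch, anchors, grid_y, grid_x, outputs) tensor for a 1-class model (6 = 4 box coords + objectness + 1 class), while the demo expects the raw 4-D (batch, anchors*outputs, grid_y, grid_x) convolution output. One hedged workaround, assuming the export kept the raw (un-sigmoided) per-anchor values, is to fold the blob back before handing it to parse_yolo_region:

```python
import numpy as np

# Stand-in for out_blob.buffer with the shape reported above.
blob = np.zeros((1, 3, 40, 40, 6), dtype=np.float32)

bs, na, ny, nx, no = blob.shape
# Move the per-anchor outputs next to the anchor axis, then merge them into one
# channel dimension: (bs, na, ny, nx, no) -> (bs, na*no, ny, nx).
blob_4d = blob.transpose(0, 1, 4, 2, 3).reshape(bs, na * no, ny, nx)
print(blob_4d.shape)  # (1, 18, 40, 40)
```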

gfleg77 commented 2 years ago

> @kuonumber Hi, sorry for the late response. Have you solved this problem? Can you print blob.shape?
>
> I ran into the same problem, and blob.shape = (1, 3, 40, 40, 6). Could you please help me? Thanks a lot.

  1. Edit the export.py inside the yolov5 folder.
  2. Inside export.py you will find "opset_version=opset".
  3. Change it to opset_version=10.
  4. Run the export of yolov5s.pt to ONNX.
  5. Open the created .onnx file with Netron, search for "Transpose", and click on one of the results.
  6. Above the Transpose nodes you will find three "Conv" nodes; click on each of them to see its name in the node properties.
  7. In my case the names were Conv_264, Conv_230, Conv_196.
  8. Use these names in the command for the OpenVINO Model Optimizer (mo.py): "mo --input_model yolov5s.onnx -s 255 --reverse_input_channels --output Conv_264,Conv_230,Conv_196"
  9. It will create the .xml and .bin files to use with the code [yolov5_demo_OV2021.3.py].
  10. Enjoy the demo :)
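If you would rather not hunt for the node names in Netron, a rough helper along these lines can list them, assuming the exported graph keeps the usual Conv -> Reshape -> Transpose pattern in the Detect head (the printed names will differ between exports):

```python
import onnx

model = onnx.load("yolov5s.onnx")
graph = model.graph

# Map every tensor name to the node that produces it.
producers = {out: node for node in graph.node for out in node.output}

conv_names = []
for node in graph.node:
    if node.op_type != "Transpose":
        continue
    # Walk upstream from the Transpose until a Conv is reached.
    src = producers.get(node.input[0])
    while src is not None and src.op_type != "Conv":
        src = producers.get(src.input[0])
    if src is not None:
        conv_names.append(src.name)

# Paste the result into the Model Optimizer call, e.g.
# mo --input_model yolov5s.onnx -s 255 --reverse_input_channels --output <name1>,<name2>,<name3>
print(",".join(conv_names))
```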

aoi127 commented 2 years ago

> @kuonumber Hi, sorry for the late response. Have you solved this problem? Can you print blob.shape?
>
> I ran into the same problem, and blob.shape = (1, 3, 40, 40, 6). Could you please help me? Thanks a lot.
>
>   1. Edit the export.py inside the yolov5 folder.
>   2. Inside export.py you will find "opset_version=opset".
>   3. Change it to opset_version=10.
>   4. Run the export of yolov5s.pt to ONNX.
>   5. Open the created .onnx file with Netron, search for "Transpose", and click on one of the results.
>   6. Above the Transpose nodes you will find three "Conv" nodes; click on each of them to see its name in the node properties.
>   7. In my case the names were Conv_264, Conv_230, Conv_196.
>   8. Use these names in the command for the OpenVINO Model Optimizer (mo.py): "mo --input_model yolov5s.onnx -s 255 --reverse_input_channels --output Conv_264,Conv_230,Conv_196"
>   9. It will create the .xml and .bin files to use with the code [yolov5_demo_OV2021.3.py].
>   10. Enjoy the demo :)

Thank you.