BobLd / YOLOv4MLNet

Use the YOLO v4 and v5 (ONNX) models for object detection in C# using ML.Net
MIT License

Parsing a different custom model #14

Closed Westy-Dev closed 2 years ago

Westy-Dev commented 2 years ago

First of all, many thanks for your work, this area seems not very talked about and your work has been invaluable to my progress.

I have had a look at the provided ONNX model and the code that is used to parse and predict with it.

I am now looking to modify this to work for my own ONNX model, but have found one major difference in the model shape, and I am not sure how to get around this issue.

I have an extra "output" with shape [1, 25200, 20] and I'm unsure what to do with it. The rest of the outputs are also in a slightly different format: [1, 3, 80, 80, 20] vs [1, 52, 52, 3, 85], for example.
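For context, the 25200 dimension is what you'd expect from YOLOv5's concatenated detection head at a 640x640 input: three grids (strides 8, 16, 32) with 3 anchors per cell, flattened into one tensor, while the [1, 3, 80, 80, 20] outputs are the per-scale raw heads. A quick sanity check on that arithmetic (assuming the standard YOLOv5 strides):

```python
# 25200 = 3 anchors * (80*80 + 40*40 + 20*20) grid cells at a 640x640 input
size = 640
strides = [8, 16, 32]
total = sum(3 * (size // s) ** 2 for s in strides)
print(total)  # 25200
```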

See my Netron output of my ONNX model: [screenshot]

I exported my model from a Yolov5 format using the export.py from Ultralytics, as follows:

sudo python3 export.py --data <.yaml> --weights <.pt> --img 640 --batch 1 --opset 9 --include onnx --simplify

Any help or guidance anyone can give would be greatly appreciated.

Edit: I am looking at parsing and inference in C#

BobLd commented 2 years ago

Hi @Westycoot, thanks for your message.

Just to make sure, did you have a look at the below (from the readme)?

YOLO v5 in ML.Net

Use YOLO v5 with ML.Net

Thanks to raulsf6, deanbennettdeveloper and keesschollaart81

The YOLO v5 model you'll find in this branch looks to be similar to yours, with the [1, 25200, xx] layer.

Let me know if this is useful.

Westy-Dev commented 2 years ago

Hi @BobLd, after some work I realised I had pulled in the wrong project, and I now have detection working using the above. Many thanks for your hard work. I have now tried porting this to HoloLens, where I wish to do the detection, but it seems onnxruntime isn't supported on that device, so I am now looking into Windows.AI.MachineLearning instead.

While this is separate from the issue I posted above, do you know how the model format version is produced? For example, my version is ONNX v7 but I need it to be v4 or v5.

Many thanks.

Westy-Dev commented 2 years ago

[screenshot]

BobLd commented 2 years ago

Hi @Westycoot, I think what you are referring to is the opset_version. When you convert your model to ONNX format, you should be able to set the version, e.g.:

    torch.onnx.export(model,
                      [image],
                      model_path,
                      opset_version=11,  # <-- sets the ONNX opset version
                      do_constant_folding=True,
                      ...)

I've used that here.

Also, I guess you might be able to convert an ONNX model from one opset_version to another, though I've never done that myself. Maybe try to Google it.

AshwinRaikar88 commented 2 years ago

Hi @Westycoot, try setting the opset version to 12; currently the ML.Net framework does not support opset version 13. The inference script is pretty similar to YOLOv5 small; you will just need to change the output dimensions in a few places in the files.

You can safely ignore the extra outputs like 454, 495 and 536 if you only want the box predictions.

Please refer to this inference script to get an idea: https://github.com/BobLd/YOLOv4MLNet/blob/yolo-v5-incl/YOLOv4MLNet/DataStructures/YoloV4Prediction.cs
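As a rough illustration of what "just use the box prediction output" means (this is a NumPy sketch, not the repo's C# code, and the [x, y, w, h, objectness, class scores...] row layout is an assumption based on the [1, 25200, 20] shape):

```python
import numpy as np

# Stand-in for the model's single [1, 25200, 20] detection output;
# each row is assumed to be [x, y, w, h, objectness, 15 class scores].
preds = np.random.rand(1, 25200, 20).astype(np.float32)

rows = preds[0]                        # (25200, 20)
keep = rows[:, 4] > 0.5                # objectness threshold
boxes = rows[keep, :4]                 # x, y, w, h per kept row
class_ids = rows[keep, 5:].argmax(axis=1)
print(boxes.shape[0], "candidate boxes before NMS")
```

The kept boxes would still need non-maximum suppression, which is what the linked YoloV4Prediction.cs handles on the C# side.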