ultralytics / yolov5

YOLOv5 πŸš€ in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

TFLite, ONNX, CoreML, TensorRT Export #251

glenn-jocher opened this issue 4 years ago

glenn-jocher commented 4 years ago

πŸ“š This guide explains how to export a trained YOLOv5 πŸš€ model from PyTorch to ONNX and TorchScript formats. UPDATED 8 December 2022.

Before You Start

Clone repo and install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7. Models and datasets download automatically from the latest YOLOv5 release.

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

For a TensorRT export example (requires GPU), see the appendix section of our Colab notebook.

Formats

YOLOv5 inference is officially supported in 11 export formats, plus native PyTorch:

πŸ’‘ ProTip: Export to ONNX or OpenVINO for up to 3x CPU speedup. See CPU Benchmarks. πŸ’‘ ProTip: Export to TensorRT for up to 5x GPU speedup. See GPU Benchmarks.

| Format                | export.py --include | Model                    |
|-----------------------|---------------------|--------------------------|
| PyTorch               | -                   | yolov5s.pt               |
| TorchScript           | torchscript         | yolov5s.torchscript      |
| ONNX                  | onnx                | yolov5s.onnx             |
| OpenVINO              | openvino            | yolov5s_openvino_model/  |
| TensorRT              | engine              | yolov5s.engine           |
| CoreML                | coreml              | yolov5s.mlmodel          |
| TensorFlow SavedModel | saved_model         | yolov5s_saved_model/     |
| TensorFlow GraphDef   | pb                  | yolov5s.pb               |
| TensorFlow Lite       | tflite              | yolov5s.tflite           |
| TensorFlow Edge TPU   | edgetpu             | yolov5s_edgetpu.tflite   |
| TensorFlow.js         | tfjs                | yolov5s_web_model/       |
| PaddlePaddle          | paddle              | yolov5s_paddle_model/    |
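
For example, to produce the OpenVINO and TensorRT formats above (sketch; TensorRT export needs a CUDA device):

python export.py --weights yolov5s.pt --include openvino                  # CPU speedup
python export.py --weights yolov5s.pt --include engine --device 0 --half  # GPU speedup, FP16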

Benchmarks

The benchmarks below were run on Colab Pro with the YOLOv5 tutorial notebook. To reproduce:

python benchmarks.py --weights yolov5s.pt --imgsz 640 --device 0

Colab Pro V100 GPU

benchmarks: weights=/content/yolov5/yolov5s.pt, imgsz=640, batch_size=1, data=/content/yolov5/data/coco128.yaml, device=0, half=False, test=False
Checking setup...
YOLOv5 πŸš€ v6.1-135-g7926afc torch 1.10.0+cu111 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB)
Setup complete βœ… (8 CPUs, 51.0 GB RAM, 46.7/166.8 GB disk)

Benchmarks complete (458.07s)
                   Format  mAP@0.5:0.95  Inference time (ms)
0                 PyTorch        0.4623                10.19
1             TorchScript        0.4623                 6.85
2                    ONNX        0.4623                14.63
3                OpenVINO           NaN                  NaN
4                TensorRT        0.4617                 1.89
5                  CoreML           NaN                  NaN
6   TensorFlow SavedModel        0.4623                21.28
7     TensorFlow GraphDef        0.4623                21.22
8         TensorFlow Lite           NaN                  NaN
9     TensorFlow Edge TPU           NaN                  NaN
10          TensorFlow.js           NaN                  NaN

Colab Pro CPU

benchmarks: weights=/content/yolov5/yolov5s.pt, imgsz=640, batch_size=1, data=/content/yolov5/data/coco128.yaml, device=cpu, half=False, test=False
Checking setup...
YOLOv5 πŸš€ v6.1-135-g7926afc torch 1.10.0+cu111 CPU
Setup complete βœ… (8 CPUs, 51.0 GB RAM, 41.5/166.8 GB disk)

Benchmarks complete (241.20s)
                   Format  mAP@0.5:0.95  Inference time (ms)
0                 PyTorch        0.4623               127.61
1             TorchScript        0.4623               131.23
2                    ONNX        0.4623                69.34
3                OpenVINO        0.4623                66.52
4                TensorRT           NaN                  NaN
5                  CoreML           NaN                  NaN
6   TensorFlow SavedModel        0.4623               123.79
7     TensorFlow GraphDef        0.4623               121.57
8         TensorFlow Lite        0.4623               316.61
9     TensorFlow Edge TPU           NaN                  NaN
10          TensorFlow.js           NaN                  NaN

Export a Trained YOLOv5 Model

This command exports a pretrained YOLOv5s model to TorchScript and ONNX formats. yolov5s.pt is the 'small' model, the second-smallest model available. Other options are yolov5n.pt, yolov5m.pt, yolov5l.pt and yolov5x.pt, along with their P6 counterparts, e.g. yolov5s6.pt, or your own custom training checkpoint, e.g. runs/exp/weights/best.pt. For details on all available models please see our README table.

python export.py --weights yolov5s.pt --include torchscript onnx

πŸ’‘ ProTip: Add --half to export models at FP16 half precision for smaller file sizes

Output:

export: data=data/coco128.yaml, weights=['yolov5s.pt'], imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=False, train=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['torchscript', 'onnx']
YOLOv5 πŸš€ v6.2-104-ge3e5122 Python-3.7.13 torch-1.12.1+cu113 CPU

Downloading https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s.pt to yolov5s.pt...
100% 14.1M/14.1M [00:00<00:00, 274MB/s]

Fusing layers... 
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients

PyTorch: starting from yolov5s.pt with output shape (1, 25200, 85) (14.1 MB)

TorchScript: starting export with torch 1.12.1+cu113...
TorchScript: export success βœ… 1.7s, saved as yolov5s.torchscript (28.1 MB)

ONNX: starting export with onnx 1.12.0...
ONNX: export success βœ… 2.3s, saved as yolov5s.onnx (28.0 MB)

Export complete (5.5s)
Results saved to /content/yolov5
Detect:          python detect.py --weights yolov5s.onnx 
Validate:        python val.py --weights yolov5s.onnx 
PyTorch Hub:     model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.onnx')
Visualize:       https://netron.app/

The exported models are saved alongside the original PyTorch model.

Netron Viewer is recommended for visualizing exported models.

Exported Model Usage Examples

detect.py runs inference on exported models:

python detect.py --weights yolov5s.pt                 # PyTorch
                           yolov5s.torchscript        # TorchScript
                           yolov5s.onnx               # ONNX Runtime or OpenCV DNN with --dnn
                           yolov5s_openvino_model     # OpenVINO
                           yolov5s.engine             # TensorRT
                           yolov5s.mlmodel            # CoreML (macOS only)
                           yolov5s_saved_model        # TensorFlow SavedModel
                           yolov5s.pb                 # TensorFlow GraphDef
                           yolov5s.tflite             # TensorFlow Lite
                           yolov5s_edgetpu.tflite     # TensorFlow Edge TPU
                           yolov5s_paddle_model       # PaddlePaddle

val.py runs validation on exported models:

python val.py --weights yolov5s.pt                 # PyTorch
                        yolov5s.torchscript        # TorchScript
                        yolov5s.onnx               # ONNX Runtime or OpenCV DNN with --dnn
                        yolov5s_openvino_model     # OpenVINO
                        yolov5s.engine             # TensorRT
                        yolov5s.mlmodel            # CoreML (macOS only)
                        yolov5s_saved_model        # TensorFlow SavedModel
                        yolov5s.pb                 # TensorFlow GraphDef
                        yolov5s.tflite             # TensorFlow Lite
                        yolov5s_edgetpu.tflite     # TensorFlow Edge TPU
                        yolov5s_paddle_model       # PaddlePaddle

Use PyTorch Hub with exported YOLOv5 models:

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.pt')
                                                       'yolov5s.torchscript')        # TorchScript
                                                       'yolov5s.onnx')               # ONNX Runtime
                                                       'yolov5s_openvino_model')     # OpenVINO
                                                       'yolov5s.engine')             # TensorRT
                                                       'yolov5s.mlmodel')            # CoreML (macOS only)
                                                       'yolov5s_saved_model')        # TensorFlow SavedModel
                                                       'yolov5s.pb')                 # TensorFlow GraphDef
                                                       'yolov5s.tflite')             # TensorFlow Lite
                                                       'yolov5s_edgetpu.tflite')     # TensorFlow Edge TPU
                                                       'yolov5s_paddle_model')       # PaddlePaddle

# Images
img = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, PIL, OpenCV, numpy, list

# Inference
results = model(img)

# Results
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
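
Detections can also be accessed directly, e.g.:

results.xyxy[0]           # tensor of [x1, y1, x2, y2, confidence, class] rows
results.pandas().xyxy[0]  # the same detections as a pandas DataFrame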

OpenCV DNN inference

OpenCV inference with ONNX models:

python export.py --weights yolov5s.pt --include onnx

python detect.py --weights yolov5s.onnx --dnn  # detect
python val.py --weights yolov5s.onnx --dnn  # validate
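
A minimal Python sketch of the same flow, assuming a 640x640 static ONNX export (letterboxing and NMS post-processing omitted):

import cv2

net = cv2.dnn.readNetFromONNX('yolov5s.onnx')  # load the exported model with OpenCV DNN
img = cv2.imread('zidane.jpg')
# scale pixels to 0-1, resize to the export size, swap BGR -> RGB
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (640, 640), swapRB=True, crop=False)
net.setInput(blob)
pred = net.forward()  # (1, 25200, 85): xywh, objectness, 80 class scores
print(pred.shape)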

C++ Inference

Examples of YOLOv5 OpenCV DNN C++ inference on exported ONNX models:

YOLOv5 OpenVINO C++ inference examples:

TensorFlow.js Web Browser Inference

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

mrljwlm commented 3 years ago

When I try to export the model using the latest code I get the following error: Model generate failure: Exporting the operator silu to ONNX opset version 12 is not supported. Please open a bug to request ONNX export support for the missing operator. The activation function SiLU is not supported; how can I solve it?

I use Python 3.8, PyTorch 1.7.0, ONNX 1.8.0.
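
One workaround that has been used for older torch versions is to swap nn.SiLU for an export-friendly equivalent before export (hedged sketch; later export.py versions do something similar, and 'model' is assumed already loaded):

import torch
import torch.nn as nn

class SiLU(nn.Module):
    # decomposed SiLU: opset 12 has no native silu op, but Mul and Sigmoid export fine
    @staticmethod
    def forward(x):
        return x * torch.sigmoid(x)

for m in model.modules():  # swap activations in place before torch.onnx.export
    if hasattr(m, 'act') and isinstance(m.act, nn.SiLU):
        m.act = SiLU()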

glenn-jocher commented 3 years ago

@mrljwlm ONNX export works correctly. The last scheduled check ran 3 hours ago as part of our CI tests; a failure in ONNX export would fail the entire test. https://github.com/ultralytics/yolov5/runs/1745848687?check_suite_focus=true

rush9838465 commented 3 years ago

I'm loading the mlmodel in Xcode and it prompts this error. Does anyone know what the reason is?

gohguodong commented 3 years ago

Hi, I am trying to export an ONNX model from the PyTorch model.

However, when I ran export.py the ONNX model was not generated; only the TorchScript model was.

It did not flag any error, there was just no ONNX file generated. Any idea why?

glenn-jocher commented 3 years ago

@gohguodong not sure, try exporting in a verified environment:

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

glenn-jocher commented 3 years ago

I've opened a feature request on apple/coremltools for PyTorch nn.SiLU() support, as the recent coremltools 4.1 release still does not support it: https://github.com/apple/coremltools/issues/1099

satheeshkatipomu commented 3 years ago

@zhiqwang, I see you were able to run batch inference in the example notebook in your repo. I followed the same steps, but it's not working for me. The only difference I see is that I am exporting ultralytics yolov5x, while you were loading the model using yolov5_onnx.

The error I am getting:

Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Concat node. Name:'Concat_40' Status Message: Not satisfied: dim_value == inputs_0_dims[axis_index]
concat.cc:87 PrepareForComputeNon concat axis dimensions must match: Axis 2 has mismatched dimensions of 383 and 384

Export code:

input_names = ['images']
dynamic_axes= {'images':[0,1,2,3]}
torch.onnx.export(model,
                  (img,),
                  f,
                  verbose=False,
                  do_constant_folding=True,
                  opset_version=_onnx_opset_version,
                  input_names=input_names,
                  output_names=['classes', 'boxes'] if y is None else ['output'],
                  dynamic_axes=dynamic_axes)
zhiqwang commented 3 years ago

Hi @satheeshkatipomu, I think there are two things missing in my repo.

BTW, did you try this notebook, and was there something wrong there?

pravastacaraka commented 3 years ago

is it working with onnx.js?

Hi @FahriBilici, it could work with onnx.js, but I didn't find a good example :(

@FahriBilici @zhiqwang Hello, have you tried it on ONNX.js?

zhiqwang commented 3 years ago

@FahriBilici @zhiqwang Hello, have you tried it on ONNX.js?

Hi @pravastacaraka , I haven't tried.

mozeqiu commented 3 years ago

python3 models/export.py --weights best.pt --img 416 --batch 1

When I export the model I get a problem like this:

Converting Frontend ==> MIL Ops:   3%|██▉ | 21/620 [00:00<00:00, 1358.32 ops/s]
CoreML export failure: unexpected number of inputs for node x.2 (_convolution): 13
Export complete (4.27s). Visualize with https://github.com/lutzroeder/netron.

I have no idea, please help.

bobbilichandu commented 3 years ago

Model generate failure: Exporting the operator silu to ONNX opset version 12 is not supported. Please open a bug to request ONNX export support for the missing operator.

@mrljwlm were you able to solve this issue?

RobinBram commented 3 years ago

How do I export it to handle rectangular images? Do I just put in the larger of width and height, or can I specify both?

glenn-jocher commented 3 years ago

@RobinBram this is shown in the argparser: https://github.com/ultralytics/yolov5/blob/fab5085674f7748dc16d7ca25afb225fa441bc9d/models/export.py#L24

RobinBram commented 3 years ago

@glenn-jocher I looked there and it's a bit confusing. I had to use --img 640 480 and not --img [640,480]. It works now though.
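
In other words, height and width go as two space-separated values (sketch, script path as of that era):

python models/export.py --weights yolov5s.pt --img 640 480  # height 640, width 480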

issamemari commented 3 years ago

@glenn-jocher Is there a way to export YOLOv5 to ONNX or TorchScript in a way that supports inference on a variable batch size?

glenn-jocher commented 3 years ago

@issamemari yes, ONNX export supports dynamic axes thanks to a recent PR https://github.com/ultralytics/yolov5/pull/2208 https://github.com/ultralytics/yolov5/blob/886f1c03d839575afecb059accf74296fad395b6/models/export.py#L27
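
With that flag, e.g. (sketch):

python models/export.py --weights yolov5s.pt --dynamic

the exported ONNX model has dynamic batch and spatial axes instead of fixed ones.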

ghost commented 3 years ago

@mozeqiu

python3 models/export.py --weights best.pt --img 416 --batch 1

When I export the model I get a problem like this:

Converting Frontend ==> MIL Ops:   3%|██▉ | 21/620 [00:00<00:00, 1358.32 ops/s]
CoreML export failure: unexpected number of inputs for node x.2 (_convolution): 13
Export complete (4.27s). Visualize with https://github.com/lutzroeder/netron.

I have no idea, please help.

A note from @mzkaramat: install the head branch of the coremltools repo, which is compatible with PyTorch 1.7:

pip install -Uqq git+https://github.com/apple/coremltools.git@master

I just confirmed this works for PyTorch 1.8 as well.

matthewchung74 commented 3 years ago

Are there any examples of how to run the TorchScript file and interpret the outputs? I was thinking I could copy AutoShape.forward() https://github.com/ultralytics/yolov5/blob/3551b072b366989b82b3777c63ea485a99e0bf90/models/common.py#L182 and replace the model with my TorchScript file, but I was getting some errors, since it references a self.stride which I don't see declared. https://github.com/ultralytics/yolov5/blob/3551b072b366989b82b3777c63ea485a99e0bf90/models/common.py#L211

glenn-jocher commented 3 years ago

@matthewchung74 sorry, we don't have any TorchScript inference examples. It is a bit of a misconception that the TorchScript models can be passed to detect.py or PyTorch Hub for inference. They don't work this way; my understanding is that they are intended for their own C++ inference environment, though I can't really help as I've not used them this way myself.

Their main use case for Ultralytics is follow-on CoreML export in our own workflows. https://github.com/ultralytics/yolov5/blob/886f1c03d839575afecb059accf74296fad395b6/models/export.py#L96

Later this year we plan to better address these downstream tasks and hopefully provide better support for the most common export pipelines.

matthewchung74 commented 3 years ago

Thanks @glenn-jocher. I'm going to work on this for a bit longer, and if I get a working example I'd be glad to share. ... You don't by chance know what this self.stride value is supposed to be, do you? I can't figure out where it is set. https://github.com/ultralytics/yolov5/blob/3551b072b366989b82b3777c63ea485a99e0bf90/models/common.py#L211

glenn-jocher commented 3 years ago

@matthewchung74 yes a tutorial would be great to help everyone out once you figure things out!

stride is a Detect() layer attribute; it is not defined on init because it requires a forward pass to determine its value. For the standard P5 models the stride tensor should be [8, 16, 32], and for P6 models it would be [8, 16, 32, 64]. https://github.com/ultralytics/yolov5/blob/886f1c03d839575afecb059accf74296fad395b6/models/yolo.py#L87-L97
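
A quick way to see this for yourself (hedged sketch; 'model' assumed loaded with autoshape disabled):

import torch

model.train()  # raw per-head feature maps rather than the (inference, raw) tuple
s = 256        # any size divisible by the largest stride
outputs = model(torch.zeros(1, 3, s, s))  # one feature map per detection head
stride = torch.tensor([s / y.shape[-2] for y in outputs])  # input size / grid size
print(stride)  # tensor([ 8., 16., 32.]) for P5 models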

matthewchung74 commented 3 years ago

ah, I was staring at the code for too long. I did not see the list comprehension.

matthewchung74 commented 3 years ago

@glenn-jocher like you said, and as I found out the hard way, it's definitely not as easy as swapping out the model for the TorchScript model. My output shapes between the two are very different and it's not as simple as reshaping. I have my work here: https://colab.research.google.com/drive/1sLXGBTJrXlB1s2vLsx02eb-JBxd2w4Hw#scrollTo=JURQfbv_4UUP but I will pause on this project unless some new info comes up.

glenn-jocher commented 3 years ago

@matthewchung74 right. I'd mainly point you to the AutoShape() class, which does all the extra work required to get real-world results from a raw YOLO model: preprocess -> inference -> postprocess. Most of our open-source export pipelines solve the inference part but do not provide the pre- and post-processing steps, which vary by deployment environment.
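
Roughly, the missing steps look like this (hedged sketch; letterbox and non_max_suppression are repo utilities with import paths as of this era, 'model' assumed loaded, coordinate rescaling omitted):

import cv2
import numpy as np
import torch
from utils.datasets import letterbox            # repo-local preprocessing helper
from utils.general import non_max_suppression   # repo-local postprocessing helper

im0 = cv2.imread('zidane.jpg')
im = letterbox(im0, new_shape=640)[0]           # preprocess: resize + pad to a stride multiple
im = im[:, :, ::-1].transpose(2, 0, 1)          # BGR HWC -> RGB CHW
im = torch.from_numpy(np.ascontiguousarray(im)).float() / 255.0
pred = model(im[None])[0]                       # inference: raw (1, 25200, 85) predictions
det = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)[0]  # postprocess: NMS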

glenn-jocher commented 3 years ago

@matthewchung74 forgot the link: https://github.com/ultralytics/yolov5/blob/747c2653eecfb870b1ed40b1e00e0ef209b036e9/models/common.py#L168-L169

matthewchung74 commented 3 years ago

But if I use your detect.py https://github.com/ultralytics/yolov5/blob/747c2653eecfb870b1ed40b1e00e0ef209b036e9/detect.py, which is mostly what I am doing, isn't most of the AutoShape functionality already there? Or am I misreading?

glenn-jocher commented 3 years ago

@matthewchung74 yes, detect.py performs the same functions in a different way (it was developed earlier), and of course is also paired with dataloaders for various media formats.

Sn0flingan commented 3 years ago

I could not get onnx to install on Python 3.9 using Miniconda on WSL2 running Ubuntu 18.04 (errors concerning cmake). However, Python 3.8 worked fine. If others have similar issues on other platforms, perhaps the Python version requirement should be changed?

Sn0flingan commented 3 years ago

Note: the --grid and --device options exist to get the bounding boxes with scores on the GPU.

If you only need the bounding boxes, scores and labels, and want a pure tensor rather than a tuple as output, you can change this manually: in the return value of the forward function of the Detect class in models/yolo.py, replace return x if self.training else (torch.cat(z, 1), x) with return x if self.training else torch.cat(z, 1), as sketched below. This is great if you want to use Triton as an inference engine, since Triton does not accept a tuple as output.
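
The change in context (sketch of the two return lines):

# models/yolo.py, Detect.forward() original return:
return x if self.training else (torch.cat(z, 1), x)
# pure-tensor replacement (e.g. for Triton):
return x if self.training else torch.cat(z, 1)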

XDynames commented 3 years ago

It works great on CPU, but when I try to export it for the GPU I get: ONNX: export failure: Input, output and indices must be on the current device

TorchScript runs fine, and I have tried looking into the model files as mentioned above, but it looks like there are no operations where tensors are moved in the trace.

Any help would be appreciated.

glenn-jocher commented 3 years ago

@XDynames ONNX GPU export works correctly in our CI tests. If you believe you have a reproducible bug, we suggest you raise a new issue using the πŸ› Bug Report template, providing screenshots and a minimum reproducible example to help us better understand and diagnose your problem. Thank you!

Sn0flingan commented 3 years ago

@XDynames and @glenn-jocher ONNX GPU export does not work for me either. I haven't looked into it since I do not need ONNX and TorchScript works fine, but it might be worth making a bug report since it seems somewhat reproducible.

pk-1196 commented 3 years ago

@glenn-jocher Hi, TorchScript export runs fine, but ONNX export stops right here:

ONNX: starting export with onnx 1.9.0...

The above line prints and execution stops after that. How do I save the .onnx file? Any help will be appreciated.

MysteriousTail commented 3 years ago

@glenn-jocher Hi, TorchScript export runs fine, but ONNX export stops right here:

ONNX: starting export with onnx 1.9.0...

The above line prints and execution stops after that. How do I save the .onnx file? Any help will be appreciated.

Same problem here. Loading the same environment in a 64-bit Ubuntu VM produced the .onnx successfully, so I guess something is incompatible on Windows.

SpongeBab commented 3 years ago

@glenn-jocher hi, sorry to bother you. When I export, I hit this problem:

TorchScript: starting export with torch 1.8.1+cu102...
/home/xiaopeng/YOLOv5/yolov5/models/yolo.py:51: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if self.grid[i].shape[2:4] != x[i].shape[2:4] or self.onnx_dynamic:

Can you give me some help? Thank you so much!

Also, scikit-learn==0.19.2 can no longer be installed using pip:

    Running setup.py install for scikit-learn ... done
  DEPRECATION: scikit-learn was installed using the legacy 'setup.py install' method, because a wheel could not be built for it. A possible replacement is to fix the wheel build issue reported above. You can find discussion regarding this at https://github.com/pypa/pip/issues/8368.

Can you update the scikit-learn dependency version?

Sn0flingan commented 3 years ago

@glenn-jocher hi, sorry to bother you. When I export, I hit this problem:

TorchScript: starting export with torch 1.8.1+cu102...
/home/xiaopeng/YOLOv5/yolov5/models/yolo.py:51: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. ...
  if self.grid[i].shape[2:4] != x[i].shape[2:4] or self.onnx_dynamic:

Have you tried the traced model? I get similar output, but the trace still works. The warning is due to TorchScript tracing not supporting Python booleans (e.g. if-else statements). Look into the TorchScript tutorial if you need to understand more about this and how it can be fixed.

GrimReaperSam commented 3 years ago

Is it possible to export CoreML with flexible input sizes?

SpongeBab commented 3 years ago

Have you tried the traced model? I get similar output but the trace still works.

Oh yes, it still works. I also want to know whether the traced model can work with this output.

DefTruth commented 3 years ago

An implementation of YOLOv5 with ONNX Runtime C++ can be found at yolov5.cpp
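
For the Python equivalent with onnxruntime (minimal sketch; the input tensor is named 'images' as in the exports above, pre/post-processing omitted):

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('yolov5s.onnx', providers=['CPUExecutionProvider'])
im = np.zeros((1, 3, 640, 640), dtype=np.float32)  # stand-in for a letterboxed image
pred = session.run(None, {'images': im})[0]        # (1, 25200, 85) raw predictions
print(pred.shape)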

midasklr commented 3 years ago

Hello everybody. I've converted the PyTorch model to ncnn following this link: https://zhuanlan.zhihu.com/p/275989233. But it is different from the original ncnn model at https://github.com/nihui/ncnn-android-yolov5 and I can't get results with the new ncnn model. Any help would be appreciated. Thanks.

Which branch do you use? I use v5 and got some problems too...

midasklr commented 3 years ago

Hello @midasklr Could you share your ncnn param file?

7767517 176 200 Input images 0 1 images YoloV5Focus focus 1 1 images 167 Convolution Conv_41 1 1 167 168 0=32 1=3 4=1 5=1 6=3456 Swish Mul_43 1 1 168 170 Convolution Conv_44 1 1 170 171 0=64 1=3 3=2 4=1 5=1 6=18432 Swish Mul_46 1 1 171 173 Split splitncnn_0 1 2 173 173_splitncnn_0 173_splitncnn_1 Convolution Conv_47 1 1 173_splitncnn_1 174 0=32 1=1 5=1 6=2048 Swish Mul_49 1 1 174 176 Split splitncnn_1 1 2 176 176_splitncnn_0 176_splitncnn_1 Convolution Conv_50 1 1 176_splitncnn_1 177 0=32 1=1 5=1 6=1024 Swish Mul_52 1 1 177 179 Convolution Conv_53 1 1 179 180 0=32 1=3 4=1 5=1 6=9216 Swish Mul_55 1 1 180 182 BinaryOp Add_56 2 1 176_splitncnn_0 182 183 Convolution Conv_57 1 1 173_splitncnn_0 184 0=32 1=1 5=1 6=2048 Swish Mul_59 1 1 184 186 Concat Concat_60 2 1 183 186 187 Convolution Conv_61 1 1 187 188 0=64 1=1 5=1 6=4096 Swish Mul_63 1 1 188 190 Convolution Conv_64 1 1 190 191 0=128 1=3 3=2 4=1 5=1 6=73728 Swish Mul_66 1 1 191 193 Split splitncnn_2 1 2 193 193_splitncnn_0 193_splitncnn_1 Convolution Conv_67 1 1 193_splitncnn_1 194 0=64 1=1 5=1 6=8192 Swish Mul_69 1 1 194 196 Split splitncnn_3 1 2 196 196_splitncnn_0 196_splitncnn_1 Convolution Conv_70 1 1 196_splitncnn_1 197 0=64 1=1 5=1 6=4096 Swish Mul_72 1 1 197 199 Convolution Conv_73 1 1 199 200 0=64 1=3 4=1 5=1 6=36864 Swish Mul_75 1 1 200 202 BinaryOp Add_76 2 1 196_splitncnn_0 202 203 Split splitncnn_4 1 2 203 203_splitncnn_0 203_splitncnn_1 Convolution Conv_77 1 1 203_splitncnn_1 204 0=64 1=1 5=1 6=4096 Swish Mul_79 1 1 204 206 Convolution Conv_80 1 1 206 207 0=64 1=3 4=1 5=1 6=36864 Swish Mul_82 1 1 207 209 BinaryOp Add_83 2 1 203_splitncnn_0 209 210 Split splitncnn_5 1 2 210 210_splitncnn_0 210_splitncnn_1 Convolution Conv_84 1 1 210_splitncnn_1 211 0=64 1=1 5=1 6=4096 Swish Mul_86 1 1 211 213 Convolution Conv_87 1 1 213 214 0=64 1=3 4=1 5=1 6=36864 Swish Mul_89 1 1 214 216 BinaryOp Add_90 2 1 210_splitncnn_0 216 217 Convolution Conv_91 1 1 193_splitncnn_0 218 0=64 1=1 5=1 6=8192 Swish Mul_93 1 1 218 220 Concat Concat_94 2 1 217 220 221 Convolution Conv_95 1 1 221 222 0=128 1=1 5=1 6=16384 Swish Mul_97 1 1 222 224 Split splitncnn_6 1 2 224 224_splitncnn_0 224_splitncnn_1 Convolution Conv_98 1 1 224_splitncnn_1 225 0=256 1=3 3=2 4=1 5=1 6=294912 Swish Mul_100 1 1 225 227 Split splitncnn_7 1 2 227 227_splitncnn_0 227_splitncnn_1 Convolution Conv_101 1 1 227_splitncnn_1 228 0=128 1=1 5=1 6=32768 Swish Mul_103 1 1 228 230 Split splitncnn_8 1 2 230 230_splitncnn_0 230_splitncnn_1 Convolution Conv_104 1 1 230_splitncnn_1 231 0=128 1=1 5=1 6=16384 Swish Mul_106 1 1 231 233 Convolution Conv_107 1 1 233 234 0=128 1=3 4=1 5=1 6=147456 Swish Mul_109 1 1 234 236 BinaryOp Add_110 2 1 230_splitncnn_0 236 237 Split splitncnn_9 1 2 237 237_splitncnn_0 237_splitncnn_1 Convolution Conv_111 1 1 237_splitncnn_1 238 0=128 1=1 5=1 6=16384 Swish Mul_113 1 1 238 240 Convolution Conv_114 1 1 240 241 0=128 1=3 4=1 5=1 6=147456 Swish Mul_116 1 1 241 243 BinaryOp Add_117 2 1 237_splitncnn_0 243 244 Split splitncnn_10 1 2 244 244_splitncnn_0 244_splitncnn_1 Convolution Conv_118 1 1 244_splitncnn_1 245 0=128 1=1 5=1 6=16384 Swish Mul_120 1 1 245 247 Convolution Conv_121 1 1 247 248 0=128 1=3 4=1 5=1 6=147456 Swish Mul_123 1 1 248 250 BinaryOp Add_124 2 1 244_splitncnn_0 250 251 Convolution Conv_125 1 1 227_splitncnn_0 252 0=128 1=1 5=1 6=32768 Swish Mul_127 1 1 252 254 Concat Concat_128 2 1 251 254 255 Convolution Conv_129 1 1 255 256 0=256 1=1 5=1 6=65536 Swish Mul_131 1 1 256 258 Split splitncnn_11 1 2 258 258_splitncnn_0 258_splitncnn_1 Convolution 
Conv_132 1 1 258_splitncnn_1 259 0=512 1=3 3=2 4=1 5=1 6=1179648 Swish Mul_134 1 1 259 261 Convolution Conv_135 1 1 261 262 0=256 1=1 5=1 6=131072 Swish Mul_137 1 1 262 264 Split splitncnn_12 1 4 264 264_splitncnn_0 264_splitncnn_1 264_splitncnn_2 264_splitncnn_3 Pooling MaxPool_138 1 1 264_splitncnn_3 265 1=5 3=2 5=1 Pooling MaxPool_139 1 1 264_splitncnn_2 266 1=9 3=4 5=1 Pooling MaxPool_140 1 1 264_splitncnn_1 267 1=13 3=6 5=1 Concat Concat_141 4 1 264_splitncnn_0 265 266 267 268 Convolution Conv_142 1 1 268 269 0=512 1=1 5=1 6=524288 Swish Mul_144 1 1 269 271 Split splitncnn_13 1 2 271 271_splitncnn_0 271_splitncnn_1 Convolution Conv_145 1 1 271_splitncnn_1 272 0=256 1=1 5=1 6=131072 Swish Mul_147 1 1 272 274 Convolution Conv_148 1 1 274 275 0=256 1=1 5=1 6=65536 Swish Mul_150 1 1 275 277 Convolution Conv_151 1 1 277 278 0=256 1=3 4=1 5=1 6=589824 Swish Mul_153 1 1 278 280 Convolution Conv_154 1 1 271_splitncnn_0 281 0=256 1=1 5=1 6=131072 Swish Mul_156 1 1 281 283 Concat Concat_157 2 1 280 283 284 Convolution Conv_158 1 1 284 285 0=512 1=1 5=1 6=262144 Swish Mul_160 1 1 285 287 Convolution Conv_161 1 1 287 288 0=256 1=1 5=1 6=131072 Swish Mul_163 1 1 288 290 Split splitncnn_14 1 2 290 290_splitncnn_0 290_splitncnn_1 Interp Resize_165 1 1 290_splitncnn_1 295 0=1 1=2.000000e+00 2=2.000000e+00 Concat Concat_166 2 1 295 258_splitncnn_0 296 Split splitncnn_15 1 2 296 296_splitncnn_0 296_splitncnn_1 Convolution Conv_167 1 1 296_splitncnn_1 297 0=128 1=1 5=1 6=65536 Swish Mul_169 1 1 297 299 Convolution Conv_170 1 1 299 300 0=128 1=1 5=1 6=16384 Swish Mul_172 1 1 300 302 Convolution Conv_173 1 1 302 303 0=128 1=3 4=1 5=1 6=147456 Swish Mul_175 1 1 303 305 Convolution Conv_176 1 1 296_splitncnn_0 306 0=128 1=1 5=1 6=65536 Swish Mul_178 1 1 306 308 Concat Concat_179 2 1 305 308 309 Convolution Conv_180 1 1 309 310 0=256 1=1 5=1 6=65536 Swish Mul_182 1 1 310 312 Convolution Conv_183 1 1 312 313 0=128 1=1 5=1 6=32768 Swish Mul_185 1 1 313 315 Split splitncnn_16 1 2 315 315_splitncnn_0 315_splitncnn_1 Interp Resize_187 1 1 315_splitncnn_1 320 0=1 1=2.000000e+00 2=2.000000e+00 Concat Concat_188 2 1 320 224_splitncnn_0 321 Split splitncnn_17 1 2 321 321_splitncnn_0 321_splitncnn_1 Convolution Conv_189 1 1 321_splitncnn_1 322 0=64 1=1 5=1 6=16384 Swish Mul_191 1 1 322 324 Convolution Conv_192 1 1 324 325 0=64 1=1 5=1 6=4096 Swish Mul_194 1 1 325 327 Convolution Conv_195 1 1 327 328 0=64 1=3 4=1 5=1 6=36864 Swish Mul_197 1 1 328 330 Convolution Conv_198 1 1 321_splitncnn_0 331 0=64 1=1 5=1 6=16384 Swish Mul_200 1 1 331 333 Concat Concat_201 2 1 330 333 334 Convolution Conv_202 1 1 334 335 0=128 1=1 5=1 6=16384 Swish Mul_204 1 1 335 337 Split splitncnn_18 1 2 337 337_splitncnn_0 337_splitncnn_1 Convolution Conv_205 1 1 337_splitncnn_1 338 0=128 1=3 3=2 4=1 5=1 6=147456 Swish Mul_207 1 1 338 340 Concat Concat_208 2 1 340 315_splitncnn_0 341 Split splitncnn_19 1 2 341 341_splitncnn_0 341_splitncnn_1 Convolution Conv_209 1 1 341_splitncnn_1 342 0=128 1=1 5=1 6=32768 Swish Mul_211 1 1 342 344 Convolution Conv_212 1 1 344 345 0=128 1=1 5=1 6=16384 Swish Mul_214 1 1 345 347 Convolution Conv_215 1 1 347 348 0=128 1=3 4=1 5=1 6=147456 Swish Mul_217 1 1 348 350 Convolution Conv_218 1 1 341_splitncnn_0 351 0=128 1=1 5=1 6=32768 Swish Mul_220 1 1 351 353 Concat Concat_221 2 1 350 353 354 Convolution Conv_222 1 1 354 355 0=256 1=1 5=1 6=65536 Swish Mul_224 1 1 355 357 Split splitncnn_20 1 2 357 357_splitncnn_0 357_splitncnn_1 Convolution Conv_225 1 1 357_splitncnn_1 358 0=256 1=3 3=2 4=1 5=1 6=589824 Swish Mul_227 
1 1 358 360 Concat Concat_228 2 1 360 290_splitncnn_0 361 Split splitncnn_21 1 2 361 361_splitncnn_0 361_splitncnn_1 Convolution Conv_229 1 1 361_splitncnn_1 362 0=256 1=1 5=1 6=131072 Swish Mul_231 1 1 362 364 Convolution Conv_232 1 1 364 365 0=256 1=1 5=1 6=65536 Swish Mul_234 1 1 365 367 Convolution Conv_235 1 1 367 368 0=256 1=3 4=1 5=1 6=589824 Swish Mul_237 1 1 368 370 Convolution Conv_238 1 1 361_splitncnn_0 371 0=256 1=1 5=1 6=131072 Swish Mul_240 1 1 371 373 Concat Concat_241 2 1 370 373 374 Convolution Conv_242 1 1 374 375 0=512 1=1 5=1 6=262144 Swish Mul_244 1 1 375 377 Convolution Conv_245 1 1 337_splitncnn_0 378 0=255 1=1 5=1 6=32640 Reshape Reshape_259 1 1 378 396 0=-1 1=85 2=3 Permute Transpose_260 1 1 396 397 0=1 Convolution Conv_261 1 1 357_splitncnn_0 398 0=255 1=1 5=1 6=65280 Reshape Reshape_275 1 1 398 416 0=-1 1=85 2=3 Permute Transpose_276 1 1 416 417 0=1 Convolution Conv_277 1 1 377 418 0=255 1=1 5=1 6=130560 Reshape Reshape_291 1 1 418 436 0=-1 1=85 2=3 Permute Transpose_292 1 1 436 437 0=1

midasklr commented 3 years ago

Hello @midasklr Could you share your ncnn param file?

https://github.com/midasklr/yolov5ncnn

pk-1196 commented 3 years ago

Hi, I downloaded the repo and ran models/export.py without changing anything, but I get the error below:

Traceback (most recent call last):
  File "models/export.py", line 19, in <module>
    from models.experimental import attempt_load
  File "C:\Users\onnx\YOLOv5-master\models\experimental.py", line 7, in <module>
    from models.common import Conv, DWConv
  File "C:\Users\onnx\YOLOv5-master\models\common.py", line 15, in <module>
    from utils.datasets import letterbox
ModuleNotFoundError: No module named 'utils.datasets'

Any help will be appreciated.

glenn-jocher commented 3 years ago

@pk-1196 πŸ‘‹ hi, thanks for letting us know about this problem with YOLOv5 πŸš€. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

In addition to the above requirements, for Ultralytics to provide assistance your code should be:

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the πŸ› Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! πŸ˜ƒ

arvidius commented 3 years ago

Hello, I tried to export my model.pt to the ONNX format but I only get this output:

TorchScript: export success, saved as ../best.torchscript.pt (28.6 MB)
ONNX: starting export with onnx 1.9.0...

The TorchScript file is actually saved in my directory, but the ONNX export doesn't seem to do anything?!

glenn-jocher commented 3 years ago

@arvidius πŸ‘‹ hi, thanks for letting us know about this problem with YOLOv5 πŸš€. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

In addition to the above requirements, for Ultralytics to provide assistance your code should be:

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the πŸ› Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! πŸ˜ƒ

ashReal commented 3 years ago

Why does model performance degrade badly after converting the weights to ONNX? It started giving a lot of false positives which are not present if I run the torch weights directly.

glenn-jocher commented 3 years ago

@ashReal πŸ‘‹ hi, thanks for letting us know about this problem with YOLOv5 πŸš€. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

In addition to the above requirements, for Ultralytics to provide assistance your code should be:

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the πŸ› Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! πŸ˜ƒ

pk-1196 commented 3 years ago

Why does model performance degrade badly after converting the weights to ONNX? It started giving a lot of false positives which are not present if I run the torch weights directly.

I have also experienced that.