ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Convert Yolov5 to IR Model in Openvino #5533

Closed aseprohman closed 2 years ago

aseprohman commented 2 years ago

Question

Hello @glenn-jocher et al.,

Has anyone converted YOLOv5 models to IR models in OpenVINO? Is there a tutorial I can learn from? I want to try deploying YOLOv5 on Intel NCS2/VPU hardware devices. Thanks


glenn-jocher commented 2 years ago

@aseprohman what is an IR model?

Guemann-ui commented 2 years ago

Hi @aseprohman, I've done the same work! How can I help?

aseprohman commented 2 years ago

@aseprohman what is an IR model?

@glenn-jocher IR is short for Intermediate Representation, the OpenVINO model format (an .xml topology file plus a .bin weights file) used to run inference on target devices.

aseprohman commented 2 years ago

Hi @aseprohman, I've done the same work! How can I help?

Hi @besmaGuesmi, can you share the model conversion technique from a .pt or ONNX file to an IR model? How do I pass parameters like input_shape etc., given I trained the yolov5m.pt model with parameters like these: batch: 32, imgsz: 416, classes: 1? Thanks

Guemann-ui commented 2 years ago

Hi @aseprohman, first you have to convert your model from PyTorch to the ONNX format, then to the IR format.

  1. Create a virtual env with Python 3.6
  2. Clone the yolov5 repo
  3. Run python export.py --weights model.pt --img 640 --batch 1 (the export.py used is shown below for reference):
    
```python
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Export a PyTorch model to TorchScript, ONNX, CoreML formats

Usage:
    $ python path/to/export.py --weights yolov5s.pt --img 640 --batch 1
"""

import argparse
import sys
import time
from pathlib import Path

import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

FILE = Path(__file__).absolute()
sys.path.append(FILE.parents[0].as_posix())  # add yolov5/ to path
sys.path.insert(0, './yolov5')

from models.common import Conv
from models.yolo import Detect
from models.experimental import attempt_load
from utils.activations import Hardswish, SiLU
from utils.general import colorstr, check_img_size, check_requirements, file_size, set_logging
from utils.torch_utils import select_device


def export_torchscript(model, img, file, optimize):
    # TorchScript model export
    prefix = colorstr('TorchScript:')
    try:
        print(f'\n{prefix} starting export with torch {torch.__version__}...')
        f = file.with_suffix('.torchscript.pt')
        ts = torch.jit.trace(model, img, strict=False)
        (optimize_for_mobile(ts) if optimize else ts).save(f)
        print(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
        return ts
    except Exception as e:
        print(f'{prefix} export failure: {e}')


def export_onnx(model, img, file, opset, train, dynamic, simplify):
    # ONNX model export
    prefix = colorstr('ONNX:')
    try:
        check_requirements(('onnx', 'onnx-simplifier'))
        import onnx

        print(f'\n{prefix} starting export with onnx {onnx.__version__}...')
        f = file.with_suffix('.onnx')
        torch.onnx.export(model, img, f, verbose=False, opset_version=opset,
                          training=torch.onnx.TrainingMode.TRAINING if train else torch.onnx.TrainingMode.EVAL,
                          do_constant_folding=not train,
                          input_names=['images'],
                          output_names=['output'],
                          dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'},  # shape(1,3,640,640)
                                        'output': {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)
                                        } if dynamic else None)

        # Checks
        model_onnx = onnx.load(f)  # load onnx model
        onnx.checker.check_model(model_onnx)  # check onnx model
        # print(onnx.helper.printable_graph(model_onnx.graph))  # print

        # Simplify
        if simplify:
            try:
                import onnxsim

                print(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...')
                model_onnx, check = onnxsim.simplify(
                    model_onnx,
                    dynamic_input_shape=dynamic,
                    input_shapes={'images': list(img.shape)} if dynamic else None)
                assert check, 'assert check failed'
                onnx.save(model_onnx, f)
            except Exception as e:
                print(f'{prefix} simplifier failure: {e}')
        print(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
        print(f"{prefix} run --dynamic ONNX model inference with: 'python detect.py --weights {f}'")
    except Exception as e:
        print(f'{prefix} export failure: {e}')


def export_coreml(model, img, file):
    # CoreML model export
    prefix = colorstr('CoreML:')
    try:
        check_requirements(('coremltools',))
        import coremltools as ct

        print(f'\n{prefix} starting export with coremltools {ct.__version__}...')
        f = file.with_suffix('.mlmodel')
        model.train()  # CoreML exports should be placed in model.train() mode
        ts = torch.jit.trace(model, img, strict=False)  # TorchScript model
        model = ct.convert(ts, inputs=[ct.ImageType('image', shape=img.shape, scale=1 / 255.0, bias=[0, 0, 0])])
        model.save(f)
        print(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
    except Exception as e:
        print(f'\n{prefix} export failure: {e}')


def run(weights='model.pt',  # weights path
        img_size=(640, 640),  # image (height, width)
        batch_size=1,  # batch size
        device='cpu',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
        include=('torchscript', 'onnx', 'coreml'),  # include formats
        half=False,  # FP16 half-precision export
        inplace=True,  # set YOLOv5 Detect() inplace=True
        train=False,  # model.train() mode
        optimize=False,  # TorchScript: optimize for mobile
        dynamic=False,  # ONNX: dynamic axes
        simplify=False,  # ONNX: simplify model
        opset=10,  # ONNX: opset version
        ):
    t = time.time()
    include = [x.lower() for x in include]
    img_size *= 2 if len(img_size) == 1 else 1  # expand
    file = Path(weights)

    # Load PyTorch model
    device = select_device(device)
    assert not (device.type == 'cpu' and half), '--half only compatible with GPU export, i.e. use --device 0'
    model = attempt_load(weights, map_location=device)  # load FP32 model
    names = model.names

    # Input
    gs = int(max(model.stride))  # grid size (max stride)
    img_size = [check_img_size(x, gs) for x in img_size]  # verify img_size are gs-multiples
    img = torch.zeros(batch_size, 3, *img_size).to(device)  # image size(1,3,320,192) iDetection

    # Update model
    if half:
        img, model = img.half(), model.half()  # to FP16
    model.train() if train else model.eval()  # training mode = no Detect() layer grid construction
    for k, m in model.named_modules():
        if isinstance(m, Conv):  # assign export-friendly activations
            if isinstance(m.act, nn.Hardswish):
                m.act = Hardswish()
            elif isinstance(m.act, nn.SiLU):
                m.act = SiLU()
        elif isinstance(m, Detect):
            m.inplace = inplace
            m.onnx_dynamic = dynamic
            # m.forward = m.forward_export  # assign forward (optional)

    for _ in range(2):
        y = model(img)  # dry runs
    print(f"\n{colorstr('PyTorch:')} starting from {weights} ({file_size(weights):.1f} MB)")

    # Exports
    if 'torchscript' in include:
        export_torchscript(model, img, file, optimize)
    if 'onnx' in include:
        export_onnx(model, img, file, opset, train, dynamic, simplify)
    if 'coreml' in include:
        export_coreml(model, img, file)

    # Finish
    print(f'\nExport complete ({time.time() - t:.2f}s)'
          f"\nResults saved to {colorstr('bold', file.parent.resolve())}"
          f'\nVisualize with https://netron.app')


def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='model.pt', help='weights path')
    parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image (height, width)')
    parser.add_argument('--batch-size', type=int, default=1, help='batch size')
    parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--include', nargs='+', default=['torchscript', 'onnx', 'coreml'], help='include formats')
    parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
    parser.add_argument('--inplace', action='store_true', help='set YOLOv5 Detect() inplace=True')
    parser.add_argument('--train', action='store_true', help='model.train() mode')
    parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')
    parser.add_argument('--dynamic', action='store_true', help='ONNX: dynamic axes')
    parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
    parser.add_argument('--opset', type=int, default=12, help='ONNX: opset version')
    opt = parser.parse_args()
    return opt


def main(opt):
    set_logging()
    print(colorstr('export: ') + ', '.join(f'{k}={v}' for k, v in vars(opt).items()))
    run(**vars(opt))


if __name__ == "__main__":
    opt = parse_opt()
    main(opt)
```


  4. Run the OpenVINO environment: cd "Program Files/Intel/openvino/bin/", then run setupvars.bat
  5. Convert the ONNX file to the IR format: cd "Program Files/Intel/openvino/deployment_tools/model_optimizer", then run:
     python mo_onnx.py --input_model model.onnx --model_name output -s 255 --reverse_input_channels --output Conv_339,Conv_291,Conv_243
     (please use Netron to check the layer names)
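
After step 5 you should have output.xml and output.bin. A quick sanity check that the IR loads on the target device (a minimal sketch, assuming the pre-2022 Inference Engine Python API):

```python
# minimal sketch: verify the converted IR loads (pre-2022 Inference Engine API assumed)
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='output.xml', weights='output.bin')
exec_net = ie.load_network(network=net, device_name='MYRIAD')  # NCS2/VPU; use 'CPU' to test locally
print('inputs:', list(net.input_info), 'outputs:', list(net.outputs))
```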

If it doesn't work, don't hesitate to send me an email (bessmagsm@gmail.com) or ask here.

aseprohman commented 2 years ago

Thanks @besmaGuesmi, I will try it.

Have you ever compared the inference performance of a YOLOv5 IR model executed by the OpenVINO runtime against YOLOv5 executed by the PyTorch framework? OpenVINO has optimization tools for speeding up model performance, like the Post-training Optimization Tool (POT) and the Neural Network Compression Framework (NNCF). Have you ever tried them?

Guemann-ui commented 2 years ago

Hi @aseprohman, yes, of course. I compared the inference time as well as the throughput (FPS) of the model. You can use the DL benchmark tooling to convert the model from FP16 to INT8, but unfortunately you can't use an INT8 model on a VPU device. Otherwise, I highly recommend the MYRIAD X, which gave me results roughly 10x better than the CPU: https://www.intel.com/content/www/us/en/products/details/processors/movidius-vpu/movidius-myriad-x.html
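
If you want to reproduce that comparison, OpenVINO's bundled benchmark_app measures latency and throughput directly (a hedged sketch; the model path and device names are placeholders, and MYRIAD requires the NCS2 attached):

```
benchmark_app -m output.xml -d CPU      # baseline on CPU
benchmark_app -m output.xml -d MYRIAD   # NCS2 / Myriad X VPU
```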

Good luck.

aseprohman commented 2 years ago

Thanks a lot @besmaGuesmi for your explanation. Maybe I'll ask you again after doing an experiment :) Regards

aseprohman commented 2 years ago

This message appears when I try to export the .pt model to an ONNX model:

TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if self.grid[i].shape[2:4] != x[i].shape[2:4] or self.onnx_dynamic:

I run the export command like this:

python3 export.py --weights weights/yolov5m.pt --include onnx --device cpu --batch-size 1 --img 1024

I trained with --img 416, but I want to run inference at 1024 because I get better accuracy when running inference at a bigger size. Am I wrong?

aseprohman commented 2 years ago

@glenn-jocher or @besmaGuesmi, do you have any suggestions?

glenn-jocher commented 2 years ago

@aseprohman you can ignore these TracerWarnings; with fixed input shapes the traced shape check is simply baked in as a constant, which is harmless for static-size export.

Guemann-ui commented 2 years ago

Hi @aseprohman, sorry for the late reply! First, I don't agree with increasing the image size at inference. Why did you use OpenVINO? To speed up inference, right? As the image size increases, the inference time increases (read this: https://www.researchgate.net/figure/The-impact-of-image-size-on-the-inference-speed-on-an-edge-device_fig9_323867606). In addition, to obtain good bounding boxes I highly recommend using the same image size the model was trained with (for the YOLOv5m model, use image size = 640). Here is exactly what you have to do after training the model with a 640 image size (YOLOv5m): python export.py --weights model.pt --img 640 --batch 1. Looking forward to your results. Good luck

Guemann-ui commented 2 years ago

Another modification you may have to make in export.py: set model.model[-1].export = True. When set to True, the Detect layers (including NMS, anchor grid calculations, etc.) are not exported with the model. Models exported with it set to False, i.e. with the Detect layer included, cannot be converted from ONNX to OpenVINO format models.
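
For older export.py versions whose Detect() layer exposes an .export attribute, a minimal sketch of where that line goes (placement assumed, right after the model is loaded):

```python
# minimal sketch for older export.py versions whose Detect() exposes .export
model = attempt_load(weights, map_location=device)  # load FP32 model
model.model[-1].export = True  # skip Detect() post-processing so ONNX -> OpenVINO conversion succeeds
```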

zzff-sys commented 2 years ago

Hello! I use the command line to convert ONNX files to the IR format: python mo_onnx.py --input_model=yolov5s.onnx --model_name=test -s 255 --reverse_input_channels --output Conv_416,Conv_482,Conv_350 --data_type=FP16 (layer names already viewed with Netron), but the following error is returned:

[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.user_data_repack.UserDataRepack'>): No node with name Conv_416

How can I solve this problem? Thank you very much @besmaGuesmi @glenn-jocher

zzff-sys commented 2 years ago

I converted yolov5s.pt to yolov5s.onnx; the attached Netron screenshot shows the yolov5s.onnx graph.

glenn-jocher commented 2 years ago

@zzff-sys we don't have an openvino export workflow so I can't provide support there.

The ONNX model you have is correct, it's one of our supported export workflows.

What's an example of a correctly working openvino export workflow?

glenn-jocher commented 2 years ago

@zzff-sys FYI the output of a YOLOv5 ONNX model is just 'output'. I don't know where you got your output values from in your command, but that can't be right.

Guemann-ui commented 2 years ago

Hi @zzff-sys! In your command you have to put the names of the last three Conv layers, as shown in the attached screenshot.
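
If you'd rather not eyeball Netron, a quick sketch that lists the Conv node names straight from the ONNX graph (assuming the onnx package; the three detection-head convolutions are normally among the last Conv nodes, but do verify against Netron):

```python
# quick sketch: list Conv node names in the ONNX graph to locate the detection heads
import onnx

m = onnx.load('yolov5s.onnx')
conv_names = [n.name for n in m.graph.node if n.op_type == 'Conv']
print(conv_names[-3:])  # candidates for --output, e.g. Conv_350, Conv_416, Conv_482 on this model
```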

glenn-jocher commented 2 years ago

@besmaGuesmi if you have experience with OpenVINO exports, do you think you could create a PR to add this format to export.py? That would be useful for helping future users like @zzff-sys. Thanks!

Please see our ✅ Contributing Guide to get started.

zzff-sys commented 2 years ago

Thank you very much. According to your tips, I have solved the problem. @glenn-jocher @besmaGuesmi

Guemann-ui commented 2 years ago

Hi @glenn-jocher, Thank you for the suggestion. I will check it out and back to you. Thanks

glenn-jocher commented 2 years ago

@besmaGuesmi @zzff-sys @aseprohman I've created a PR for YOLOv5 OpenVINO export support in https://github.com/ultralytics/yolov5/issues/6057.

This isn't yet working though, I get a non-zero exit code on the export command. Do you know what the problem might be? Can you help me debug this? Thanks!!

!git clone https://github.com/ultralytics/yolov5 -b export/openvino  # clone
%cd yolov5
%pip install -qr requirements.txt onnx openvino-dev  # install INCLUDING onnx and openvino-dev

import torch
from yolov5 import utils
display = utils.notebook_init()  # checks

# Export OpenVINO
!python export.py --include openvino
glenn-jocher commented 2 years ago

@besmaGuesmi @zzff-sys @aseprohman problem was that OpenVINO export seems to require ONNX Opset <= 12. I've enforced this constraint now and everything seems to be working well :)

EDIT: converted to directory output, since OpenVINO creates 3 files. The export directory is e.g. yolov5s_openvino_model/.
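
If a local copy of export.py predates this constraint, the opset can be pinned manually and the ONNX converted by hand (a hedged sketch; flags per export.py's argparser and the mo CLI):

```
python export.py --weights yolov5s.pt --include onnx --opset 12
mo --input_model yolov5s.onnx --output_dir yolov5s_openvino_model/
```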

glenn-jocher commented 2 years ago

@besmaGuesmi @zzff-sys @aseprohman good news 😃! Your original issue may now be fixed ✅ in PR #6057. This PR adds native YOLOv5 OpenVINO export:

python export.py --include openvino


To receive this update:

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

zzff-sys commented 2 years ago

Great work! Thank you! I will try again @glenn-jocher

glenn-jocher commented 2 years ago

@besmaGuesmi do you think you could help us with OpenVINO inference now that export is complete? We need to add OpenVINO fields to DetectMultiBackend() for this purpose. I've never used OpenVINO though, so I don't have a good inference example to start from:

https://github.com/ultralytics/yolov5/blob/db6ec66a602a0b64a7db1711acd064eda5daf2b3/models/common.py#L277-L437
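
For anyone picking this up, a minimal sketch of the kind of snippet an OpenVINO branch of DetectMultiBackend() could be built from, assuming the openvino>=2022.1 runtime API; paths and device are placeholders:

```python
# minimal sketch, assuming the openvino>=2022.1 runtime API; paths/device are placeholders
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model('yolov5s_openvino_model/yolov5s.xml')  # matching .bin is found automatically
compiled = core.compile_model(model, 'CPU')

img = np.zeros((1, 3, 640, 640), dtype=np.float32)  # BCHW input, scaled to 0-1 in real use
pred = compiled([img])[compiled.output('output')]   # (1, 25200, 85) for COCO models
```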

glenn-jocher commented 2 years ago

@besmaGuesmi good news 😃! Your original issue may now be fixed ✅ in PR #6179. This PR brings native OpenVINO export and inference:

!python export.py --weights yolov5s.pt --include openvino  # export
!python detect.py --weights yolov5s_openvino_model/yolov5s.xml  # inference
!python val.py --weights yolov5s_openvino_model/yolov5s.xml --data ...  # validation

To receive this update:

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

13265170340 commented 2 years ago

@besmaGuesmi Hi, have you ever encountered this situation (screenshot attached)?

taloot commented 2 years ago

@aseprohman what is an IR model?

@glenn-jocher IR is short for Intermediate Representation, the OpenVINO model format (an .xml topology file plus a .bin weights file) used to run inference on target devices.

He knows, but this guy never helped anyone.

arduinitavares commented 2 years ago

Hi, guys,

It seems that we can now export directly to the OpenVINO format via the following command line: python path/to/export.py --weights yolov5s.pt --include openvino

I still can't get the bounding-box results though. Instead, I am getting this:

<InferRequest: inputs[ <ConstOutput: names[images] shape{1,3,640,640} type: f32> ] outputs[ <ConstOutput: names[output] shape{1,25200,7} type: f32>, <ConstOutput: names[onnx::Sigmoid_446] shape{1,3,80,80,7} type: f32>, <ConstOutput: names[onnx::Sigmoid_498] shape{1,3,40,40,7} type: f32>, <ConstOutput: names[onnx::Sigmoid_550] shape{1,3,20,20,7} type: f32> ]>

glenn-jocher commented 2 years ago

@arduinitavares OpenVINO usage examples are clearly displayed after export (see the attached screenshot).

arduinitavares commented 2 years ago

@arduinitavares OpenVINO usage examples are clearly displayed after export (see the attached screenshot).

Thanks for the quick answer. I am not using the detect.py script though; I can't use it in my application. I have my own app/script running it. I need to convert the model to the OpenVINO format and use it like this:

That's part of the script:

from openvino.runtime import Core  # import needed for Core()

ie = Core()
xml_path = 'C:/Users/atavares/projects/scrap_bucket/yolov5/runs/train/yolov5s24/weights/best_openvino_model/best.xml'
model = ie.read_model(model=xml_path)
compiled_model = ie.compile_model(model=model, device_name='CPU')
input_layer_ir = next(iter(compiled_model.inputs))
N, C, H, W = input_layer_ir.shape
# Create inference request
request = compiled_model.create_infer_request()
request.infer({input_layer_ir.any_name: img})
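
For reference, each row of that {1,25200,7} 'output' tensor is [x, y, w, h, objectness, class scores...] (two classes here); the onnx::Sigmoid_* outputs are the raw per-scale heads and can be ignored. A minimal hedged sketch of the decoding, thresholds assumed:

```python
# minimal sketch: decode the (1, 25200, 5 + nc) YOLOv5 output; threshold is an assumption
import numpy as np

def decode(pred, conf_thres=0.25):
    pred = pred[0]                        # drop batch dim -> (25200, 5 + nc)
    pred = pred[pred[:, 4] > conf_thres]  # filter by objectness
    boxes = pred[:, :4]                   # xywh in input-image pixels
    scores = pred[:, 4:5] * pred[:, 5:]   # confidence = objectness * class probability
    classes = scores.argmax(1)
    return boxes, scores.max(1), classes  # still needs NMS, e.g. cv2.dnn.NMSBoxes
```
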
plmmyyds commented 2 years ago

Fusing layers...
Model Summary: 213 layers, 7225885 parameters, 0 gradients

PyTorch: starting from laotie.pt with output shape (1, 25200, 85) (14.8 MB)

ONNX: starting export with onnx 1.11.0...
ONNX: export success, saved as laotie.onnx (29.3 MB)

OpenVINO: starting export with openvino 2.1.2020.4.0-359-21e092122f4-releases/2020/4...
Traceback (most recent call last):
  File "/home/hl-yys/.local/bin/mo", line 5, in <module>
    from mo.main import main
ModuleNotFoundError: No module named 'mo.main'

OpenVINO: export failure: Command 'mo --input_model laotie.onnx --output_dir laotie_openvino_model/' returned non-zero exit status 1.


glenn-jocher commented 2 years ago

@plmmyyds it appears you may have environment problems. Please ensure you meet all dependency requirements if you are attempting to run YOLOv5 locally. If in doubt, create a new Python 3.9 virtual environment, clone the latest repo (code changes daily), and pip install requirements.txt again from scratch.
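
As a concrete sketch for the mo failure above (a minimal sequence, assuming a pip-based OpenVINO install; the openvino-dev package provides the standalone mo converter):

```
python3 -m venv venv && source venv/bin/activate
pip install --upgrade pip
pip install openvino-dev onnx   # openvino-dev provides the `mo` entry point
mo --input_model laotie.onnx --output_dir laotie_openvino_model/
```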

💡 ProTip! Try one of our verified environments below if you are having trouble with your local environment.

Requirements

Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Models and datasets download automatically from the latest YOLOv5 release when first requested.

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

WorstCodeWay commented 1 year ago

@Guemann-ui Hi, I have a problem converting .pt to the OpenVINO format. Here is my log:

(yolov5) E:\yolov5>python export.py --weights weights_pre/yolov5n-seg.pt --device 0 --include openvino
export: data=E:\yolov5\data\coco128.yaml, weights=['weights_pre/yolov5n-seg.pt'], imgsz=[640, 640], batch_size=1, device=0, half=False, inplace=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=17, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['openvino']
YOLOv5  v7.0-173-g5733342 Python-3.10.11 torch-2.0.1 CUDA:0 (NVIDIA GeForce RTX 2070 with Max-Q Design, 8192MiB)

Fusing layers...
YOLOv5n-seg summary: 224 layers, 1986637 parameters, 0 gradients, 7.1 GFLOPs

PyTorch: starting from weights_pre\yolov5n-seg.pt with output shape (1, 25200, 117) (4.1 MB)

ONNX: starting export with onnx 1.14.0...
================ Diagnostic Run torch.onnx.export version 2.0.1 ================
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

ONNX: export success  0.8s, saved as weights_pre\yolov5n-seg.onnx (8.0 MB)

OpenVINO: starting export with openvino 2023.0.0-10926-b4452d56304-releases/2023/0...
usage: main.py [options]
main.py: error: unrecognized arguments: --data_type FP32
OpenVINO: export failure  2.2s: Command '['mo', '--input_model', 'weights_pre\\yolov5n-seg.onnx', '--output_dir', 'weights_pre\\yolov5n-seg_openvino_model\\', '--data_type', 'FP32']' returned non-zero exit status 2.

Export complete (9.0s)
Results saved to E:\yolov5\weights_pre
Detect:          python segment\predict.py --weights weights_pre\yolov5n-seg.onnx
Validate:        python segment\val.py --weights weights_pre\yolov5n-seg.onnx
PyTorch Hub:     model = torch.hub.load('ultralytics/yolov5', 'custom', 'weights_pre\yolov5n-seg.onnx')  # WARNING  SegmentationModel not yet supported for PyTorch Hub AutoShape inference
Visualize:       https://netron.app

I'm confused by the error unrecognized arguments: --data_type FP32. What's wrong? Besides, my pre-trained weights were downloaded from [yolov5n-seg.pt](https://github.com/ultralytics/yolov5/releases/v7.0#New Segmentation Checkpoints).

glenn-jocher commented 1 year ago

Hello @WorstCodeWay, it appears that --data_type FP32 is not a recognized command argument. You can try bypassing this error by removing the --data_type argument from the command line. This should allow the export to proceed without error.

Also, be aware that PyTorch 2.0.1 doesn't officially support ONNX 1.14 and OpenVINO 2023.0.0 yet. It's recommended to use earlier versions to make sure this error is not caused by unsupported version combinations. For example, PyTorch 1.9.0, ONNX 1.8.1, and OpenVINO 2021.3 should work together properly.
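
For reference, the OpenVINO 2022.1+ Model Optimizer replaced --data_type with an FP16 compression flag; a hedged sketch of the 2023-era commands, file names taken from the log above:

```
mo --input_model yolov5n-seg.onnx --output_dir yolov5n-seg_openvino_model/ --compress_to_fp16=False  # FP32 IR
mo --input_model yolov5n-seg.onnx --output_dir yolov5n-seg_openvino_model/ --compress_to_fp16=True   # FP16 IR
```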

WorstCodeWay commented 1 year ago

Hello @WorstCodeWay, it appears that --data_type FP32 is not a recognized command argument. You can try bypassing this error by removing the --data_type argument from the command line. This should allow the export to proceed without error.

@glenn-jocher Thanks for the quick reply. Now I understand, and I'll try the suggested PyTorch and OpenVINO versions.

glenn-jocher commented 1 year ago

@WorstCodeWay great, please try those suggestions and let us know if that resolves the issue. I appreciate your help in troubleshooting this error.

Guemann-ui commented 1 year ago

Hi @WorstCodeWay

Sorry for the late reply! Did you solve it?

glenn-jocher commented 1 year ago

Hello @Guemann-ui,

I noticed that you asked @WorstCodeWay if they were able to solve their issue with the YOLOv5 conversion to OpenVINO. If they haven't replied yet, I suggest following up with them to see if they have made any progress.

If you have a similar issue, feel free to share your problem and any error messages you receive. We'll do our best to assist you in resolving it.

Best regards.

yao-xiaofei commented 1 year ago

Hi @glenn-jocher @Guemann-ui,

I encountered the same problem (OpenVINO 2023.0.1). After removing the --data_type FP32 parameter, the model can be exported successfully.

glenn-jocher commented 1 year ago

Hi @yao-xiaofei,

Glad to hear that with OpenVINO 2023.0.1, removing the --data_type FP32 parameter from the command line allowed the export to complete successfully. Thank you for providing the solution.

Thank you for providing the solution.