wmcnally / kapao

KAPAO is an efficient single-stage human pose estimation model that detects keypoints and poses as objects and fuses the detections to predict human poses.
GNU General Public License v3.0

how to export to onnx? #71

Open xinsuinizhuan opened 1 year ago

ykk648 commented 1 year ago

I have successfully converted the model to ONNX:

torch.onnx.export(model, img, './test.onnx', verbose=True, opset_version=opset_version, input_names=input_names, output_names=output_names, dynamic_axes=dynamic_axes)

xddlj commented 1 year ago

> I have successfully converted the model to ONNX: torch.onnx.export(model, img, './test.onnx', verbose=True, opset_version=opset_version, input_names=input_names, output_names=output_names, dynamic_axes=dynamic_axes)

When I export the .pt to ONNX, I get this error. Can you tell me how you converted the model successfully?

RuntimeError: Exporting the operator silu to ONNX opset version 11 is not supported. Please open a bug to request ONNX export support for the missing operator.

ykk648 commented 1 year ago

@xddlj try opset_version = 13
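
If bumping the opset is not possible, another common workaround, borrowed from YOLOv5's export script, is to swap nn.SiLU for an implementation built from operators that older opsets do support. A minimal sketch; `model` here stands for your already-loaded KAPAO model:

import torch
import torch.nn as nn

class SiLUExport(nn.Module):
    # export-friendly SiLU built from Sigmoid and Mul, both available in old opsets
    @staticmethod
    def forward(x):
        return x * torch.sigmoid(x)

def replace_silu(module):
    # recursively swap every nn.SiLU activation for the export-friendly version
    for name, child in module.named_children():
        if isinstance(child, nn.SiLU):
            setattr(module, name, SiLUExport())
        else:
            replace_silu(child)

replace_silu(model)  # call this before torch.onnx.export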

PaulX1029 commented 1 year ago

> I have successfully converted the model to ONNX: torch.onnx.export(model, img, './test.onnx', verbose=True, opset_version=opset_version, input_names=input_names, output_names=output_names, dynamic_axes=dynamic_axes)

How should the parameters of this function be filled in? With the official .pt I don't know what input/output shapes and names to use for the conversion.

ykk648 commented 1 year ago
import torch

def torch2onnx(model_, input_, output_name="./test.onnx"):
    input_names = ["input_1"]
    output_names = ["output_1"]
    opset_version = 13
    dynamic_axes = None  # static shapes; uncomment below for dynamic batch/size
    # dynamic_axes = {'input_1': [0, 2, 3], 'output_1': [0, 1]}
    torch.onnx.export(model_, input_, output_name, verbose=True, opset_version=opset_version,
                      input_names=input_names, output_names=output_names,
                      dynamic_axes=dynamic_axes, do_constant_folding=True)
    print('convert done!')  # the original `raise 'convert done !'` is a TypeError in Python 3

@PaulX1029

PaulX1029 commented 1 year ago

Are you converting the official kapao_s_coco.pt? Following your code, the conversion reports this error:

Traceback (most recent call last):
  File "/mnt/sda/AI/kapao-master/export_xzw.py", line 18, in <module>
    torch2onnx(model_path, img, output_name)
  File "/mnt/sda/AI/kapao-master/export_xzw.py", line 11, in torch2onnx
    dynamic_axes=dynamic_axes, do_constant_folding=True)
  File "/mnt/sda/AI/miniconda3/envs/yolov5/lib/python3.7/site-packages/torch/onnx/__init__.py", line 276, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "/mnt/sda/AI/miniconda3/envs/yolov5/lib/python3.7/site-packages/torch/onnx/utils.py", line 94, in export
    use_external_data_format=use_external_data_format)
  File "/mnt/sda/AI/miniconda3/envs/yolov5/lib/python3.7/site-packages/torch/onnx/utils.py", line 676, in _export
    with select_model_mode_for_export(model, training):
  File "/mnt/sda/AI/miniconda3/envs/yolov5/lib/python3.7/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/mnt/sda/AI/miniconda3/envs/yolov5/lib/python3.7/site-packages/torch/onnx/utils.py", line 38, in select_model_mode_for_export
    is_originally_training = model.training
AttributeError: 'str' object has no attribute 'training'

PaulX1029 commented 1 year ago

[screenshot] @ykk648

PaulX1029 commented 1 year ago

Sorry, I misunderstood what you meant. The model needs to be loaded with the torch framework first, right?

ykk648 commented 1 year ago

@PaulX1029

xinsuinizhuan commented 1 year ago

It would be best if an official export.py script were provided.
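
In the meantime, a minimal end-to-end sketch of what such a script could look like, assuming the checkpoint follows the YOLOv5 convention that KAPAO inherits (a dict storing the model under the 'model' key; the path and input size below are examples):

import torch

# load the checkpoint first: torch.onnx.export needs a model object, not a path string
ckpt = torch.load('kapao_s_coco.pt', map_location='cpu')  # assumption: YOLOv5-style checkpoint dict
model = ckpt['model'].float().eval()   # FP16 weights -> FP32, switch to inference mode
img = torch.zeros(1, 3, 640, 640)      # dummy input at the training resolution
torch2onnx(model, img, './kapao_s_coco.onnx')  # the helper from the comment above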

nikhilchh commented 1 year ago

I converted the model to ONNX with the following options:

im = torch.randn(1, 3, 640, 640).type_as(next(model.parameters()))

torch.onnx.export(
        model.cpu(),
        im.cpu(),
        "kapao.onnx",
        verbose=False,
        opset_version=12,
        do_constant_folding=True,  
        input_names=['images'],
        output_names=['output'],
        dynamic_axes=None)

Conversion seems to be successful. But when I load the model for inference using onnxruntime, I get an error:

session = ort.InferenceSession(model_path)

Error:

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from kapao.onnx failed:Node (Mul_2329) Op (Mul) [ShapeInferenceError] Incompatible dimensions

Was anyone able to run inference using ONNX Runtime?
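
One way to localize such a failure before involving onnxruntime is to run the onnx package's own checker and shape inference over the exported file; a minimal sketch:

import onnx

m = onnx.load('kapao.onnx')
onnx.checker.check_model(m)  # structural validation (can pass even when ORT later fails)
# strict shape inference raises on conflicting dimensions, pointing at the offending
# node (here Mul_2329); strict_mode needs a reasonably recent onnx release
onnx.shape_inference.infer_shapes(m, strict_mode=True)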

ykk648 commented 1 year ago

https://github.com/ykk648/AI_power/blob/main/body_lib/body_kp_detector/body_kp_detector_kapao/body_kp_detector_kapao.py
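
For reference, stripped down to the onnxruntime calls, inference looks roughly like this (the input name and size assume the export settings shown earlier in the thread):

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('test.onnx', providers=['CPUExecutionProvider'])
input_name = session.get_inputs()[0].name           # 'input_1' with the export above
img = np.zeros((1, 3, 640, 640), dtype=np.float32)  # put a preprocessed image here
outputs = session.run(None, {input_name: img})      # list of output arrays
print([o.shape for o in outputs])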

nikhilchh commented 10 months ago

@ykk648

I went through your dependencies to find where exactly you call "onnxruntime.InferenceSession(model_path)", but I could not find the code for ModelBase:

'from ...model_base import ModelBase'

nikhilchh commented 10 months ago

I found some changes that were made to the yolov5 repo to handle this issue:

https://github.com/ultralytics/yolov5/pull/2982

I guess this is the cause of the issue during inference.

ykk648 commented 10 months ago

@nikhilchh https://github.com/ykk648/apstone/blob/main/apstone/wrappers/onnx_wrapper/onnx_model.py https://github.com/ykk648/apstone/blob/main/apstone/model_base.py