TexasInstruments / edgeai-benchmark

This repository has been moved. The new location is https://github.com/TexasInstruments/edgeai-tensorlab (see also https://github.com/TexasInstruments/edgeai).

Unsupported model IR version when trying to use prebuilt ONNX model #22

Closed: kyflores closed this issue 8 months ago

kyflores commented 8 months ago

Hi,

I'm trying to adapt the tutorial_detection notebook to work with a different prebuilt model, specifically edgeai-mmdet/yolox_s_lite_640x640_20220221_model.onnx. I'm on an Ubuntu host, set up the environment with the provided setup_pc.sh script, and both my edgeai-benchmark and edgeai-modelzoo repos are on the r9.1 tag.

First I set session_name = constants.SESSION_NAME_ONNXRT in cell 6, then changed the pipeline_configs to this, just trying to follow the naming:

pipeline_configs = {
    'od-mlpefmnv1': dict(
        task_type='detection',
        calibration_dataset=calib_dataset,
        input_dataset=val_dataset,
        preprocess=preproc_transforms.get_transform_onnx((300,300), (300,300), backend='cv2'),
        session=session_type(**onnx_session_cfg,
            runtime_options=runtime_options,
            #model_path=f'{settings.models_path}/vision/detection/coco/mlperf/ssd_mobilenet_v1_coco_20180128.tflite'),
            model_path=f'{settings.models_path}/vision/detection/coco/edgeai-mmdet/yolox_s_lite_640x640_20220221_model.onnx'),
        postprocess=postproc_transforms.get_transform_detection_mmdet_onnx(),
        metric=dict(label_offset_pred=datasets.coco_det_label_offset_90to90()),
        model_info=dict(metric_reference={'accuracy_ap[.5:.95]%':23.0})
    )
}
print(pipeline_configs)

When I run the cell to compile the model...

interfaces.run_accuracy(settings, work_dir, pipeline_configs)

it fails with this message:

[ONNXRuntimeError] : 1 : FAIL : Load model from /tmp/tmpysvdgjqj/modelartifacts/8bits/od-mlpefmnv1_onnxrt_coco_edgeai-mmdet_yolox_s_lite_640x640_20220221_model_onnx/model/yolox_s_lite_640x640_20220221_model.onnx failed:/home/kumar/work/ort_1.14/onnxruntime/onnxruntime/core/graph/model.cc:145 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 9, max supported IR version: 8

The ONNX Runtime version installed by the PC setup script is onnxruntime-tidl==1.14.0, but version 1.16.0 is required for IR version 9.

Is there a 1.16.0 version of onnxruntime-tidl, or another pretrained model file that works with IR 8?
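
For reference, a minimal sketch for checking what the local model file actually declares, assuming the onnx Python package installed by setup_pc.sh is available (the path below is illustrative; point it at wherever the model lives on your machine):

import onnx

# Illustrative path; adjust to the local copy of the model.
model_path = 'yolox_s_lite_640x640_20220221_model.onnx'

model = onnx.load(model_path)
# onnxruntime-tidl 1.14 can load models up to IR version 8.
print('IR version:', model.ir_version)
print('Opsets:', [(op.domain or 'ai.onnx', op.version) for op in model.opset_import])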

mathmanu commented 8 months ago

Did you get the model from here? https://github.com/TexasInstruments/edgeai-modelzoo/blob/main/models/vision/detection/coco/edgeai-mmdet/yolox_s_lite_640x640_20220221_model.onnx.link

I am seeing ONNX v7 in netron.app (screenshot attached).

kyflores commented 8 months ago

I thought so, but I just did a fresh checkout of the model zoo and I also have the v7 model now. I must have had a stale file lying around somehow; I remember briefly having a newer onnxruntime package installed while troubleshooting. Sorry for the trouble.

AresGod96 commented 5 months ago

Hi @kyflores, I encountered the same issue as you did. How did you manage to solve it? Thank you.

AresGod96 commented 5 months ago

Problem solved. By default, running setup_pc.sh installs the latest ONNX version (1.15.0), which supports IR 9. To match onnxruntime-tidl==1.14.0 (IR 8), we need to downgrade ONNX to 1.13.0. This is simply done with pip install onnx==1.13.0.
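
A quick sanity check after the downgrade, as a sketch assuming onnxruntime-tidl installs under the usual onnxruntime module name:

import onnx
import onnxruntime

# Expect onnx 1.13.0 alongside onnxruntime(-tidl) 1.14.0.
print('onnx version:', onnx.__version__)
print('onnxruntime version:', onnxruntime.__version__)
# IR version this onnx build serializes by default; should be 8 for onnx 1.13.
print('onnx IR version:', onnx.IR_VERSION)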