openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

[Bug]: The RTMPose model doesn't work well with GPU device #21165

Closed campos537 closed 1 year ago

campos537 commented 1 year ago

OpenVINO Version

2023.2

Operating System

Other (Please specify in description)

Device used for inference

GPU

Framework

ONNX

Model used

https://github.com/open-mmlab/mmpose/blob/main/projects/rtmpose/README.md

Issue description

The OS used is Ubuntu 22.04.

I've been experimenting with the RTMPose model across several OpenVINO versions, from 2022.2 through 2023.2.

In versions before 2023.2, the model runs on GPU but is slower than CPU inference, and even though it runs, the keypoints it produces are wrong (if I switch the device to CPU, the keypoints are correct). With version 2023.2, the error shown in the "Relevant log output" appears instead.

Step-by-step reproduction

You can download the exact ONNX file from this link.

Then you just need to run it on the GPU to see the log; to see the keypoints output, you also need to implement a decoder. A minimal sketch of both steps is below.
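
For reference, a rough reproduction sketch using the OpenVINO Python API. The file name, input resolution, output order, and SimCC split ratio are all assumptions on my side, since the exact export settings aren't listed here:

```python
# Hypothetical reproduction sketch: compile the exported RTMPose ONNX model for
# GPU and run one dummy inference.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("rtmpose.onnx")          # assumed file name
compiled = core.compile_model(model, "GPU")      # on 2023.2 this call raises the error below

# Dummy input; 1x3x256x192 is a typical RTMPose input resolution (assumption).
dummy = np.zeros((1, 3, 256, 192), dtype=np.float32)
simcc_x, simcc_y = compiled([dummy]).values()    # RTMPose exports usually emit SimCC x/y maps

# Very rough SimCC-style decode (split ratio 2.0 assumed) just to inspect keypoints;
# a proper decoder should follow the mmpose post-processing.
keypoints = np.stack([simcc_x.argmax(-1), simcc_y.argmax(-1)], axis=-1) / 2.0
print(keypoints.shape)                           # expected (1, num_keypoints, 2)
```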

Relevant log output

Traceback (most recent call last):
  File "/home/crystal/Repositorios/cv_analytics_pipeline/env/lib/python3.9/site-packages/cvap/inference/predictor.py", line 186, in get_predictor
    return OpenvinoPredictor(config)
  File "/home/crystal/Repositorios/cv_analytics_pipeline/env/lib/python3.9/site-packages/cvap/inference/openvino_predictor.py", line 51, in __init__
    self.compiled_net = self.ie.compile_model(self.net, target_inference)
  File "/home/crystal/Repositorios/cv_analytics_pipeline/env/lib/python3.9/site-packages/openvino/runtime/ie_api.py", line 543, in compile_model
    super().compile_model(model, device_name, {} if config is None else config),
RuntimeError: Exception from src/inference/src/core.cpp:113:
[ GENERAL_ERROR ] Check 'TRShape::broadcast_merge_into(output_shape, input_shapes[1], autob)' failed at src/core/shape_inference/include/eltwise_shape_inference.hpp:26:
While validating node 'opset1::Multiply Multiply_17584 (546[0]:f16[256,48], Constant_17612[0]:f16[1,256]) -> (f16[48,256])' with friendly_name 'Multiply_17584':
Argument shapes are inconsistent.


rkazants commented 1 year ago

@p-durandin, @vladimir-paramuzov, please help with it.

Wan-Intel commented 1 year ago

I replicated the issue with Benchmark Python Tool from OpenVINO™ Development Tools 2023.2.0.

I encountered the same error as campos537 when running inference on the ONNX model with the GPU plugin, while inference on the ONNX model with the CPU plugin works fine:

ONNX model with CPU plugin: OK (screenshot "onnx cpu ok")

ONNX model with GPU plugin: fails (screenshot "onnx gpu fail")

After converting the ONNX model into Intermediate Representation (IR), the Benchmark Python Tool ran successfully with the IR on both the CPU and GPU plugins (a sketch of the conversion is after the results below):

IR with CPU plugin: OK (screenshot "IR cpu ok")

IR with GPU plugin: OK (screenshot "IR gpu ok")
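
For anyone hitting the same thing, here is a sketch of an equivalent conversion using the OpenVINO Python API (the exact command I used isn't shown above, and the file names are placeholders):

```python
# Hypothetical sketch: convert the ONNX model to OpenVINO IR offline, then
# compile the IR for GPU, which is the path reported to work above.
import openvino as ov

ov_model = ov.convert_model("rtmpose.onnx")       # assumed file name
ov.save_model(ov_model, "rtmpose.xml")            # writes rtmpose.xml + rtmpose.bin (FP16 weights by default)

core = ov.Core()
compiled = core.compile_model(core.read_model("rtmpose.xml"), "GPU")
```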

Wan-Intel commented 1 year ago

Closing issue, feel free to re-open or start a new issue if additional assistance is needed.