Closed: campos537 closed this issue 1 year ago.
@p-durandin, @vladimir-paramuzov, please help with it.
I replicated the issue with the Benchmark Python Tool from OpenVINO™ Development Tools 2023.2.0.
I encountered the same error as campos537 when running inference on the ONNX model with the GPU plugin, while inference on the same model with the CPU plugin works fine (a minimal API-level sketch follows the two cases below):
ONNX model with CPU plugin:
ONNX model with GPU plugin:
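For reference, the same reproduction can be driven directly from the OpenVINO Python API instead of the Benchmark Python Tool. This is a minimal sketch assuming OpenVINO 2023.2 and a local ONNX file named rtmpose.onnx (hypothetical filename):

```python
# Minimal repro sketch, assuming OpenVINO 2023.2 and a hypothetical
# "rtmpose.onnx" in the working directory.
import openvino as ov

core = ov.Core()
model = core.read_model("rtmpose.onnx")

# CPU plugin: compiles and runs fine.
compiled_cpu = core.compile_model(model, "CPU")

# GPU plugin: fails with the error reported in this issue.
compiled_gpu = core.compile_model(model, "GPU")
```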
After converting the ONNX model into Intermediate Representation (IR), the Benchmark Python Tool ran successfully with both the CPU and GPU plugins (a conversion sketch follows the two cases below):
IR with CPU plugin:
IR with GPU plugin:
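The conversion step itself can be done with the model conversion API; here is a sketch assuming OpenVINO 2023.2's convert_model/save_model functions and the same hypothetical rtmpose.onnx:

```python
# ONNX-to-IR conversion sketch, assuming OpenVINO 2023.2 and a hypothetical
# "rtmpose.onnx" input file.
import openvino as ov

ov_model = ov.convert_model("rtmpose.onnx")
ov.save_model(ov_model, "rtmpose.xml")  # writes rtmpose.xml + rtmpose.bin

# The resulting IR then compiles on both devices.
core = ov.Core()
ir_model = core.read_model("rtmpose.xml")
compiled_gpu = core.compile_model(ir_model, "GPU")
```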
Closing issue, feel free to re-open or start a new issue if additional assistance is needed.
OpenVINO Version
2023.2
Operating System
Other (Please specify in description)
Device used for inference
GPU
Framework
ONNX
Model used
https://github.com/open-mmlab/mmpose/blob/main/projects/rtmpose/README.md
Issue description
The OS used is Ubuntu 22.04.
I've been experimenting with the RTMPose model across OpenVINO versions from 2022.2 through 2023.2.
In the versions before 2023.2, the model runs on GPU but more slowly than on CPU, and the keypoints it produces are wrong (switching the device to CPU yields the correct keypoints). With version 2023.2, the error shown in "Relevant log output" appears instead. A sketch of the CPU-vs-GPU comparison follows.
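To make the keypoint mismatch concrete, here is a hedged sketch of the comparison; the file name rtmpose.onnx and the 1x3x256x192 input shape are assumptions about this particular export, not taken from the issue:

```python
# CPU-vs-GPU output comparison sketch; "rtmpose.onnx" and the 1x3x256x192
# input shape are assumptions about this particular export.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("rtmpose.onnx")
dummy = np.random.rand(1, 3, 256, 192).astype(np.float32)

cpu_results = list(core.compile_model(model, "CPU")(dummy).values())
# On 2023.2 the next line raises the reported error; on earlier releases it
# runs but produces outputs that diverge from the CPU results.
gpu_results = list(core.compile_model(model, "GPU")(dummy).values())

for cpu_out, gpu_out in zip(cpu_results, gpu_results):
    print("max abs diff:", np.max(np.abs(cpu_out - gpu_out)))
```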
Step-by-step reproduction
You can download the exact ONNX file at this link.
Then run it on the GPU to see the log output; to inspect the keypoints you also need to implement a decoder for the model's outputs, as sketched below.
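RTMPose encodes keypoints as SimCC x/y classification vectors, so the decoder looks roughly like the following. This is a hedged sketch assuming two output heads of shapes (N, K, W·ratio) and (N, K, H·ratio) and the split ratio of 2.0 described in the RTMPose paper; it has not been verified against this exact export:

```python
# Hedged SimCC decoder sketch for RTMPose outputs; head shapes and the
# split_ratio default are assumptions based on the RTMPose paper.
import numpy as np

def decode_simcc(simcc_x: np.ndarray, simcc_y: np.ndarray,
                 split_ratio: float = 2.0) -> np.ndarray:
    """Decode (N, K, Wx) / (N, K, Wy) SimCC logits to (N, K, 2) pixel coords."""
    x = np.argmax(simcc_x, axis=-1) / split_ratio
    y = np.argmax(simcc_y, axis=-1) / split_ratio
    return np.stack([x, y], axis=-1).astype(np.float32)
```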
Relevant log output
Issue submission checklist