Open SongYii opened 1 year ago
It seems your compiler enforces -D_GLIBCXX_ASSERTIONS by default, which turns on assertions in some C++ standard library constructs. However, I can't reproduce the crash with -D_GLIBCXX_ASSERTIONS on my machine.
You could use gdb and step through the code to find the root cause of this error, or you can try to build mmdeploy with -DCMAKE_CXX_FLAGS=-U_GLIBCXX_ASSERTIONS.
Were you able to solve the problem? I'm facing the same issue and would appreciate any help.
@NicolasPetermann134
Hi, are you using the onnxruntime backend? Can you run inference on the model with the pure onnxruntime API?
According to https://github.com/open-mmlab/mmdeploy/issues/2191, I guess the error may be due to loading the onnxruntime custom operator library. As it cannot be reproduced on my machine, could you please verify it?
The input/output names and shapes may differ depending on your onnx model; you can check them with netron (or query them directly, as in the sketch after the first example below). The custom onnxruntime operator library can be found in your Python package installation path (the sketch after the second example shows one way to locate it).
with pure onnxruntime api
import onnxruntime as ort
import numpy as np
sess = ort.InferenceSession('/path/to/onnx', None, ['CPUExecutionProvider'])
input = np.random.randn(1, 3, 224, 224).astype(np.float32) # b, c, h, w
output = sess.run(['output'], input_feed={'input': input})
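If you are unsure what the actual input/output names and shapes are, they can also be queried from onnxruntime directly. A minimal sketch, where '/path/to/onnx' is a placeholder; note that if the model contains mmdeploy custom ops, the custom operator library from the next example has to be registered before the session can be created.
import onnxruntime as ort

# Print the model's real input/output names, shapes, and dtypes
sess = ort.InferenceSession('/path/to/onnx', None, ['CPUExecutionProvider'])
for i in sess.get_inputs():
    print('input :', i.name, i.shape, i.type)
for o in sess.get_outputs():
    print('output:', o.name, o.shape, o.type)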
onnxruntime api with custom operator library
import onnxruntime as ort
import numpy as np
session_options = ort.SessionOptions()
session_options.register_custom_ops_library('/path/to/libmmdeploy_ort_net.so') # if you hit the problem when converting the model, the lib should be libmmdeploy_onnxruntime_ops.so
sess = ort.InferenceSession('/path/to/onnx', session_options, ['CPUExecutionProvider'])
input = np.random.randn(1, 3, 224, 224).astype(np.float32) # b, c, h, w
output = sess.run(['output'], input_feed={'input': input})
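If you are not sure where the custom operator library lives, here is a rough way to locate it (a sketch, assuming the prebuilt mmdeploy_runtime wheel is installed, which ships the SDK libraries inside the package directory as shown later in this thread):
import glob
import os

import mmdeploy_runtime

# The prebuilt wheels place the shared libraries next to the package's __init__.py
pkg_dir = os.path.dirname(mmdeploy_runtime.__file__)
for lib in sorted(glob.glob(os.path.join(pkg_dir, 'lib*.so*'))):
    print(lib)  # e.g. libmmdeploy_ort_net.so, libmmdeploy.so.1, libonnxruntime.so.*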
Hi Chen Xin,
Many thanks for your reply. I can verify that I can run both models mmdet and mmpose with pure onnxruntime api!
But I do indeed get a loading error with the custom operator library:
Traceback (most recent call last):
File "/home/nico/PycharmProjects/deepGym/mmdeploy_old/pure_ort_api.py", line 14, in
But the file exists.
Any idea how I could fix it within your SDK?
Just saw that you recommend installing onnxruntime 1.8.1 in #2191. I'm using version 1.15.1. Unfortunately, I cannot install 1.8.1:
pip install onnxruntime==1.8.1
ERROR: Could not find a version that satisfies the requirement onnxruntime==1.8.1 (from versions: 1.12.0, 1.12.1, 1.13.1, 1.14.0, 1.14.1, 1.15.0, 1.15.1)
ERROR: No matching distribution found for onnxruntime==1.8.1
@NicolasPetermann134
https://pypi.org/project/onnxruntime/1.8.1/#files
onnxruntime 1.8.1 only supports Python 3.6 through 3.9.
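A quick sanity check of the interpreter/onnxruntime pairing in the active environment (a sketch; the 3.6–3.9 bound is for the 1.8.1 wheels listed above):
import sys

import onnxruntime as ort

# onnxruntime 1.8.1 wheels only exist for Python 3.6-3.9, so check both sides
print('python     :', '.'.join(map(str, sys.version_info[:3])))
print('onnxruntime:', ort.__version__)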
I'm facing the same issue. Would appreciate any help a lot.
Are you using the C SDK? Which device are you passing?
Oh ok, will downgrade to python 3.9 then.
No, I'm using the Python SDK. What do you mean by which device I'm passing? I'm following the guide from the RTMPose site: https://github.com/open-mmlab/mmpose/tree/1.x/projects/rtmpose#%EF%B8%8F-how-to-deploy-
FYI: I still get this error even when I run the SDK with Python 3.8 and onnxruntime 1.8.1.
@NicolasPetermann134
Sorry for the late reply. Could you print the LD_LIBRARY_PATH environment variable?
@irexyc
echo $LD_LIBRARY_PATH
/home/nico/PycharmProjects/deepGym/mmdeploy/onnxruntime-linux-x64-1.8.1/lib:/home/nico/PycharmProjects/deepGym/mmdeploy/onnxruntime-linux-x64-1.8.1/lib:/home/nico/PycharmProjects/deepGym/mmdeploy/onnxruntime-linux-x64-gpu-1.8.1/lib:/home/nico/PycharmProjects/deepGym/mmdeploy/onnxruntime-linux-x64-gpu-1.8.1/lib:/home/nico/PycharmProjects/deepGym/mmdeploy/onnxruntime-linux-x64-gpu-1.8.1/lib:/home/nico/PycharmProjects/deepGym/mmdeploy/onnxruntime-linux-x64-1.8.1/lib:/home/nico/PycharmProjects/deepGym/mmdeploy/cuda/lib64:/home/nico/PycharmProjects/deepGym/mmdeploy/TensorRT-8.2.3.0/lib:
@irexyc Does that tell you anything? Should it look different?
@NicolasPetermann134 Sorry for the late reply. The path looks OK.
In your previous reply you said
I can verify that I can run both models mmdet and mmpose with pure onnxruntime api!
Have you tried whether this code works?
import onnxruntime as ort
import numpy as np
session_options = ort.SessionOptions()
session_options.register_custom_ops_library('/path/to/libmmdeploy_ort_net.so') # if you hit the problem when converting the model, the lib should be libmmdeploy_onnxruntime_ops.so
sess = ort.InferenceSession('/path/to/onnx', session_options, ['CPUExecutionProvider'])
input = np.random.randn(1, 3, 224, 224).astype(np.float32) # b, c, h, w
output = sess.run(['output'], input_feed={'input': input})
If the above code doesn't work, the Python SDK will not work either, and the problem should be in loading the onnxruntime custom ops library. Then we can try to build the custom ops library against a newer onnxruntime.
If the above code works, the problem should be in some other part of the SDK.
@irexyc
Yes, I've tried that code and it doesn't work. I've got this error:
Traceback (most recent call last):
  File "/home/nico/PycharmProjects/deepGym/mmdeploy_old/pure_ort_api.py", line 14, in <module>
    session_options.register_custom_ops_library(path_custom_ops) # if you face the problem when convert the model, the lib should be libmmdeploy_onnxruntime_ops.so
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Failed to load library /home/nico/miniconda3/envs/openmmlab/lib/python3.10/site-packages/mmdeploy_runtime/libmmdeploy_ort_net.so with error: libmmdeploy.so.1: cannot open shared object file: No such file or directory
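For context: the error above says that libmmdeploy_ort_net.so needs libmmdeploy.so.1, which the dynamic loader could not resolve, either because a package from a different environment is being pointed at or because its directory is not on the loader's search path. A diagnostic sketch, assuming the path from the error message (adjust it to the environment you actually run), that checks whether the dependency is present and preloads it before registering the custom ops:
import ctypes
import os

import onnxruntime as ort

# Path taken from the error message above; adjust to the environment in use
pkg_dir = '/home/nico/miniconda3/envs/openmmlab/lib/python3.10/site-packages/mmdeploy_runtime'

# Is libmmdeploy.so.1 actually sitting next to libmmdeploy_ort_net.so?
print(os.path.exists(os.path.join(pkg_dir, 'libmmdeploy.so.1')))

# If it is, preloading it may let the loader satisfy the dependency when
# onnxruntime dlopen()s libmmdeploy_ort_net.so, even if pkg_dir is not on
# LD_LIBRARY_PATH.
ctypes.CDLL(os.path.join(pkg_dir, 'libmmdeploy.so.1'), mode=ctypes.RTLD_GLOBAL)

session_options = ort.SessionOptions()
session_options.register_custom_ops_library(os.path.join(pkg_dir, 'libmmdeploy_ort_net.so'))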
@NicolasPetermann134 Below is the content of my mmdeploy_runtime installation; what is yours?
/home/chenxin/miniconda3/envs/torch-1.9.0/lib/python3.8/site-packages/mmdeploy_runtime/
├── __init__.py
├── libmmdeploy_ort_net.so
├── libmmdeploy.so.1
├── libonnxruntime.so.1.8.1
├── mmdeploy_runtime.cpython-38-x86_64-linux-gnu.so
├── __pycache__
├── version.py
└── _win_dll_path.py
@irexyc
Really sorry for the late reply, I was abroad.
tree /home/nico/miniconda3/envs/mmdeploy/lib/python3.8/site-packages/mmdeploy_runtime
/home/nico/miniconda3/envs/mmdeploy/lib/python3.8/site-packages/mmdeploy_runtime
├── __init__.py
├── libmmdeploy_ort_net.so
├── libmmdeploy.so.1
├── libonnxruntime.so.1.8.1
├── mmdeploy_runtime.cpython-38-x86_64-linux-gnu.so
├── __pycache__
│   ├── __init__.cpython-38.pyc
│   ├── version.cpython-38.pyc
│   └── _win_dll_path.cpython-38.pyc
├── version.py
└── _win_dll_path.py

1 directory, 10 files
Yes, I've tried that code and it doesn't work. I've got this error:
Traceback (most recent call last):
  File "/home/nico/PycharmProjects/deepGym/mmdeploy_old/pure_ort_api.py", line 14, in <module>
    session_options.register_custom_ops_library(path_custom_ops) # if you face the problem when convert the model, the lib should be libmmdeploy_onnxruntime_ops.so
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Failed to load library /home/nico/miniconda3/envs/openmmlab/lib/python3.10/site-packages/mmdeploy_runtime/libmmdeploy_ort_net.so with error: libmmdeploy.so.1: cannot open shared object file: No such file or directory
What is the content of pure_ort_api.py?
tree /home/nico/miniconda3/envs/mmdeploy/lib/python3.8/site-packages/mmdeploy_runtime
You listed the mmdeploy_runtime content for Python 3.8. Are you using Python 3.8 or 3.10?
@irexyc pure_ort_api.py contains your test code:
import onnxruntime as ort
import numpy as np

path_custom_ops = "/home/nico/miniconda3/envs/mmdeploy/lib/python3.8/site-packages/mmdeploy_runtime/libmmdeploy_ort_net.so"
model_path = 'rtmpose-ort_orig/rtmpose-m/end2end.onnx'

sess = ort.InferenceSession(model_path, None, ['CPUExecutionProvider'])
input = np.random.randn(1, 3, 256, 192).astype(np.float32) # b, c, h, w
output = sess.run(['simcc_y'], input_feed={'input': input})

session_options = ort.SessionOptions()
session_options.register_custom_ops_library(path_custom_ops) # if you face the problem when convert the model, the lib should be libmmdeploy_onnxruntime_ops.so
sess = ort.InferenceSession(model_path, session_options, ['CPUExecutionProvider'])
input = np.random.randn(1, 3, 256, 192).astype(np.float32) # b, c, h, w
output = sess.run(['simcc_y'], input_feed={'input': input})
To your second question: I started with 3.10 but switched to 3.8. The above code is now running, no error anymore with 3.8. But the SDK error still remains:
(mmdeploy) nico@nico-Z690-AORUS-MASTER:~/PycharmProjects/deepGym/mmdeploy$ build/bin/pose_tracker rtmpose-ort_orig/rtmdet-nano/ rtmpose-ort_orig/rtmpose-m/ /home/nico/Downloads/test_video.mp4 --device cpu --det_interval 5
[2023-07-31 21:31:37.306] [mmdeploy] [info] [model.cpp:35] [DirectoryModel] Load model: "rtmpose-ort_orig/rtmpose-m/"
[2023-07-31 21:31:37.306] [mmdeploy] [info] [model.cpp:35] [DirectoryModel] Load model: "rtmpose-ort_orig/rtmdet-nano/"
[2023-07-31 21:31:37.365] [mmdeploy] [info] [inference.cpp:54] ["img"] <- ["data"]
[2023-07-31 21:31:37.365] [mmdeploy] [info] [inference.cpp:65] ["post_output"] -> ["dets"]
[2023-07-31 21:31:37.451] [mmdeploy] [info] [inference.cpp:54] ["img"] <- ["rois"]
[2023-07-31 21:31:37.451] [mmdeploy] [info] [inference.cpp:65] ["post_output"] -> ["keypoints"]
/opt/rh/devtoolset-9/root/usr/include/c++/9/bits/stl_vector.h:1042: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = int; _Alloc = std::allocator
Sorry for the late reply.
The above code is now running, no error anymore with 3.8
Which onnxruntime version are you using? Could you confirm that both onnxruntime 1.8.1 and 1.15.1 work?
@irexyc Yes, your test code now runs without error, but the SDK error still remains.
@NicolasPetermann134 Sorry to bother you again. I'm a little confused now, so I want to make things clearer. Could you confirm the four cases below? (A small script that checks the two library cases is sketched after the questions.)
Can pure onnxruntime 1.8.1 (Python API), without loading libmmdeploy_ort_net.so, run inference without error?
Can pure onnxruntime 1.8.1 (Python API), with libmmdeploy_ort_net.so loaded, run inference without error?
Can pure onnxruntime 1.15.1 (Python API), without loading libmmdeploy_ort_net.so, run inference without error?
Can pure onnxruntime 1.15.1 (Python API), with libmmdeploy_ort_net.so loaded, run inference without error?
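A small sketch that answers the two with/without custom ops cases for whichever onnxruntime version is currently installed; the model path, input name, and 1x3x256x192 shape are taken from earlier snippets in this thread, and the custom ops path is a placeholder to adjust:
import numpy as np
import onnxruntime as ort

MODEL = 'rtmpose-ort_orig/rtmpose-m/end2end.onnx'  # from the thread; adjust as needed
CUSTOM_OPS = '/path/to/mmdeploy_runtime/libmmdeploy_ort_net.so'  # placeholder path

print('onnxruntime', ort.__version__)
for with_custom_ops in (False, True):
    try:
        opts = None
        if with_custom_ops:
            opts = ort.SessionOptions()
            opts.register_custom_ops_library(CUSTOM_OPS)
        sess = ort.InferenceSession(MODEL, opts, ['CPUExecutionProvider'])
        x = np.random.randn(1, 3, 256, 192).astype(np.float32)  # b, c, h, w
        sess.run(None, input_feed={'input': x})  # None -> fetch all outputs
        print('custom ops =', with_custom_ops, ': OK')
    except Exception as e:
        print('custom ops =', with_custom_ops, ': FAILED:', e)

Running it once in the 1.8.1 environment and once in the 1.15.1 environment covers all four cases.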
Describe the bug
C SDK API inference error
Reproduction
./pose_detection cpu ../../mmdeploy_model/hrnet/ ../../demo/resources/human-pose.jpg