Open SwEngine opened 1 year ago
I also tried with CMake, but got the same errors.
When I was compiling, I also encountered the same error, did you solve this problem?
I solved this problem by switching to onnxruntime version 1.8.1; it compiled successfully.
I changed my onnxruntime version to 1.8.1 but still get an error:

error: no matching function for call to ‘Ort::Session::Session(Ort::Env&, const wchar_t*, Ort::SessionOptions&)’
   76 | ort_session = new Session(env, widestr.c_str(), sessionOptions);
string model_path = config.modelpath;
// std::wstring widestr = std::wstring(model_path.begin(), model_path.end());
// OrtStatus* status = OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, 0);
sessionOptions.SetGraphOptimizationLevel(ORT_ENABLE_BASIC);
ort_session = new Session(env, model_path.c_str(), sessionOptions);
Change widestr to model_path and delete the line std::wstring widestr = std::wstring(model_path.begin(), model_path.end());. Then it will compile.
I modified my code like this:

string model_path = config.modelpath;
// std::wstring widestr = std::wstring(model_path.begin(), model_path.end());
// OrtStatus* status = OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, 0);
sessionOptions.SetGraphOptimizationLevel(ORT_ENABLE_BASIC);
ort_session = new Session(env, model_path.c_str(), sessionOptions);

A new error occurs: corrupted size vs. prev_size, Aborted (core dumped)
After switching to onnxruntime 1.8.1, this is the only place I changed; everything else is untouched. My build command is:

g++ main.cpp -o demo.out -lonnxruntime \
  -I/media/nie/D/soft/onnxruntime-linux-x64-1.8.1/include \
  -L/media/nie/D/soft/onnxruntime-linux-x64-1.8.1/lib \
  `pkg-config --cflags --libs opencv4`
YOLOV7_face::YOLOV7_face(Net_config config)
{
	this->confThreshold = config.confThreshold;
	this->nmsThreshold = config.nmsThreshold;
string model_path = config.modelpath;
// std::wstring widestr = std::wstring(model_path.begin(), model_path.end());
//OrtStatus* status = OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, 0);
sessionOptions.SetGraphOptimizationLevel(ORT_ENABLE_BASIC);
ort_session = new Session(env, model_path.c_str(), sessionOptions);
size_t numInputNodes = ort_session->GetInputCount();
size_t numOutputNodes = ort_session->GetOutputCount();
AllocatorWithDefaultOptions allocator;
for (int i = 0; i < numInputNodes; i++)
{
input_names.push_back(ort_session->GetInputName(i, allocator));
Ort::TypeInfo input_type_info = ort_session->GetInputTypeInfo(i);
auto input_tensor_info = input_type_info.GetTensorTypeAndShapeInfo();
auto input_dims = input_tensor_info.GetShape();
input_node_dims.push_back(input_dims);
}
for (int i = 0; i < numOutputNodes; i++)
{
output_names.push_back(ort_session->GetOutputName(i, allocator));
Ort::TypeInfo output_type_info = ort_session->GetOutputTypeInfo(i);
auto output_tensor_info = output_type_info.GetTensorTypeAndShapeInfo();
auto output_dims = output_tensor_info.GetShape();
output_node_dims.push_back(output_dims);
}
this->inpHeight = input_node_dims[0][2];
this->inpWidth = input_node_dims[0][3];
}
Thank you very much! It runs now, but when I use the ONNX model exported from my own training, I get an ONNX-version error. Is this a problem with the official export.py?
When I run the main.cpp in Colab with "!g++ /content/yolov7-detect-face-onnxrun-cpp-py/main.cpp -o cv -I/usr/include/opencv4 -I/usr/local/include/onnxruntime/", I got the errors below. What can be the problem and the solution?