Closed Jaren-Li closed 2 years ago
Hello, can I ask a question? How do I configure the C++ OpenVINO environment? I use the CLion IDE to develop C++ code.
@funny000 , the easiest way to create custom code/an environment based on OpenVINO is to start from the official sample application, because it already has the required paths/DLLs linked, so you don't have to set them manually.
It's recommended to use Visual Studio 2019 as this is officially supported by OpenVINO (you can use the free version).
You may refer here for the full execution steps including images.
@Jaren-Li , for that purpose you may refer to Changes to Inference Pipeline in OpenVINO API v2, as this is the latest official documentation on transitioning to OpenVINO API 2.0.
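Roughly, the 2.0 C++ pipeline boils down to the following (a minimal sketch, not taken from your code; "model.xml" is a placeholder path, the input is simply zero-filled, and a static input shape is assumed):

#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;
    // Read and compile the model for CPU in one step ("model.xml" is a placeholder).
    ov::CompiledModel compiled_model = core.compile_model("model.xml", "CPU");
    ov::InferRequest infer_request = compiled_model.create_infer_request();

    // Allocate an input tensor matching the model's (static) input shape.
    auto input_port = compiled_model.input();
    ov::Tensor input_tensor(input_port.get_element_type(), input_port.get_shape());
    infer_request.set_input_tensor(input_tensor);

    infer_request.infer();

    // The output is available as a tensor once infer() returns.
    ov::Tensor output = infer_request.get_output_tensor();
    std::cout << "output shape: " << output.get_shape() << std::endl;
    return 0;
}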
OK, thanks, I have solved this problem. But I've met a new problem: my C++ inference is significantly slower than Python. Python inference takes only about 5 ms, while C++ needs about 25 ms. There is no problem with the accuracy of the detection boxes; only the speed is very different.
Python:
ie = Core()
net = ie.read_model(args.onnx_path)
test_image = image_preprocess(args.img_path, args.in_shape)
compiled_model = ie.compile_model(net, 'CPU')
time0 = time.time()
output = compiled_model.infer_new_request({0: test_image})
time1 = time.time()
timed = time1 - time0
C++:
Core core;
ov::Shape input_shape = {1, 3, 320, 320};
CompiledModel compiled_model = core.compile_model(model_path, "CPU");
InferRequest infer_request = compiled_model.create_infer_request();
auto input_port = compiled_model.input();
float *aaaaaa;
imagePreprocessing(test, aaaaaa);
Tensor input_tensor(input_port.get_element_type(), input_shape, aaaaaa);
infer_request.set_input_tensor(input_tensor);
long a = cv::getTickCount();
infer_request.infer();
cout <<"time"<< double(cv::getTickCount() - a)/cv::getTickFrequency() << endl;
[model input picture]
Have you tried to run your model with the OpenVINO Benchmark Python Tool? This would show you detailed latency, throughput, etc.
Hey @Jaren-Li , how did you solve the problem of converting cv::Mat to a tensor with API 2.0? It has confused me for quite a few days. Would you like to share your related C++ code? Thank you very much.
Mat img;
float* input_data = (float*)img.data;
ov::Tensor input_tensor(input_port.get_element_type(), input_port.get_shape(), input_data);
or
void imagePreprocessing(Mat img, float* &result) {
    Mat RGBImg, ResizeImg;
    cvtColor(img, RGBImg, COLOR_BGR2RGB);
    cv::resize(RGBImg, ResizeImg, Size(320, 320));
    int channels = ResizeImg.channels(), height = ResizeImg.rows, width = ResizeImg.cols;
    result = (float*)malloc(channels * height * width * sizeof(float));
    memset(result, 0, channels * height * width * sizeof(float));
    // Convert HWC (OpenCV layout) to CHW and normalize with ImageNet mean/std
    float mean_rgb[3] = {0.485f, 0.456f, 0.406f};
    float std_rgb[3] = {0.229f, 0.224f, 0.225f};
    uint8_t* ptMat = ResizeImg.ptr<uint8_t>(0);
    int area = height * width;
    for (int h = 0; h < height; ++h) {
        for (int w = 0; w < width; ++w) {
            for (int c = 0; c < channels; ++c) {
                int srcIdx = (h * width + w) * channels + c; // HWC index in the cv::Mat buffer
                int dstIdx = c * area + h * width + w;       // CHW index in the output buffer
                result[dstIdx] = (ptMat[srcIdx] / 255.0f - mean_rgb[c]) / std_rgb[c];
            }
        }
    }
}
With the first way I didn't get correct inference results; maybe my preprocessing operation is wrong. The second way is OK, but very slow. I hope it can help you.
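As an aside, the per-pixel loop could likely be replaced with OpenCV's vectorized operations (a sketch, assuming the same 320x320 size, the same ImageNet constants, and an already-allocated result buffer as above):

Mat RGBImg, ResizeImg, FloatImg;
cvtColor(img, RGBImg, COLOR_BGR2RGB);
cv::resize(RGBImg, ResizeImg, Size(320, 320));
// u8 -> f32 and divide by 255 in one vectorized call
ResizeImg.convertTo(FloatImg, CV_32F, 1.0 / 255.0);
// split into per-channel planes (which gives the CHW ordering), then normalize per channel
std::vector<Mat> planes(3);
cv::split(FloatImg, planes);
float mean_rgb[3] = {0.485f, 0.456f, 0.406f};
float std_rgb[3] = {0.229f, 0.224f, 0.225f};
int area = 320 * 320;
for (int c = 0; c < 3; ++c) {
    planes[c] = (planes[c] - mean_rgb[c]) / std_rgb[c];
    memcpy(result + c * area, planes[c].ptr<float>(0), area * sizeof(float));
}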
I know what my problem is: I only measured the first run. When I run multiple times and take the average, everything works fine, thanks.
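For reference, a minimal sketch of timing with the warm-up run excluded (N is a hypothetical run count; infer_request is the one from the C++ snippet above):

// Warm up once so the first inference (which includes one-time allocations
// and internal caching) is excluded from the measurement.
infer_request.infer();

const int N = 100; // hypothetical number of timed runs
int64_t start = cv::getTickCount();
for (int i = 0; i < N; ++i) {
    infer_request.infer();
}
double seconds = double(cv::getTickCount() - start) / cv::getTickFrequency();
cout << "average latency: " << (seconds / N) * 1000.0 << " ms" << endl;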
But in the preprocessing part I did not see where you divide the image by 255 before subtracting the mean. How can I perform this preprocessing operation through OpenVINO 2.0?
Often I have OpenCV available in my code and do the normalization like this:
import cv2
... ... ...
blob_input = cv2.dnn.blobFromImage( pad_img, 1/255, swapRB=True )
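In C++, cv::dnn::blobFromImage does the same scaling and channel swap (a sketch; the image path and the 320x320 size are just assumptions):

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>

cv::Mat pad_img = cv::imread("input.jpg"); // hypothetical input image
// Scales by 1/255, swaps BGR->RGB, and returns a 1x3x320x320 float NCHW blob
// whose data pointer can then be wrapped in an ov::Tensor.
cv::Mat blob = cv::dnn::blobFromImage(pad_img, 1.0 / 255.0, cv::Size(320, 320),
                                      cv::Scalar(), /*swapRB=*/true, /*crop=*/false);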
Thank you very much. I have solved my problem and implemented the conversion. Here is my code:
std::shared_ptr<unsigned char> getData(cv::Mat& img) {
int width = img.cols;
int height = img.rows;
std::shared_ptr<unsigned char> _data;
size_t size = width * height * img.channels();
_data.reset(new unsigned char[size], std::default_delete<unsigned char[]>());
Mat resized(cv::Size(width, height), img.type(), _data.get());
cv::resize(img, resized, cv::Size(width, height));
return _data;
}
and then:
ov::element::Type input_tensor_type = ov::element::u8;
const ov::Layout input_tensor_layout{ "NHWC" };
auto input_data = getData(input_mat);
auto input_tensor = ov::Tensor(input_tensor_type, input_tensor_shape, input_data.get());
The input_tensor layout must be NHWC, because NCHW doesn't work with this getData function. If your network's input layout is NCHW, you should convert the input tensor in the preprocessing module with:
ov::preprocess::PrePostProcessor ppp(model);
ppp.input().tensor().set_layout("NHWC");
ppp.input().model().set_layout("NCHW");
Moreover, preprocessing (std, mean, etc.) in OpenVINO API 2.0 seems better handled this way: https://docs.openvino.ai/latest/openvino_2_0_preprocessing.html.
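For example, the divide-by-255 plus ImageNet mean/std from the earlier snippet could be folded into the model roughly like this (a sketch; the constants are mean*255 and std*255, and model is the ov::Model returned by core.read_model):

ov::preprocess::PrePostProcessor ppp(model);
// The application feeds u8 NHWC image data.
ppp.input().tensor().set_element_type(ov::element::u8).set_layout("NHWC");
// Convert to f32, subtract the mean and divide by the std (ImageNet values scaled by 255).
ppp.input().preprocess()
    .convert_element_type(ov::element::f32)
    .mean({123.675f, 116.28f, 103.53f})
    .scale({58.395f, 57.12f, 57.375f});
// The model itself expects NCHW; the layout conversion is inserted automatically.
ppp.input().model().set_layout("NCHW");
model = ppp.build();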
I hope it can help you.
Closing issue, feel free to re-open or start a new issue if additional assistance is needed.
Hello, how can I convert cv::Mat to ov::Tensor in OpenVINO 2.0? Thanks.
Hello, how did you solve the conversion? Can you tell me? Thank you
Hello Li, sorry to reply so late. Here is my solution below.
void segmentation_inference(int compiled_model_index, vector<cv::Mat>* input_mats, vector<cv::Mat>* ret_mats) {
std::shared_ptr<unsigned char> input_data[1024];
ov::element::Type input_tensor_type = ov::element::u8;
const ov::Layout input_tensor_layout{ "NHWC" };
ov::Shape input_tensor_shape = { 1,
(unsigned long long)(*input_mats)[0].rows,
(unsigned long long)(*input_mats)[0].cols,
(unsigned long long)(*input_mats)[0].channels()
};
std::vector<ov::Tensor> input_tensors;
for (int i = 0; i < (*input_mats).size(); ++i) {
input_data[i] = getData((*input_mats)[i]);
input_tensors.push_back(ov::Tensor(input_tensor_type, input_tensor_shape, input_data[i].get()));
}
ov::InferRequest infer_request = compiled_models[compiled_model_index].create_infer_request();
infer_request.set_input_tensors(input_tensors);
infer_request.infer();
const ov::Tensor& output_tensor = infer_request.get_output_tensor();
*ret_mats = getSegRets(output_tensor.data<float>(),
output_tensor.get_shape()[0],
output_tensor.get_shape()[3],
output_tensor.get_shape()[1],
output_tensor.get_shape()[2]);
}
std::shared_ptr<unsigned char> getData(cv::Mat& img) {
int width = img.cols;
int height = img.rows;
if (img.depth() != CV_8U)
return NULL;
std::shared_ptr<unsigned char> _data;
size_t size = width * height * img.channels();
_data.reset(new unsigned char[size], std::default_delete<unsigned char[]>());
cv::Mat resized(cv::Size(width, height), img.type(), _data.get());
if (width != img.cols || height != img.rows) {
cout << "Image is resized from (" << img.cols << ", " << img.rows << ") to (" << width << ", " << height
<< ")" << endl;
}
// cv::resize() just copy data to output image if sizes are the same
cv::resize(img, resized, cv::Size(width, height));
return _data;
}
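For completeness, a hypothetical call site would look something like this (the image path is just an example; the vectors are passed by pointer, as in the signature above, and compiled_models[0] is assumed to be a compiled segmentation model):

std::vector<cv::Mat> inputs = { cv::imread("frame.png") }; // hypothetical image
std::vector<cv::Mat> results;
segmentation_inference(0, &inputs, &results);
// results now holds the segmentation output(s) produced by getSegRets.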
Hope it helps you. ^^
Hello, thank you very much for your reply. I found the built-in method provided by OpenVINO on their official website.
Hello, would you mind telling me the URL of that built-in method on the official website? Thank you.
Hello, it has been too long and I do not remember the URL. The following code was written with reference to the official website:
// set img data
auto input_data = InferenceEngine::make_shared_blob<float>(input_info->getTensorDesc());
input_data->allocate();
memcpy(input_data->buffer(), in_frame.data, in_frame.total() * in_frame.elemSize());
Here, in_frame is OpenCV's Mat and input_info is the network's InputInfo.
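For OpenVINO API 2.0, a rough equivalent is to wrap the Mat's data in an ov::Tensor directly (a sketch, assuming the compiled model takes a u8 NHWC input with the same HxWxC as in_frame, that infer_request already exists, and that in_frame stays alive and unmodified during inference):

ov::Shape shape = { 1,
                    static_cast<size_t>(in_frame.rows),
                    static_cast<size_t>(in_frame.cols),
                    static_cast<size_t>(in_frame.channels()) };
// No copy is made: the tensor references the cv::Mat's buffer.
ov::Tensor input_tensor(ov::element::u8, shape, in_frame.data);
infer_request.set_input_tensor(input_tensor);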