Closed: xinsuinizhuan closed this issue 3 years ago
This is my result.
Which model did you use? Please show the parameters you used in your project.
int main_RetinaFaceDectector()
{
	RetinaFaceDectector m_RetinaFaceDectector;
	Config m_config;
	m_config.onnxModelpath = "E:\\comm_Item\\Item_done\\onnx_tensorrt_pro\\onnx_tensorrt_project-main\\model\\mxnet_onnx_tensorrt_retinaface\\mnet.25-512x512-batchsize_1.onnx";
	m_config.engineFile = "E:\\comm_Item\\Item_done\\onnx_tensorrt_pro\\onnx_tensorrt_project-main\\model\\mxnet_onnx_tensorrt_retinaface\\mnet.25-512x512-fp32_batchsize_1.engine";
	m_config.calibration_image_list_file = "E:\\comm_Item\\Item_done\\onnx_tensorrt_pro\\onnx_tensorrt_project-main\\model\\mxnet_onnx_tensorrt_retinaface\\image\\";
	m_config.maxBatchSize = 1;
	m_config.mode = 2;  // 0 = FP32, 1 = FP16, 2 = INT8
	m_config.calibration_width = 512;
	m_config.calibration_height = 512;
	m_config.conf_thresh = 0.2;
	m_config.m_NMSThresh = 0.2;
	m_RetinaFaceDectector.init(m_config);

	// NOTE: the two declarations below were truncated in the original paste
	// (the issue formatting swallowed the angle brackets). batch_res's element
	// type is inferred from its later use (r.prob, r.rect, r.m_FacePts);
	// "FaceRes" is a placeholder name, and the code that loads images into
	// batch_img is also missing from the paste.
	std::vector<cv::Mat> batch_img;
	std::vector<std::vector<FaceRes>> batch_res;

	float all_time = 0.0;
	Timer timer;
	int m = 1000;
	for (int i = 0; i < m; i++)
	{
		timer.reset();
		m_RetinaFaceDectector.detect(batch_img, batch_res);
		double t = timer.elapsed();
		std::cout << i << ":" << t << "ms" << std::endl;
		if (i > 0)  // skip the first (warm-up) run
		{
			all_time += t;
		}
	}
	std::cout << m << " runs, total time:" << all_time << " ms" << std::endl;
	std::cout << "time per run:" << all_time / (m - 1) << " ms" << std::endl;  // m - 1 runs were accumulated
	std::cout << "FPS:" << 1000 / (all_time / (m - 1)) << std::endl;

	// display
	for (int i = 0; i < batch_img.size(); ++i)
	{
		for (const auto& r : batch_res[i])
		{
			std::cout << "batch " << i << " prob:" << r.prob << " rect:" << r.rect << std::endl;
			cv::rectangle(batch_img[i], r.rect, cv::Scalar(255, 0, 0), 2);
			std::stringstream stream;
			stream << std::fixed << std::setprecision(2) << " score:" << r.prob;
			cv::putText(batch_img[i], stream.str(), cv::Point(r.rect.x, r.rect.y - 5), 0, 0.5, cv::Scalar(0, 0, 255), 2);
			for (size_t j = 0; j < 5; j++) {
				cv::Point2f pt = cv::Point2f(r.m_FacePts.x[j], r.m_FacePts.y[j]);
				cv::circle(batch_img[i], pt, 1, cv::Scalar(0, 255, 0), 2);
			}
		}
		cv::namedWindow("image" + std::to_string(i), cv::WINDOW_NORMAL);
		cv::imshow("image" + std::to_string(i), batch_img[i]);
		//cv::imwrite("E:\\comm_Item\\Item_done\\onnx_tensorrt_pro\\onnx_tensorrt_project-main\\model\\mxnet_onnx_tensorrt_retinaface\\result\\image" + std::to_string(i) + ".png", batch_img[i]);
	}
	cv::waitKey();
	return 0;
}
I used the default-parameter model, mnet.25-512x512-batchsize_1.onnx. This model may not be right; I will try the retinaface_r50_v1-512x512-batchsize_1 model.
m_config.mode = 2 means INT8. You should change it to m_config.mode = 0. An INT8 model requires calibration; see https://github.com/ttanzhiqiang/onnx_tensorrt_project/blob/main/README.md. Supported modes: FP32 (m_config.mode = 0), FP16 (m_config.mode = 1), INT8 (m_config.mode = 2).
Is this model INT8? Given FP32 (m_config.mode = 0), FP16 (m_config.mode = 1), INT8 (m_config.mode = 2), is m_config.mode = 2 correct then?
INT8 needs calibration image data. You should read up on INT8 quantization theory, e.g. https://zhuanlan.zhihu.com/p/58182172.
I see. All items in tiny_tensorrt_onnx.vcxproj work. But for tiny_tensorrt_dyn_onnx.vcxproj, the model in model\yolov5\yolov5-v5_weight is missing and I cannot find it. Could you share it with me?
You can use yolov5l.sim.onnx.
I ran the retinaface and it shows this, so it is not right.