PaddlePaddle / FastDeploy

⚡️An easy-to-use and fast deep learning model deployment toolkit for ☁️Cloud, 📱Mobile, and 📹Edge. Covers 20+ mainstream scenarios across image, video, text, and audio, with 150+ SOTA models, end-to-end optimization, and multi-platform, multi-framework support.
https://www.paddlepaddle.org.cn/fastdeploy
Apache License 2.0

Same image: PaddleSeg Python inference and the exported model in FastDeploy give very different results #1148

Closed yueyue0574 closed 1 year ago

yueyue0574 commented 1 year ago

Friendly reminder: according to informal community statistics, asking questions with the issue template speeds up responses and resolutions.


Environment

For the same image, PaddleSeg's own Python inference and the exported model running under FastDeploy give very different results: PaddleSeg separates the left and right roads and the middle divider, while in FastDeploy the foreground sticks together.

Original image: roi

PaddleSeg result (project's own inference script): paddleseg

FastDeploy result: fastdeploy

FastDeploy inference code:

```cpp
void PaddleSegTrtInfer(const std::string& model_dir, const std::string& image_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "deploy.yaml";

  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  option.UseTrtBackend();
  option.EnableTrtFP16();
  option.SetTrtCacheFile("bin/paddleseg_road2/trtcache_paddleseg_road_fp16");
  /*option.UseCpu();
  option.UsePaddleBackend();*/
  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
      model_file, params_file, config_file, option);

  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);
  fastdeploy::vision::SegmentationResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }

  std::cout << res.Str() << std::endl;
  auto vis_im = fastdeploy::vision::VisSegmentation(im, res, 0.5);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}
```

Trained with configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml.

Trained model and exported model: https://pan.baidu.com/s/186wnFcRmGfiT5v9sF1aPFw?pwd=fear (extraction code: fear)

yueyue0574 commented 1 year ago

One more data point: today I tried the Python version of FastDeploy and the inference result is the same. vis_image

Inference code:

```python
import cv2
import fastdeploy.vision as vision

im = cv2.imread("imgs/roi.jpg")
model = vision.segmentation.PaddleSegModel(
    "bin/paddleseg_road/model.pdmodel",
    "bin/paddleseg_road/model.pdiparams",
    "bin/paddleseg_road/deploy.yaml")

result = model.predict(im)
print(result)

vis_im = vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("vis_image.jpg", vis_im)
```

yueyue0574 commented 1 year ago

One more finding: if I export the model without --input_shape, FastDeploy inference is correct. Previously I had exported with --input_shape 1 3 512 512. vis_image
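If a fixed-shape export must be kept, a workaround consistent with the finding above is to resize each image to the exported input_shape yourself before handing it to FastDeploy. A minimal sketch, assuming the 1 3 512 512 export mentioned above; it uses a pure-NumPy nearest-neighbour resize so it is self-contained, whereas in practice `cv2.resize` with bilinear interpolation would be the usual choice for images:

```python
import numpy as np

def fit_to_export_shape(img, target_h=512, target_w=512):
    """Nearest-neighbour resize of an HxWxC image to the fixed
    input_shape the model was exported with. Doing this before
    prediction means the deployment side never has to resize."""
    h, w = img.shape[:2]
    rows = np.arange(target_h) * h // target_h
    cols = np.arange(target_w) * w // target_w
    return img[rows[:, None], cols[None, :]]

img = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for cv2.imread output
resized = fit_to_export_shape(img)
print(resized.shape)  # (512, 512, 3)
```

Note the caveat: the predicted mask then lives in 512x512 coordinates, so any mapping back to the original resolution is up to the caller.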

felixhjh commented 1 year ago

> One more finding: if I export the model without --input_shape, FastDeploy inference is correct. Previously I had exported with --input_shape 1 3 512 512. vis_image

To explain the logic inside FastDeploy: if a seg model is exported with a fixed input_shape and the image fed to FastDeploy does not match that shape, the image is resized to the exported input_shape for inference, and the resulting mask is then resized back to the original image size, which loses precision. So when a model is exported with a fixed input_shape, the assumption is that inference inputs also have a fixed shape (ideally equal to input_shape; resizing to it is up to the user before the image is passed to FastDeploy). The resize step FastDeploy adds is only a fallback. I will add a comment about this in the code. Thanks! Feel free to report back if anything else comes up; we will follow up promptly.
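The precision loss described above can be reproduced with a minimal NumPy sketch (not FastDeploy code; the nearest-neighbour resize below stands in for how a deployment pipeline must resize an integer label mask). A one-pixel-wide structure, like the lane divider in the reported image, can vanish entirely after a round trip through a smaller fixed input_shape:

```python
import numpy as np

def resize_nearest(mask, out_h, out_w):
    """Nearest-neighbour resize for an integer label mask (interpolating
    would blend class ids, so masks are always resized this way)."""
    in_h, in_w = mask.shape
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return mask[rows[:, None], cols[None, :]]

# An 8x8 mask: class-1 "road" with a 1-pixel-wide class-2 "divider".
mask = np.ones((8, 8), dtype=np.uint8)
mask[:, 5] = 2

# Round trip through a smaller fixed shape (4x4 here) and back.
small = resize_nearest(mask, 4, 4)
restored = resize_nearest(small, 8, 8)

print("divider pixels before:", int((mask == 2).sum()))      # 8
print("divider pixels after :", int((restored == 2).sum()))  # 0
```

The divider column is simply never sampled on the way down, so the two road regions merge, which is exactly the "sticking together" symptom reported above.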

yueyue0574 commented 1 year ago

> One more finding: if I export the model without --input_shape, FastDeploy inference is correct. Previously I had exported with --input_shape 1 3 512 512. vis_image

> To explain the logic inside FastDeploy: if a seg model is exported with a fixed input_shape and the image fed to FastDeploy does not match that shape, the image is resized to the exported input_shape for inference, and the resulting mask is then resized back to the original image size, which loses precision. So when a model is exported with a fixed input_shape, the assumption is that inference inputs also have a fixed shape (ideally equal to input_shape; resizing to it is up to the user before the image is passed to FastDeploy). The resize step FastDeploy adds is only a fallback. I will add a comment about this in the code. Thanks! Feel free to report back if anything else comes up; we will follow up promptly.

Thanks!