Open YouSmart2016 opened 1 year ago
Is the model you are using the one provided in our documentation?
Yes, this one: https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
The Python example works, but the C++ one does not.
Environment
I followed this document: https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/quick_start/models/cpp.md
Problem log and steps to reproduce
C:\Project\Paddle\PaddleDetection\build\Release>infer_demo.exe
[INFO] fastdeploy/vision/common/processors/transform.cc(45)::fastdeploy::vision::FuseNormalizeCast Normalize and Cast are fused to Normalize in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(93)::fastdeploy::vision::FuseNormalizeHWC2CHW Normalize and HWC2CHW are fused to NormalizeAndPermute in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(159)::fastdeploy::vision::FuseNormalizeColorConvert BGR2RGB and NormalizeAndPermute are fused to NormalizeAndPermute with swap_rb=1
[INFO] fastdeploy/runtime/backends/openvino/ov_backend.cc(218)::fastdeploy::OpenVINOBackend::InitFromPaddle number of streams:1.
[INFO] fastdeploy/runtime/backends/openvino/ov_backend.cc(228)::fastdeploy::OpenVINOBackend::InitFromPaddle affinity:YES.
[INFO] fastdeploy/runtime/backends/openvino/ov_backend.cc(240)::fastdeploy::OpenVINOBackend::InitFromPaddle Compile OpenVINO model on device_name:CPU.
[INFO] fastdeploy/runtime/runtime.cc(279)::fastdeploy::Runtime::CreateOpenVINOBackend Runtime initialized with Backend::OPENVINO in Device::CPU.
DetectionResult: [xmin, ymin, xmax, ymax, score, label_id]
Visualized result save in vis_result.jpg
[Model accuracy issue]
vis_result.jpg shows no detections.