PaddlePaddle / PaddleDetection

Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.
Apache License 2.0

Mask RT-DETR exported model fails to load with TRT in C++ deployment #9152

Open yski opened 1 week ago

yski commented 1 week ago

Search before asking

Bug Component

Deploy

Describe the Bug

- Windows, C++
- Export environment: Paddle 3.0b1 / PaddleDetection develop
- Inference environment: Paddle Inference 3.0.0 beta1
- Loading the model on CPU: works
- Loading the model with CUDA: works
- Loading the model with TRT fails with the following error:


'''
C++ Traceback (most recent call last):
Not support stack backtrace yet.

Error Message Summary:
InvalidArgumentError: paddle::get failed, cannot get value (desc.GetAttr("dim")) by type class std::vector<int,class std::allocator<int> >, its type is class std::vector<int64,class std::allocator<int64> >. (at C:\home\workspace\Paddle\paddle\fluid\inference\tensorrt\op_teller.cc:2329)
'''
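
For reference, the TRT case is enabled roughly as in the sketch below (a minimal, hypothetical paddle_infer C++ snippet; the model paths, workspace size, and batch size are placeholders rather than my exact settings):

```cpp
#include "paddle_inference_api.h"  // shipped in the paddle_inference package

int main() {
  paddle_infer::Config config;
  // Placeholder paths to the exported Mask RT-DETR inference model.
  config.SetModel("mask_rtdetr/model.pdmodel", "mask_rtdetr/model.pdiparams");
  config.EnableUseGpu(500 /* initial GPU memory in MB */, 0 /* device id */);

  // The CPU and CUDA runs skip this call and work fine; the error above
  // only appears once the TensorRT subgraph pass is enabled.
  config.EnableTensorRtEngine(1 << 30 /* workspace size */,
                              1      /* max batch size */,
                              3      /* min subgraph size */,
                              paddle_infer::PrecisionType::kFloat32,
                              false  /* use_static */,
                              false  /* use_calib_mode */);

  // op_teller.cc (where the error points) runs while the predictor is
  // created and the TRT-convertible subgraphs are analyzed.
  auto predictor = paddle_infer::CreatePredictor(config);
  return 0;
}
```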

Environment

- Windows, C++
- Export environment: Paddle 3.0b1 / PaddleDetection develop
- Inference environment: Paddle Inference 3.0.0 beta1

Bug description confirmation

Are you willing to submit a PR?

liu-jiaxuan commented 1 day ago

The error message "Conversion to JSON format is not supported" indicates an issue with the result type returned by the model, specifically the SegmentationResult: FastDeploy may not be able to serialize that output into the expected JSON format. Ensure that the model you are using is compatible with the FastDeploy framework and its TensorRT optimizations; some models contain layers or operations that the current FastDeploy version does not support. Consider using a simpler model or re-evaluating the model architecture, and update to the latest version if possible. You can also inspect the model outputs manually to confirm they are in the expected format, as sketched below.
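
A minimal, hypothetical sketch of that manual check using the paddle_infer C++ API the reporter is already on (not FastDeploy); it simply dumps every output tensor's name and shape after a forward pass:

```cpp
#include <iostream>

#include "paddle_inference_api.h"

// Print each output tensor's name and shape so the raw results can be
// inspected before any post-processing or serialization step.
void DumpOutputs(paddle_infer::Predictor* predictor) {
  for (const auto& name : predictor->GetOutputNames()) {
    auto tensor = predictor->GetOutputHandle(name);
    std::cout << name << ": [ ";
    for (int d : tensor->shape()) std::cout << d << " ";
    std::cout << "]" << std::endl;
  }
}
```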

liu-jiaxuan commented 1 day ago

Hi, based on the error message you provided, there are a few issues to check:

  1. Operator compatibility: some operators in your model may not be compatible with the current TRT configuration. Please check your model against the documentation and make sure every operator it uses is supported by TRT.
  2. Dimension mismatch: the InvalidArgumentError suggests that an input or output dimension does not match what is expected. Please confirm that the input data dimensions match what the model requires (see the sketch after this list). You can also try updating to the latest FastDeploy and TRT versions.
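
A minimal sketch of point 2, assuming the standard paddle_infer C++ API: dynamic input shape ranges can be pinned explicitly so the TRT subgraph sees consistent dimensions. The input names and the 640x640 shape below are assumptions for a typical PaddleDetection export and must be adjusted to the actual model:

```cpp
#include <map>
#include <string>
#include <vector>

#include "paddle_inference_api.h"

// Declare min/max/opt shapes for the dynamic inputs before creating the
// predictor. Names and shapes are illustrative placeholders.
void SetDynamicShapes(paddle_infer::Config* config) {
  std::map<std::string, std::vector<int>> min_shape = {
      {"image", {1, 3, 640, 640}},
      {"scale_factor", {1, 2}},
      {"im_shape", {1, 2}}};
  // Fixed-size example: min == max == opt; widen the ranges if the
  // exported model really takes variable input sizes.
  config->SetTRTDynamicShapeInfo(min_shape, min_shape, min_shape);
}
```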
yski commented 1 day ago

> Hi, based on the error message you provided, there are a few issues to check:
>
> 1. Operator compatibility: some operators in your model may not be compatible with the current TRT configuration. Please check your model against the documentation and make sure every operator it uses is supported by TRT.
> 2. Dimension mismatch: the InvalidArgumentError suggests that an input or output dimension does not match what is expected. Please confirm that the input data dimensions match what the model requires. You can also try updating to the latest FastDeploy and TRT versions.

Hi, I exported the model following the tutorial, and it runs fine in CPU and CUDA modes. For the RT-DETR family the docs mention exporting with the --trt flag for TRT deployment; could it be that the Mask-RTDETR export code forgot to handle this part? The paddle_inference I am using is this version, which is already the latest on the official site: https://paddle-inference-lib.bj.bcebos.com/3.0.0-beta1/cxx_c/Windows/GPU/x86-64_cuda12.3_cudnn9.0.0_trt8.6.1.6_mkl_avx_vs2019/paddle_inference.zip

The file name says cuda12.3_cudnn9.0.0_trt8.6.1.6, but the build info inside the extracted package shows different versions. I configured CUDA, cuDNN, and TRT according to the build info below:

Paddle version: 3.0.0-beta1
GIT COMMIT ID: a842a0f40f6111fb0c2df218130d0560aa747bc8
WITH_MKL: ON
WITH_ONEDNN: ON
WITH_GPU: ON
WITH_ROCM: OFF
WITH_IPU: OFF
CUDA version: 12.0
CUDNN version: v8.9
CXX compiler version: 19.29.30154.0
WITH_TENSORRT: ON
TensorRT version: v8.6.1.6