Closed Tim5Tang closed 1 year ago
If convenient, could you provide the export command and the model you used so we can reproduce this on our side? Is this the slim model?
It was Chinese New Year recently; please help take a look at this issue. The model is the official one from https://paddleseg.bj.bcebos.com/dygraph/cityscapes/rtformer_slim_cityscapes_1024x512_120k/model.pdparams

The export command was:

```
python tools/export.py --config configs/rtformer/rtformer_slim_cityscapes_1024x512_120k.yml --model_path output/rtfomer/model.pdparams --save_dir output/infer_model --output_op argmax --input_shape 1 3 512 1024
```

I tested it: inference on GPU works fine, but TensorRT reports the error above.
Hello, we have verified the issue you reported. The most important problem is that the export command is wrong: `--input_shape` must be `1 3 512 2048` (see the PaddleSeg config documentation). The solution first:
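For reference, a corrected export invocation might look like the following. This is a sketch based on the command in the report, with only `--input_shape` changed to `1 3 512 2048` as described above; the config, model, and output paths are the reporter's own and may differ in your setup.

```shell
# Export RTFormer-Slim with the input shape matching the Cityscapes config
# (1 batch, 3 channels, 512 height, 2048 width).
python tools/export.py \
    --config configs/rtformer/rtformer_slim_cityscapes_1024x512_120k.yml \
    --model_path output/rtfomer/model.pdparams \
    --save_dir output/infer_model \
    --output_op argmax \
    --input_shape 1 3 512 2048
```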
Besides that, there are a few other issues:
OK, I read your answer carefully. Very thorough, thank you. I'll wait for you to resolve this issue.
Hello, is there a solution yet for the `p2o.Conv.26: two inputs (data and weights) are allowed only in explicit-quantization mode` error? I also ran into it when converting to TensorRT.
RTFormer errors when running with TensorRT; GPU runs fine
Environment
```
~/data/deeplearning/program/deploy/FastDeploy/examples/vision/segmentation/paddleseg/python$ python infer.py --model RTFomer/ --image cityscapes_demo.png --device gpu --use_trt True
[INFO] fastdeploy/vision/common/processors/transform.cc(93)::FuseNormalizeHWC2CHW	Normalize and HWC2CHW are fused to NormalizeAndPermute in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(159)::FuseNormalizeColorConvert	BGR2RGB and NormalizeAndPermute are fused to NormalizeAndPermute with swap_rb=1
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(238)::log	p2o.Conv.26: two inputs (data and weights) are allowed only in explicit-quantization mode.
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(637)::CreateTrtEngineFromOnnx	Failed to parse ONNX model by TensorRT.
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(263)::InitFromOnnx	Failed to create tensorrt engine.
[ERROR] fastdeploy/runtime.cc(864)::CreateTrtBackend	Load model from Paddle failed while initliazing TrtBackend.
Aborted (core dumped)
```