PaddlePaddle / FastDeploy

⚡️ An easy-to-use and fast deep learning model deployment toolkit for ☁️ cloud, 📱 mobile, and 📹 edge. It covers 20+ mainstream scenarios across image, video, text, and audio, and ships 150+ SOTA models with end-to-end optimization, multi-platform, and multi-framework support.
https://www.paddlepaddle.org.cn/fastdeploy
Apache License 2.0

fastdeploy inference with the ser_vi_layoutxlm_xfund_infer and re_vi_layoutxlm_xfund_infer models fails with an error #1544

Open ChengShuting opened 1 year ago

ChengShuting commented 1 year ago

Signal (11) received.
 0# 0x000055618231A8A9 in fastdeployserver
 1# 0x00007F1940F8D210 in /usr/lib/x86_64-linux-gnu/libc.so.6
 2# 0x00007F19410D5885 in /usr/lib/x86_64-linux-gnu/libc.so.6
 3# void paddle_infer::Tensor::CopyFromCpu<long>(long const*) in /opt/fastdeploy/third_libs/install/paddle_inference/paddle/lib/libpaddle_inference.so
 4# fastdeploy::ShareTensorFromFDTensor(paddle_infer::Tensor*, fastdeploy::FDTensor&) in /opt/fastdeploy/lib/libfastdeploy_runtime.so.1.0.4
 5# fastdeploy::PaddleBackend::Infer(std::vector<fastdeploy::FDTensor, std::allocator<fastdeploy::FDTensor> >&, std::vector<fastdeploy::FDTensor, std::allocator<fastdeploy::FDTensor> >*, bool) in /opt/fastdeploy/lib/libfastdeploy_runtime.so.1.0.4
 6# fastdeploy::Runtime::Infer() in /opt/fastdeploy/lib/libfastdeploy_runtime.so.1.0.4
 7# 0x00007F1920705E94 in /opt/tritonserver/backends/fastdeploy/libtriton_fastdeploy.so
 8# 0x00007F1920709726 in /opt/tritonserver/backends/fastdeploy/libtriton_fastdeploy.so
 9# TRITONBACKEND_ModelInstanceExecute in /opt/tritonserver/backends/fastdeploy/libtriton_fastdeploy.so
10# 0x00007F1941B1783A in /opt/tritonserver/bin/../lib/libtritonserver.so
11# 0x00007F1941B1804D in /opt/tritonserver/bin/../lib/libtritonserver.so
12# 0x00007F19419CC801 in /opt/tritonserver/bin/../lib/libtritonserver.so
13# 0x00007F1941B11DC7 in /opt/tritonserver/bin/../lib/libtritonserver.so
14# 0x00007F194137BDE4 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6
15# 0x00007F19417F9609 in /usr/lib/x86_64-linux-gnu/libpthread.so.0
16# clone in /usr/lib/x86_64-linux-gnu/libc.so.6

Segmentation fault (core dumped)
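The crash happens inside the FastDeploy Triton backend while copying input tensors to Paddle Inference. For context, the sketch below (not from the original report) shows how an inference request is typically sent to the FastDeploy serving container through Triton's gRPC client; the input name "x", its shape, and its dtype are placeholders, and the real signature should be taken from each model's config.pbtxt or from get_model_metadata().

```python
# Minimal sketch of an inference request to the FastDeploy serving (Triton) container.
# Assumptions: tritonclient[grpc] installed, server listening on localhost:8001,
# and the input name/shape/dtype below are placeholders.
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")

# Inspect the deployed model to find its real input/output signature.
print(client.get_model_metadata("ser_vi_layoutxlm_xfund_infer"))

# Hypothetical request: replace "x" and the shape/dtype with the values reported above.
data = np.zeros((1, 512), dtype=np.int64)
infer_input = grpcclient.InferInput("x", list(data.shape), "INT64")
infer_input.set_data_from_numpy(data)
result = client.infer(model_name="ser_vi_layoutxlm_xfund_infer", inputs=[infer_input])
```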

heliqi commented 1 year ago

Are you using the 1.0.4 image? Try switching to the 1.0.2 image; the 1.0.4 image has a known bug that is currently being fixed.

ChengShuting commented 1 year ago

Yes, I'm using the 1.0.4 image. Thanks.

ChengShuting commented 1 year ago

After switching to the 1.0.2 image, the models fail to load; with the 1.0.4 image the models load successfully, but the error occurs at inference time. Error message:

I0308 03:28:16.633570 121 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f0e56000000' with size 268435456
I0308 03:28:16.635631 121 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0308 03:28:16.643682 121 model_repository_manager.cc:1022] loading: re_vi_layoutxlm_xfund_infer:1
I0308 03:28:16.744703 121 model_repository_manager.cc:1022] loading: ser_vi_layoutxlm_xfund_infer:1
I0308 03:28:17.165979 121 fastdeploy_runtime.cc:1173] TRITONBACKEND_Initialize: fastdeploy
I0308 03:28:17.166048 121 fastdeploy_runtime.cc:1182] Triton TRITONBACKEND API version: 1.6
I0308 03:28:17.166068 121 fastdeploy_runtime.cc:1187] 'fastdeploy' TRITONBACKEND API version: 1.6
I0308 03:28:17.166109 121 fastdeploy_runtime.cc:1216] backend configuration: {}
I0308 03:28:17.168707 121 fastdeploy_runtime.cc:1246] TRITONBACKEND_ModelInitialize: re_vi_layoutxlm_xfund_infer (version 1)
I0308 03:28:17.170379 121 fastdeploy_runtime.cc:1246] TRITONBACKEND_ModelInitialize: ser_vi_layoutxlm_xfund_infer (version 1)
I0308 03:28:17.178275 121 fastdeploy_runtime.cc:1285] TRITONBACKEND_ModelInstanceInitialize: re_vi_layoutxlm_xfund_infer_0 (GPU device 0)
[Paddle2ONNX] LodTensorArray is not supported.
[Paddle2ONNX] Oops, there are some operators not supported yet, including bilinear_tensor_product,conditional_block,empty,lod_array_length,select_input,softmax_with_cross_entropy,tensor_array_to_tensor,while,write_to_array,
[ERROR] Due to the unsupported operators, the conversion is aborted.
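The [Paddle2ONNX] messages above indicate that the re_vi_layoutxlm graph contains operators (while, write_to_array, tensor_array_to_tensor, ...) that cannot be converted to ONNX, so ONNX-based backends are not an option and inference has to stay on Paddle Inference. As a sanity check outside Triton, a minimal sketch with the FastDeploy Python Runtime is shown below; the model paths are placeholders, and the exact option name may vary across FastDeploy versions (older releases expose use_paddle_backend()).

```python
# Minimal sketch (assumed paths) that loads the exported model with FastDeploy's
# Python Runtime and pins it to the Paddle Inference backend, so the Paddle2ONNX
# conversion reported above is never attempted.
import fastdeploy as fd

option = fd.RuntimeOption()
option.set_model_path("re_vi_layoutxlm_xfund_infer/inference.pdmodel",    # placeholder path
                      "re_vi_layoutxlm_xfund_infer/inference.pdiparams")  # placeholder path
option.use_gpu(0)
option.use_paddle_infer_backend()  # keep Paddle Inference; do not convert to ONNX

runtime = fd.Runtime(option)
for i in range(runtime.num_inputs()):
    print(runtime.get_input_info(i))
```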

elonzh commented 7 months ago

After upgrading to version 1.0.6, inference works normally.