PaddlePaddle / FastDeploy

⚡️ An easy-to-use and fast deep learning model deployment toolkit for ☁️ Cloud, 📱 Mobile, and 📹 Edge, covering 20+ mainstream scenarios across image, video, text, and audio with 150+ SOTA models, end-to-end optimization, and multi-platform, multi-framework support.
https://www.paddlepaddle.org.cn/fastdeploy
Apache License 2.0

Serving deployment of a PPYoloe model: client requests fail with tritonclient.utils.InferenceServerException: [StatusCode.UNAVAILABLE] Socket closed #2299

Open huangjun11 opened 9 months ago

huangjun11 commented 9 months ago

Friendly reminder: according to informal community statistics, asking your question following the issue template speeds up responses and problem resolution.


Environment

Problem log and the steps that triggered the issue

Traceback (most recent call last):
  File "/home/jsjx/junhuang/FastDeploy/examples/vision/detection/paddledetection/serving/paddledet_grpc_client.py", line 103
    result = runner.Run([im, ])
  File "/home/jsjx/junhuang/FastDeploy/examples/vision/detection/paddledetection/serving/paddledet_grpc_client.py", line 73, in Run
    results = self._client.infer(
  File "/home/jsjx/anaconda3/envs/tuobao/lib/python3.9/site-packages/tritonclient/grpc/_client.py", line 1380, in infer
    raise_error_grpc(rpc_error)
  File "/home/jsjx/anaconda3/envs/tuobao/lib/python3.9/site-packages/tritonclient/grpc/_utils.py", line 77, in raise_error_grpc
    raise get_error_grpc(rpc_error) from None
tritonclient.utils.InferenceServerException: [StatusCode.UNAVAILABLE] Socket closed

At this point, the server side is also terminated. (screenshot attached)

rainyfly commented 7 months ago

When starting the container, try specifying --shm-size to allocate more shared memory.
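For reference, Docker's default /dev/shm is only 64 MB, which can be too small for serving workloads that exchange tensors through shared memory. A minimal sketch of what raising it could look like is below; the image name, mount path, and size value are placeholders, not the exact command from this issue, so adjust them to your FastDeploy serving setup:

    # Hypothetical example: relaunch the serving container with a larger shared memory segment.
    # <fastdeploy-serving-image> stands in for whichever FastDeploy serving image you use.
    docker run -it --net=host \
        --shm-size="2g" \
        -v $PWD:/serving \
        <fastdeploy-serving-image> bash

After relaunching the server inside the container, re-run paddledet_grpc_client.py and check whether the Socket closed error still occurs.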