PaddlePaddle / PaddleDetection

Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.
Apache License 2.0

Vehicle detection deployment #1449

Closed Sunyingbin closed 4 years ago

Sunyingbin commented 4 years ago

Hi, I tried deploying the provided vehicle detection model. Prediction works fine in PaddleDetection; now I want to deploy it with an AnalysisPredictor and push results over WebSocket in real time, for live vehicle detection on a camera feed. Exporting the model via

python tools/export_serving_model.py -c contrib/VehicleDetection/vehicle_yolov3_darknet.yml --output_dir=./inference_model -o weights=vehicle_yolov3_darknet

now returns the following error:

2020-09-17 11:32:46,980-WARNING: vehicle_yolov3_darknet.pdparams not found, try to load model file saved with [ save_params, save_persistables, save_vars ]
2020-09-17 11:32:48,602-INFO: save_inference_model pruned unused feed variables im_id
2020-09-17 11:32:48,602-INFO: Export serving model to ./inference_model, client side: ./inference_model\serving_client, server side: ./inference_model\serving_server. input: ['image', 'im_size'], output: ['multiclass_nms_0.tmp_0']...

The syntax of the command is incorrect.

Traceback (most recent call last):
  File "tools/export_serving_model.py", line 105, in <module>
    main()
  File "tools/export_serving_model.py", line 92, in main
    save_serving_model(FLAGS, exe, feed_vars, test_fetches, infer_prog)
  File "tools/export_serving_model.py", line 62, in save_serving_model
    main_program=infer_prog)
  File "D:\anaconda3\envs\paddle\lib\site-packages\paddle_serving_client\io\__init__.py", line 98, in save_model
    "w") as fout:
FileNotFoundError: [Errno 2] No such file or directory: './inference_model\\vehicle_yolov3_darknet\\serving_client/serving_client_conf.prototxt'
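
The "syntax of the command is incorrect" line above is Windows shell output, which hints that an internal directory-creation command failed before paddle_serving_client could write the prototxt. An unverified workaround sketch, assuming that is the cause, is to pre-create the directories named in the error message before re-running the export:

# Hypothetical workaround sketch (not from the thread): pre-create the serving
# export directories named in the error above, so the open(..., "w") inside
# paddle_serving_client/io can succeed even if an internal mkdir-style shell
# command fails on Windows.
import os

for sub in ("serving_client", "serving_server"):
    os.makedirs(os.path.join("inference_model", "vehicle_yolov3_darknet", sub),
                exist_ok=True)
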
liuhuiCNN commented 4 years ago

The vehicle_yolov3_darknet model weights are saved with a .tar suffix, while PaddleDetection's default suffix is .pdparams. Please try again with the full path set, or use the download URL:

python tools/export_serving_model.py -c contrib/VehicleDetection/vehicle_yolov3_darknet.yml --output_dir=./inference_model -o weights=https://paddlemodels.bj.bcebos.com/object_detection/vehicle_yolov3_darknet.tar

Sunyingbin commented 4 years ago

OK, I'll give it a try. Thanks a lot!

Sunyingbin commented 4 years ago

2020-09-21 11:47:04,253-INFO: Downloading pedestrian_yolov3_darknet.tar from https://paddlemodels.bj.bcebos.com/object_detection/pedestrian_yolov3_darknet.tar
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240900/240900 [00:29<00:00, 8250.86KB/s]
2020-09-21 11:47:33,780-INFO: Decompressing C:\Users\paddle/.cache/paddle/weights\pedestrian_yolov3_darknet.tar...
2020-09-21 11:47:34,252-WARNING: C:\Users\paddle/.cache/paddle/weights\pedestrian_yolov3_darknet.pdparams not found, try to load model file saved with [ save_params, save_persistables, save_vars ]
2020-09-21 11:47:35,708-WARNING: variable yolo_output.2.conv.bias not used
2020-09-21 11:47:35,708-WARNING: variable yolo_output.1.conv.bias not used
2020-09-21 11:47:35,708-WARNING: variable yolo_output.0.conv.weights not used
2020-09-21 11:47:35,708-WARNING: variable yolo_output.2.conv.weights not used
2020-09-21 11:47:35,708-WARNING: variable yolo_output.0.conv.bias not used
2020-09-21 11:47:35,708-WARNING: variable yolo_output.1.conv.weights not used
2020-09-21 11:47:35,956-INFO: save_inference_model pruned unused feed variables im_id
2020-09-21 11:47:35,957-INFO: Export serving model to ./inference_model, client side: ./inference_model\serving_client, server side: ./inference_model\serving_server. input: ['image', 'im_size'], output: ['multiclass_nms_0.tmp_0']...
The syntax of the command is incorrect.

Traceback (most recent call last):
  File "tools/export_serving_model.py", line 105, in <module>
    main()
  File "tools/export_serving_model.py", line 92, in main
    save_serving_model(FLAGS, exe, feed_vars, test_fetches, infer_prog)
  File "tools/export_serving_model.py", line 62, in save_serving_model
    main_program=infer_prog)
  File "D:\anaconda3\envs\paddle\lib\site-packages\paddle_serving_client\io\__init__.py", line 98, in save_model
    "w") as fout:
FileNotFoundError: [Errno 2] No such file or directory: './inference_model\\vehicle_yolov3_darknet\\serving_client/serving_client_conf.prototxt'

It still fails, and the image size also seems to be an issue.

liuhuiCNN commented 4 years ago

You could first try following this document: https://github.com/liuhuiCNN/PaddleDetection/tree/doc_add_pipeline/deploy/serving

Sunyingbin commented 4 years ago

Following the reference doc, I ran

python tools/infer.py -c configs/yolov3_mobilenet_v1_roadsign.yml -o use_gpu=true weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_roadsign.pdparams --infer_img=demo/road554.png

and prediction fails:

Traceback (most recent call last):
  File "tools/infer.py", line 261, in <module>
    main()
  File "tools/infer.py", line 91, in main
    cfg = load_config(FLAGS.config)
  File "D:\workspace\PaddleDetection\ppdet\core\workspace.py", line 86, in load_config
    cfg = merge_config(yaml.load(f, Loader=yaml.Loader), cfg)
  File "D:\anaconda3\envs\paddle\lib\site-packages\yaml\__init__.py", line 112, in load
    loader = Loader(stream)
  File "D:\anaconda3\envs\paddle\lib\site-packages\yaml\loader.py", line 44, in __init__
    Reader.__init__(self, stream)
  File "D:\anaconda3\envs\paddle\lib\site-packages\yaml\reader.py", line 85, in __init__
    self.determine_encoding()
  File "D:\anaconda3\envs\paddle\lib\site-packages\yaml\reader.py", line 124, in determine_encoding
    self.update_raw()
  File "D:\anaconda3\envs\paddle\lib\site-packages\yaml\reader.py", line 178, in update_raw
    data = self.stream.read(size)
UnicodeDecodeError: 'gbk' codec can't decode byte 0xa1 in position 41: illegal multibyte sequence
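
The UnicodeDecodeError means the YAML config was read with the locale-default GBK codec of a Chinese-locale Windows Python. A minimal sketch of the usual fix, assuming the open() call feeding load_config in ppdet/core/workspace.py passes no encoding, is to open the file explicitly as UTF-8:

# Illustrative fix: open the config explicitly as UTF-8 so yaml.load never
# goes through the GBK default codec of a Chinese-locale Windows Python.
import yaml

with open("configs/yolov3_mobilenet_v1_roadsign.yml", encoding="utf-8") as f:
    cfg = yaml.load(f, Loader=yaml.Loader)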

Sunyingbin commented 4 years ago

Please still help me figure out why the export fails on Windows; otherwise I'll just export on AI Studio.

Sunyingbin commented 4 years ago

Hi, I've now exported the inference model on AI Studio, but deployment/prediction fails; it looks like something doesn't match.

Sunyingbin commented 4 years ago

-----------  Running Arguments -----------
camera_id: -1
image_file: ./contrib/VehicleDetection/demo/001.jpeg
model_dir: ./inference_model/vehicle_yolov3_darknet/
output_dir: output
run_benchmark: False
run_mode: fluid
threshold: 0.5
use_gpu: True
video_file: 
------------------------------------------
-----------  Model Configuration -----------
Model Arch: YOLO
Use Padddle Executor: False
Transform Order: 
--transform op: Resize
--transform op: Normalize
--transform op: Permute
--------------------------------------------
Traceback (most recent call last):
  File "deploy/python/infer.py", line 670, in <module>
    predict_image()
  File "deploy/python/infer.py", line 559, in predict_image
    FLAGS.model_dir, use_gpu=FLAGS.use_gpu, run_mode=FLAGS.run_mode)
  File "deploy/python/infer.py", line 433, in __init__
    use_gpu=use_gpu)
  File "deploy/python/infer.py", line 378, in load_predictor
    predictor = fluid.core.create_paddle_predictor(config)
paddle.fluid.core_avx.EnforceNotMet: 

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::AnalysisPredictor::LoadProgramDesc()
3   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
4   paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
5   std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
6   std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)

----------------------
Error Message Summary:
----------------------
Error: Cannot open file ./inference_model/vehicle_yolov3_darknet/__model__ at (/paddle/paddle/fluid/inference/api/analysis_predictor.cc:706)
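
A quick sanity check for this error (an illustrative snippet, not from the thread): list the directory passed as --model_dir and confirm __model__ sits directly inside it rather than one level deeper.

# Illustrative sanity check: deploy/python/infer.py expects __model__ and
# __params__ directly under --model_dir; if the export nested them one level
# deeper, this listing will show it.
import os

model_dir = "./inference_model/vehicle_yolov3_darknet/"
print(sorted(os.listdir(model_dir)))
print(os.path.isfile(os.path.join(model_dir, "__model__")))
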
qingqing01 commented 4 years ago

@Sunyingbin

  1. If you are deploying locally rather than as a service, export the model with tools/export_model.py, not tools/export_serving_model.py (see the sketch after this list).
  2. Please check whether the file ./inference_model/vehicle_yolov3_darknet/__model__ exists.
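
For reference, a non-serving export command mirroring the serving one used earlier in the thread (the flags are copied from that command, so verify the exact invocation against the export docs):

python tools/export_model.py -c contrib/VehicleDetection/vehicle_yolov3_darknet.yml --output_dir=./inference_model -o weights=https://paddlemodels.bj.bcebos.com/object_detection/vehicle_yolov3_darknet.tar
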
Sunyingbin commented 4 years ago

There is one more directory level. I see what you mean now: I exported the model the wrong way, right?

Sunyingbin commented 4 years ago

Right now I can export the inference model on Linux or on AI Studio (exporting on Windows is still being worked on) and then deploy it on Windows. Thanks for the pointers @qingqing01
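
For completeness, here is a rough sketch of the AnalysisPredictor-based deployment asked about at the top of the thread. It assumes the Paddle 1.x fluid inference API visible in the traceback above (fluid.core.create_paddle_predictor); the input names and output name come from the export log, while the dummy 608x608 input shape and the GPU settings are assumptions to adapt:

# Rough AnalysisPredictor sketch (Paddle 1.x fluid inference API; unverified).
import os
import numpy as np
from paddle.fluid.core import AnalysisConfig, create_paddle_predictor

model_dir = "./inference_model/vehicle_yolov3_darknet"
config = AnalysisConfig(os.path.join(model_dir, "__model__"),
                        os.path.join(model_dir, "__params__"))
config.enable_use_gpu(200, 0)            # memory pool size in MB, GPU id
config.switch_use_feed_fetch_ops(False)  # needed for the zero-copy tensors below
predictor = create_paddle_predictor(config)

# The export log reported inputs ['image', 'im_size']; feed one dummy frame.
# The 1x3x608x608 shape is an assumption about the YOLOv3 input resolution.
image = np.zeros((1, 3, 608, 608), dtype=np.float32)
im_size = np.array([[608, 608]], dtype=np.int32)
predictor.get_input_tensor("image").copy_from_cpu(image)
predictor.get_input_tensor("im_size").copy_from_cpu(im_size)

predictor.zero_copy_run()
out_name = predictor.get_output_names()[0]  # 'multiclass_nms_0.tmp_0' per the log
boxes = predictor.get_output_tensor(out_name).copy_to_cpu()
print(boxes.shape)

For a WebSocket pipeline, the same predictor would simply be called once per decoded camera frame in place of the dummy arrays above.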