PaddlePaddle / PaddleDetection

Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.

[2.4->2.5 migration] how to use infer.py after static folder is removed #7714

Open sfraczek opened 1 year ago

sfraczek commented 1 year ago

Hi. We are testing some models on our CI using 2.4 static/deploy/python/infer.py. Since static is removed in release 2.5, which script should we use now for testing both the QAT and FP32 versions of: faster_rcnn_r50_fpn_1x_coco, retinanet_r50_fpn_1x, yolov3_mobilenet, yolov3_darknet?

I tried replacing infer.py with deploy/python/infer.py, but it results in errors. Could you help? For example, I run this:

python3.10 deploy/python/infer.py --model_dir=../retinanet_r50_fpn_1x/ --enable_mkldnn=True --run_benchmark=True --image_file=demo/000000014439.jpg

Which results in this:

------------------------------------------
Traceback (most recent call last):
  File "/mnt/drive/PaddlePaddle/PaddleDetection/deploy/python/infer.py", line 1034, in <module>
    main()
  File "/mnt/drive/PaddlePaddle/PaddleDetection/deploy/python/infer.py", line 968, in main
    detector = eval(detector_func)(
  File "/mnt/drive/PaddlePaddle/PaddleDetection/deploy/python/infer.py", line 103, in __init__
    self.pred_config = self.set_config(model_dir)
  File "/mnt/drive/PaddlePaddle/PaddleDetection/deploy/python/infer.py", line 126, in set_config
    return PredictConfig(model_dir)
  File "/mnt/drive/PaddlePaddle/PaddleDetection/deploy/python/infer.py", line 741, in __init__
    self.use_dynamic_shape = yml_conf['use_dynamic_shape']
KeyError: 'use_dynamic_shape'

So I added use_dynamic_shape: false to infer_cfg.yml, and now I have

ValueError: Cannot find any inference model in dir: ../retinanet_r50_fpn_1x/,
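
For the record, the config edit was just appending the missing key that 2.5's PredictConfig reads, e.g.:

# one-line workaround, not a proper fix: add the key that deploy/python/infer.py expects
echo "use_dynamic_shape: false" >> ../retinanet_r50_fpn_1x/infer_cfg.yml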

Then I changed __model__ -> inference.pdmodel and __params__ -> inference.pdiparams and now I get

Traceback (most recent call last):
  File "/usr/lib/python3.10/pdb.py", line 1726, in main
    pdb._runscript(mainpyfile)
  File "/usr/lib/python3.10/pdb.py", line 1586, in _runscript
    self.run(statement)
  File "/usr/lib/python3.10/bdb.py", line 597, in run
    exec(cmd, globals, locals)
  File "<string>", line 1, in <module>
  File "/mnt/drive/PaddlePaddle/PaddleDetection/deploy/python/infer.py", line 1034, in <module>
    main()
  File "/mnt/drive/PaddlePaddle/PaddleDetection/deploy/python/infer.py", line 1002, in main
    detector.predict_image(
  File "/mnt/drive/PaddlePaddle/PaddleDetection/deploy/python/infer.py", line 362, in predict_image
    inputs = self.preprocess(batch_image_list)  # warmup
  File "/mnt/drive/PaddlePaddle/PaddleDetection/deploy/python/infer.py", line 133, in preprocess
    preprocess_ops.append(eval(op_type)(**new_op_info))
  File "<string>", line 1, in <module>
NameError: name 'Normalize' is not defined
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> <string>(1)<module>()

Is there any documentation for converting the model to work with 2.5?

wangxinxin08 commented 1 year ago

Maybe you can generate the inference dirs for these models under PaddleDetection 2.5; then you can avoid these problems. To generate an inference model, you just need to execute the following command:

python tools/export_model.py -c configs/model_name/model_config.yml -o weights=weight_path_or_url
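
For example, for one of the models you listed (the config path and weights URL below follow the usual model-zoo pattern in configs/faster_rcnn/README.md, so please double-check them against your branch):

# export faster_rcnn_r50_fpn_1x_coco, pulling the released weights directly by URL
python tools/export_model.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_fpn_1x_coco.pdparams

By default this writes inference.pdmodel, inference.pdiparams and infer_cfg.yml to output_inference/faster_rcnn_r50_fpn_1x_coco/.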

The last problem is caused by the Normalize op being renamed to NormalizeImage in PaddleDetection 2.5. To avoid such problems, it is recommended that you export the model from scratch.
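
You can confirm this by looking at the Preprocess ops in the old export's infer_cfg.yml (path taken from your command above):

# a 2.4 static-era export lists "type: Normalize"; a model re-exported under 2.5 lists "type: NormalizeImage"
grep -n "type: Normalize" ../retinanet_r50_fpn_1x/infer_cfg.yml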

sfraczek commented 1 year ago

Thank you very much. It seems it worked 🥳 for retinanet. Here are the steps I had to follow for future reference:

# download model from URL in configs/retinanet/README.md:
wget https://bj.bcebos.com/v1/paddledet/models/retinanet_r50_fpn_1x_coco.pdparams
python tools/export_model.py -c configs/retinanet/retinanet_r50_fpn_1x_coco.yml -o weights=retinanet_r50_fpn_1x_coco.pdparams
python deploy/python/infer.py --model_dir=output_inference/retinanet_r50_fpn_1x_coco/ --enable_mkldnn=True --run_benchmark=True --image_file=demo/000000014439.jpg

I will check with other models too.
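
For the other models, the same three steps should apply, e.g. for yolov3_darknet (config name and weights URL taken from configs/yolov3/README.md, so they may need adjusting for your branch):

wget https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml -o weights=yolov3_darknet53_270e_coco.pdparams
python deploy/python/infer.py --model_dir=output_inference/yolov3_darknet53_270e_coco/ --enable_mkldnn=True --run_benchmark=True --image_file=demo/000000014439.jpg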

sfraczek commented 1 year ago

This model is different from the old one. It's also 10 times slower, which I don't understand. I found old instructions for an older version of PaddleDetection, like this:

Retinanet_r50_fpn_1x fp32 analysis

Save static fp32 model

  1. Download the pretrained weights
    Open static/configs/retinanet_r50_fpn_1x.yml; you will find this link to download the pretrained weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_cos_pretrained.tar. Download it and save it to PaddleDetection/weights/ResNet50_cos_pretrained
  2. Save the static model
    python static/tools/export_model.py -c static/configs/retinanet_r50_fpn_1x.yml --output_dir=./inference_model -o weights=weights/ResNet50_cos_pretrained

Here is a side-by-side view of the models from 2.2 and 2.5: [image]
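
(For anyone reproducing the comparison: simply listing the two export directories shows the difference in format; paths are the ones produced by the commands above, assuming each export tool creates a subdirectory named after the config.)

# 2.2 static export: __model__ / __params__ style
ls -lh inference_model/retinanet_r50_fpn_1x/
# 2.5 export: inference.pdmodel / inference.pdiparams / infer_cfg.yml
ls -lh output_inference/retinanet_r50_fpn_1x_coco/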