Open sfraczek opened 1 year ago
You can regenerate the inference directories of these models under PaddleDetection 2.5 to avoid these problems. To generate the inference model, you just need to execute the following command:
python tools/export_model.py -c configs/model_name/model_config -o weights=weight_path_or_url
The last problem is caused by the Normalize
op being renamed to NormalizeImage
in PaddleDetection 2.5. To avoid such problems, it is recommended that you re-export the model from scratch.
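A quick way to confirm that a re-export succeeded is to check that the output directory contains the files a 2.5 export produces (the example path below is just the default `output_inference/<model_name>` layout, adjust as needed). A minimal sketch:

```python
from pathlib import Path

# Artifacts a PaddleDetection 2.5 export is expected to produce.
EXPECTED = ("inference.pdmodel", "inference.pdiparams", "infer_cfg.yml")

def missing_export_files(model_dir):
    """Return the expected export artifacts missing from model_dir."""
    model_dir = Path(model_dir)
    return [name for name in EXPECTED if not (model_dir / name).exists()]

if __name__ == "__main__":
    # Example path produced by tools/export_model.py (adjust to your model).
    missing = missing_export_files("output_inference/retinanet_r50_fpn_1x_coco")
    if missing:
        print("missing:", ", ".join(missing))
    else:
        print("export looks complete")
```

If any of the three files is missing, deploy/python/infer.py will not be able to load the model.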
Thank you very much. It seems it worked 🥳 for RetinaNet. Here are the steps I had to follow, for future reference:
# download model from URL in configs/retinanet/README.md:
wget https://bj.bcebos.com/v1/paddledet/models/retinanet_r50_fpn_1x_coco.pdparams
python tools/export_model.py -c configs/retinanet/retinanet_r50_fpn_1x_coco.yml -o weights=retinanet_r50_fpn_1x_coco.pdparams
python deploy/python/infer.py --model_dir=output_inference/retinanet_r50_fpn_1x_coco/ --enable_mkldnn=True --run_benchmark=True --image_file=demo/000000014439.jpg
I will check with other models too.
This model is different from the old one. It is also 10 times slower, which I don't understand. I found instructions for an older version of PaddleDetection, like this:
PaddleDetection/weights/ResNet50_cos_pretrained
python static/tools/export_model.py -c static/configs/retinanet_r50_fpn_1x.yml --output_dir=./inference_model -o weights=weights/ResNet50_cos_pretrained
Here is a side-by-side view of the models from 2.2 and 2.5:
Hi. We are testing some models on our CI using the 2.4 static/deploy/python/infer.py. Since static is removed in release 2.5, which app should we use now for testing both the QAT and FP32 versions of: faster_rcnn_r50_fpn_1x_coco, retinanet_r50_fpn_1x, yolov3_mobilenet, yolov3_darknet?
I tried replacing infer.py with deploy/python/infer.py, but it results in errors. Could you help? For example, I run this:
Which results in this:
So I added
use_dynamic_shape: false
to infer_cfg.yml, and now I have a different error. Then I renamed
__model__
-> inference.pdmodel
and
__params__
-> inference.pdiparams
and now I get yet another error. Is there documentation for converting a model to work with 2.5?
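For anyone hitting the same thing, the two manual fixes above can be scripted. A sketch, assuming a legacy static-graph export directory containing __model__, __params__, and an infer_cfg.yml; the new filenames are the ones PaddleDetection 2.5's deploy/python/infer.py looks for:

```python
from pathlib import Path

# Mapping from legacy static-graph filenames to the names that
# PaddleDetection 2.5's deploy/python/infer.py expects.
RENAMES = {"__model__": "inference.pdmodel", "__params__": "inference.pdiparams"}

def convert_legacy_dir(model_dir):
    """Rename legacy model files and force static input shapes in infer_cfg.yml."""
    model_dir = Path(model_dir)
    for old, new in RENAMES.items():
        src = model_dir / old
        if src.exists():
            src.rename(model_dir / new)
    cfg = model_dir / "infer_cfg.yml"
    text = cfg.read_text() if cfg.exists() else ""
    if "use_dynamic_shape" not in text:
        # Append the key naively rather than rewriting the whole YAML file.
        prefix = text.rstrip("\n") + "\n" if text else ""
        cfg.write_text(prefix + "use_dynamic_shape: false\n")
```

Run it once on the model directory before calling deploy/python/infer.py; since the key is appended naively, check the resulting YAML by hand. This only patches the file layout, though, so a clean re-export under 2.5 is still the safer route.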