Closed athulvingt closed 1 year ago
If it is only for testing purposes, just use it directly. If you want to deploy, you should export the model first and then deploy with deploy/python/infer.py or deploy/cpp.
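A minimal sketch of that export-then-deploy workflow, assuming the standard PaddleDetection repo layout; the exact flag names (e.g. `--use_gpu` vs. `--device`) vary between releases, so check them against your version's `--help`:

```shell
# Export the downloaded weights to an inference model
# (assumes you run this from the PaddleDetection repo root)
python tools/export_model.py \
    -c configs/ppyolo/ppyolo.yml \
    -o weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams \
    --output_dir=inference_model

# Run deployment inference on the exported model
# (output directory name under inference_model/ is an assumption)
python deploy/python/infer.py \
    --model_dir=inference_model/ppyolo \
    --image_file=demo/000000014439.jpg \
    --use_gpu=True
```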
I was following the instructions in QUICK_STARTED_cn.md:
python tools/infer.py -c configs/ppyolo/ppyolo.yml -o use_gpu=true weights=https://paddlemodels.bj.bcebos.com/object_detection/ppyolo.pdparams --infer_img=demo/000000014439.jpg
I downloaded the ppyolo model (weights and configuration file) from the model zoo and ran
python tools/infer.py -c ppyolo/ppyolo.yml -o use_gpu=true weights=ppyolo/ppyolo.pdparams --infer_img=demo/000000014439.jpg
but I got a file-not-found error: no file named ppyolo_reader.yml.
This is caused by the missing ppyolo_reader.yml. Please copy configs/ppyolo/ppyolo_reader.yml into your ppyolo folder.
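For example, assuming you downloaded the config and weights into a local ppyolo/ directory as in the command above:

```shell
# Copy the reader config next to the main config so tools/infer.py can resolve it
cp configs/ppyolo/ppyolo_reader.yml ppyolo/
```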
I downloaded PP-YOLO_2x ResNet50vd (input size 320) from the ppyolo model zoo and exported it. When running inference on a video, it used only 1 of my 16 CPU threads, and the throughput was very low (0.88 FPS). I would like faster inference. Am I doing something wrong?
You'd better use a GPU for inference. Welcome to use PaddleDetection v2.6!
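If you do stay on CPU, recent PaddleDetection releases expose threading and MKL-DNN options in the deployment script. Flag names differ across versions, so treat this as a sketch and verify against `python deploy/python/infer.py --help` (the `inference_model/ppyolo` path is a placeholder for your exported model directory):

```shell
# CPU inference with more threads and MKL-DNN enabled
# (flag names as in recent releases; verify with --help)
python deploy/python/infer.py \
    --model_dir=inference_model/ppyolo \
    --video_file=input.mp4 \
    --device=CPU \
    --cpu_threads=16 \
    --enable_mkldnn=True
```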
Do I have to export the models in the model zoo for deployment purposes, or can I use them as they are?