Linxuxin opened 1 year ago
@Linxuxin This is reasonable, because a low post-processing threshold results in a large number of output bboxes.
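To make the effect concrete, here is a minimal sketch (not mmyolo's actual post-processing code; the function name and random inputs are purely illustrative) of how the score threshold controls how many candidate boxes reach NMS, which is where most post-processing time goes:

```python
# Illustrative only: a low score threshold leaves far more candidate
# boxes for NMS, which dominates post-processing time.
import torch
from torchvision.ops import nms

def filter_and_nms(boxes, scores, score_thr, iou_thr=0.65):
    keep = scores > score_thr            # score filtering before NMS
    boxes, scores = boxes[keep], scores[keep]
    print(f'score_thr={score_thr}: {len(boxes)} boxes enter NMS')
    return nms(boxes, scores, iou_thr)

# Fake predictions: 25200 is the usual number of YOLOv5 640x640 candidates.
boxes = torch.rand(25200, 4) * 320
boxes[:, 2:] += boxes[:, :2]             # ensure x2 > x1 and y2 > y1
scores = torch.rand(25200) ** 10         # skewed toward 0, like background scores

filter_and_nms(boxes, scores, score_thr=0.001)  # low threshold -> many boxes
filter_and_nms(boxes, scores, score_thr=0.25)   # higher threshold -> far fewer
```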
Can you explain this further?
@hhaAndroid Thanks a lot for your reply. I checked my scripts and reused the following model and default config to test the runtime:
python ./demo/image_demo.py ./demo/demo.jpg ./configs/yolov5/yolov5_n-v61_syncbn_fast_8xb16-300e_coco.py ./model/yolov5_n-v61_syncbn_fast_8xb16-300e_coco_20220919_090739-b804c1ad.pth
time cost: 1114.0129566192627 ms
Based on the above model, I used the easydeploy tool to change the post-processing thresholds and tested again as follows:
- Convert to an ONNX file: python ./projects/easydeploy/tools/export.py ./configs/yolov5/yolov5_n-v61_syncbn_fast_8xb16-300e_coco.py ./model/yolov5_n-v61_syncbn_fast_8xb16-300e_coco_20220919_090739-b804c1ad.pth --work-dir work_dirs/yolov5_n-v61_syncbn_fast_8xb16-300e_coco --img-size 640 640 --batch 1 --device cuda:0 --simplify --opset 11 --backend 1 --pre-topk 10 --keep-topk 10 --iou-threshold 0.65 --score-threshold 0.25
- Run the model: python projects/easydeploy/tools/image-demo.py ./demo/demo.jpg ./configs/yolov5/yolov5_n-v61_syncbn_fast_8xb16-300e_coco.py work_dirs/yolov5_n-v61_syncbn_fast_8xb16-300e_coco/end2end.onnx --device cuda:0
  time cost: 49.04508590698242 ms
Here, the inference time is clearly improved. However, there is still a considerable gap compared with the https://github.com/ultralytics/yolov5 project: python ./detect.py --weights yolov5n.pt --img 640 --conf 0.25 --source data/images
time cost: 4.645109176635742 ms
Can you give me some explanation or advice?
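A note on the measurement itself (a hedged suggestion, not from the thread): a single call to the demo scripts also pays one-time costs such as model loading, CUDA context creation, and cuDNN autotuning, so comparing one-shot timings can be misleading. A minimal benchmarking sketch with warm-up and GPU synchronization, assuming a generic model(image) callable:

```python
# Hedged sketch of a fairer timing loop (not part of mmyolo): warm-up runs
# absorb one-time costs, and torch.cuda.synchronize() ensures all queued
# GPU work has finished before the clock is read.
import time
import torch

def benchmark(model, image, warmup=10, iters=100):
    with torch.no_grad():
        for _ in range(warmup):          # discard warm-up iterations
            model(image)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(image)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1000.0  # ms per image
```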
The yolov5 project uses a multithreaded mode to read images, but mmyolo does not.
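For reference, the idea can be sketched as follows (illustrative only, not ultralytics' or mmyolo's actual dataloader code): a background thread reads and decodes the next image while the GPU is still busy with the current one, so disk I/O and preprocessing largely overlap with inference.

```python
# Illustrative prefetching sketch: decode images on a background thread
# so inference on the current image overlaps with reading the next one.
import queue
import threading
import cv2

def _prefetch(paths, q):
    for p in paths:
        q.put(cv2.imread(p))     # read/decode off the main thread
    q.put(None)                  # sentinel: no more images

def run(paths, infer_fn):
    q = queue.Queue(maxsize=4)
    threading.Thread(target=_prefetch, args=(paths, q), daemon=True).start()
    while (img := q.get()) is not None:
        infer_fn(img)            # GPU work overlaps with the next read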
Prerequisite
🐞 Describe the bug
I ran ./demo/image_demo.py in the latest mmyolo project with the following config, model, and image:
config: ./configs/yolov5/voc/yolov5_n-v61_fast_1xb64-50e_voc.py
model: https://download.openmmlab.com/mmyolo/v0/yolov5/yolov5_n-v61_fast_1xb64-50e_voc/yolov5_n-v61_fast_1xb64-50e_voc_20221017_234254-f1493430.pth
image: ./demo/demo.jpg
and found that the inference time is slow after adding timing code to image_demo.py as follows:
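The exact timing snippet is not reproduced in the report; a hypothetical reconstruction (an assumption, not the author's actual code) would be a single time.time() pair around the inference call in image_demo.py, which also counts one-time warm-up costs in the measurement:

```python
# Hypothetical reconstruction of the timing code (the actual snippet was
# not shown): a one-shot measurement around the inference call.
import time
from mmdet.apis import inference_detector

start = time.time()
result = inference_detector(model, args.img)   # model/args are defined in image_demo.py
print(f'time cost: {(time.time() - start) * 1000} ms')
```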
script: python ./demo/image_demo.py ./demo/demo.jpg ./configs/yolov5/voc/yolov5_n-v61_fast_1xb64-50e_voc.py ./model/yolov5_n-v61_fast_1xb64-50e_voc_20221017_234254-f1493430.pth
The result is: time cost: 1094.2001342773438 ms
I compared with another yolov5 project (https://github.com/ultralytics/yolov5) in the same environment and ran detect.py with the following model and script:
model: https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt
script: python ./detect.py --weights yolov5n.pt --img 640 --conf 0.25 --source data/images
data/images contains only the demo.jpg provided in mmyolo/demo.
The result is: time cost: 4.645109176635742 ms
Please help me and point out where I went wrong.
Environment
sys.platform: linux
Python: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0,1: NVIDIA TITAN V
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.3, V11.3.109
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 1.12.1
PyTorch compiling details: PyTorch built with:
TorchVision: 0.13.1
OpenCV: 4.7.0
MMEngine: 0.7.2
MMCV: 2.0.0
MMDetection: 3.0.0
MMYOLO: 0.5.0+dc85144
Additional information
No response