open-mmlab / mmyolo

OpenMMLab YOLO series toolbox and benchmark. Implemented RTMDet, RTMDet-Rotated, YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOX, PPYOLOE, etc.
https://mmyolo.readthedocs.io/zh_CN/dev/
GNU General Public License v3.0

why the inference time on a single image is slow? #748

Open Linxuxin opened 1 year ago

Linxuxin commented 1 year ago

Prerequisite

🐞 Describe the bug

I ran ./demo/image_demo.py in the latest mmyolo project with the following config, model, and image:

config: ./configs/yolov5/voc/yolov5_n-v61_fast_1xb64-50e_voc.py
model: https://download.openmmlab.com/mmyolo/v0/yolov5/yolov5_n-v61_fast_1xb64-50e_voc/yolov5_n-v61_fast_1xb64-50e_voc_20221017_234254-f1493430.pth
image: ./demo/demo.jpg

and found that the inference time is slow after adding the following timing code to image_demo.py:

    import time  # make sure time is imported near the top of image_demo.py

    start_time = time.time()
    result = inference_detector(model, file)
    end_time = time.time()
    print("time cost:", (end_time - start_time) * 1000.0, "ms")

script:

    python ./demo/image_demo.py ./demo/demo.jpg ./configs/yolov5/voc/yolov5_n-v61_fast_1xb64-50e_voc.py ./model/yolov5_n-v61_fast_1xb64-50e_voc_20221017_234254-f1493430.pth

The result is: time cost: 1094.2001342773438 ms
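As an aside, a single timed call like this is dominated by one-time startup cost (CUDA context creation, cuDNN autotuning, lazy module initialization). A fairer measurement uses warm-up iterations and averages over many runs; the sketch below is framework-agnostic, and for GPU inference you would additionally call torch.cuda.synchronize() before each clock read, since CUDA kernels launch asynchronously:

```python
import time

def benchmark(fn, warmup=10, iters=50):
    """Return the mean wall-clock time of fn() in milliseconds.

    The warm-up iterations absorb one-time costs (CUDA context init,
    cuDNN autotuning) so they do not inflate the reported average.
    For GPU code, also call torch.cuda.synchronize() inside fn,
    because CUDA kernel launches are asynchronous.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) * 1000.0 / iters

# With mmyolo this would be used as, e.g.:
#   benchmark(lambda: inference_detector(model, file))
```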

I compared with another yolov5 project, https://github.com/ultralytics/yolov5, in the same environment and ran its detect.py with the following model and script:

model: https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt
script:

    python ./detect.py --weights yolov5n.pt --img 640 --conf 0.25 --source data/images

data/images only contains a demo.jpg provided in mmyolo/demo

The result is: time cost: 4.645109176635742 ms

Please help me and point out where I went wrong.

Environment

sys.platform: linux
Python: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0,1: NVIDIA TITAN V
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.3, V11.3.109
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 1.12.1
PyTorch compiling details: PyTorch built with:

TorchVision: 0.13.1
OpenCV: 4.7.0
MMEngine: 0.7.2
MMCV: 2.0.0
MMDetection: 3.0.0
MMYOLO: 0.5.0+dc85144

Additional information

No response

hhaAndroid commented 1 year ago

@Linxuxin This is expected: a low post-processing score threshold results in a large number of output bounding boxes, which slows down post-processing.
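For context, mmyolo keeps its post-processing thresholds in the model's test_cfg. A hedged sketch of raising them in a config override follows; the field names below match the common YOLOv5 configs in this repo, but should be verified against the config actually in use:

```python
# Hypothetical config-override fragment: raise the score threshold so
# far fewer boxes survive into NMS. Defaults in the benchmark configs
# are typically very low (e.g. score_thr=0.001) to maximize mAP.
model = dict(
    test_cfg=dict(
        score_thr=0.25,                           # keep only confident boxes
        nms=dict(type='nms', iou_threshold=0.65),  # NMS overlap threshold
        max_per_img=100))                          # cap kept detections
```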

Linxuxin commented 1 year ago

@hhaAndroid Thanks a lot for your reply. I checked my scripts and reused the following model and the default config to test the runtime:

    python ./demo/image_demo.py ./demo/demo.jpg ./configs/yolov5/yolov5_n-v61_syncbn_fast_8xb16-300e_coco.py ./model/yolov5_n-v61_syncbn_fast_8xb16-300e_coco_20220919_090739-b804c1ad.pth

time cost: 1114.0129566192627 ms

Based on the above model, I used the easydeploy tool to change the post-processing thresholds and tested again as follows:

  1. Convert to an ONNX file:

         python ./projects/easydeploy/tools/export.py ./configs/yolov5/yolov5_n-v61_syncbn_fast_8xb16-300e_coco.py ./model/yolov5_n-v61_syncbn_fast_8xb16-300e_coco_20220919_090739-b804c1ad.pth --work-dir work_dirs/yolov5_n-v61_syncbn_fast_8xb16-300e_coco --img-size 640 640 --batch 1 --device cuda:0 --simplify --opset 11 --backend 1 --pre-topk 10 --keep-topk 10 --iou-threshold 0.65 --score-threshold 0.25

  2. Run the model:

         python projects/easydeploy/tools/image-demo.py ./demo/demo.jpg ./configs/yolov5/yolov5_n-v61_syncbn_fast_8xb16-300e_coco.py work_dirs/yolov5_n-v61_syncbn_fast_8xb16-300e_coco/end2end.onnx --device cuda:0

     time cost: 49.04508590698242 ms

Here, the inference time is clearly improved. However, there is still a sizable gap compared to https://github.com/ultralytics/yolov5:

    python ./detect.py --weights yolov5n.pt --img 640 --conf 0.25 --source data/images

time cost: 4.645109176635742 ms

Can you give me an explanation or some advice?

KNzzzzzzzzzz commented 9 months ago

Can you explain now?


imangoa commented 7 months ago

The ultralytics yolov5 project reads images in a multithreaded mode, but mmyolo does not.
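That comment points at pipelining: overlapping image decoding (I/O) with inference (compute), rather than doing them serially as image_demo.py does. A minimal sketch of that producer/consumer pattern with Python's standard library (load_image and run_inference are hypothetical stand-ins for the real decode and inference calls):

```python
import queue
import threading

def pipelined_inference(paths, load_image, run_inference, prefetch=4):
    """Overlap image loading with inference on the main thread.

    A background thread decodes images into a bounded queue while the
    main thread consumes them, so loading the next image is hidden
    behind inference on the current one.
    """
    q = queue.Queue(maxsize=prefetch)
    _END = object()  # sentinel marking the end of the stream

    def producer():
        for p in paths:
            q.put((p, load_image(p)))
        q.put(_END)

    threading.Thread(target=producer, daemon=True).start()

    results = []
    while True:
        item = q.get()
        if item is _END:
            break
        path, img = item
        results.append((path, run_inference(img)))
    return results
```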