ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

yolov5 FPS boost #10496

Closed · ozicmoi closed this issue 1 year ago

ozicmoi commented 1 year ago

Search before asking

Question

Hello there. I trained YOLOv5s on my own custom dataset and built a license plate recognition system. I am serving it on the web with Python's Flask framework. With a USB camera I get 5 FPS, but when I run the project on an IP webcam stream or on MP4 files I only get 1 FPS. Can you help me increase the FPS?

Thank you @jkocherhans @adrianholovaty @cgerum @farleylai @glenn-jocher @Nioolek

Video of the project while it is running: https://vimeo.com/781019379

Additional

No response

JustasBart commented 1 year ago

Hi, I'm not sure what your setup/requirements are, etc...

But I always run my inference through C++ using OpenCV's dnn module built with CUDA and cuDNN. In my case I can easily keep up with a 50 FPS 1080p camera even with YOLOv5l6 at 1280x1280, so it's fast alright...

I'm using an NVIDIA Quadro RTX 4000.

Perhaps one option for you might be an NCS2? Again, it's hard to make a recommendation without knowing the full context...

Hope any of this helps you at all, good luck! :rocket:
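
For reference, a rough Python sketch of the same approach (OpenCV's dnn module with the CUDA backend). It assumes OpenCV was built with CUDA/cuDNN and that the weights were first exported to ONNX, e.g. python export.py --weights yolov5s.pt --include onnx; the file names below are placeholders:

import cv2

# Load a YOLOv5 ONNX export (placeholder file name) into OpenCV's dnn module
net = cv2.dnn.readNetFromONNX('yolov5s.onnx')

# Request the CUDA backend/target; OpenCV falls back to CPU if it was built without CUDA
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

img = cv2.imread('bus.jpg')  # placeholder image
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (640, 640), swapRB=True, crop=False)
net.setInput(blob)
pred = net.forward()  # raw predictions, e.g. shape (1, 25200, 85) for a 640x640 COCO model; NMS still needed
print(pred.shape)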

glenn-jocher commented 1 year ago

@ozicmoi 👋 Hello! Thanks for asking about inference speed issues. PyTorch Hub speeds will vary by hardware, software, model, inference settings, etc. Our default example in Colab with a V100 looks like this:

(screenshot: example Colab V100 inference speeds)

YOLOv5 🚀 can be run on CPU (i.e. --device cpu, slow) or GPU if available (i.e. --device 0, faster). You can determine your inference device by viewing the YOLOv5 console output:

detect.py inference

python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/
(screenshot: detect.py console output showing the inference device)
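
For example, the same command can be pinned to a device explicitly (assuming a CUDA-capable GPU is available for --device 0):

python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/ --device 0    # GPU 0
python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/ --device cpu  # CPU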

YOLOv5 PyTorch Hub inference

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
dir = 'https://ultralytics.com/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')]  # batch of images

# Inference
results = model(imgs)
results.print()  # or .show(), .save()
# Speed: 631.5ms pre-process, 19.2ms inference, 1.6ms NMS per image at shape (2, 3, 640, 640)

Increase Speeds

If you would like to increase your inference speed, some options are:

- Reduce the inference image size, e.g. --img 640 -> 320
- Use a smaller model, e.g. YOLOv5s -> YOLOv5n
- Use half-precision FP16 inference, e.g. python detect.py --half (CUDA devices only)
- Use batched inference with the PyTorch Hub model
- Export to an optimized format (e.g. ONNX, OpenVINO or TensorRT) with export.py
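
As a minimal sketch of the batched-inference and reduced-size options above (the image URLs are only examples; substitute your own weights and sources):

import torch

# Load the model once and reuse it across frames/requests
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Passing a list of sources runs a single batched forward pass
imgs = ['https://ultralytics.com/images/zidane.jpg',
        'https://ultralytics.com/images/bus.jpg']

# A smaller inference size trades some accuracy for speed
results = model(imgs, size=320)
results.print()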

Good luck 🍀 and let us know if you have any other questions!

github-actions[bot] commented 1 year ago

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!