ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

How to use in Real Time Detection with Raspberry pi #7207

Closed AykeeSalazar closed 2 years ago

AykeeSalazar commented 2 years ago

Question

The main problem occurs when exporting to .onnx or .tflite. The output should be the box coordinates, classes, and probabilities, but this is what we get from the exported files:

[[[8.82683601e-03 1.04521690e-02 1.46837346e-02 2.08171289e-02 5.84549234e-05 9.99996662e-01]
  [8.61773174e-03 8.73271376e-03 1.40674748e-02 3.37835997e-02 3.67832872e-05 9.99998748e-01]
  [9.02960170e-03 1.17815696e-02 1.79009605e-02 2.67236307e-02 2.24798368e-05 9.99998212e-01]
  ...
  [9.58094060e-01 9.68974948e-01 8.65654871e-02 7.52424821e-02 2.92385084e-05 9.99998629e-01]
  [9.59904492e-01 9.66234565e-01 1.00124203e-01 1.06151395e-01 1.20118702e-05 9.99998212e-01]
  [9.62235630e-01 9.67338860e-01 3.23688149e-01 2.58288354e-01 6.81636666e-05 9.99998271e-01]]]

We used OpenCV DNN for this. Can someone help us? We are finishing our thesis on real-time detection of public smoking. I trained a model on our data and the results work properly with detect.py, but after moving to OpenCV, the detect.py-style output is no longer produced. Again, the output should be the box coordinates, classes, and probabilities.

Additional

I found some related questions, but there are no definite answers and the other issues are already stale.

glenn-jocher commented 2 years ago

@AykeeSalazar I think you are referring to output before NMS. Raw model output will be something like 1x25200x85, where each 85-element vector is xywh, objectness, and 80 class confidences.

detect.py and PyTorch Hub include NMS in postprocessing to provide only the final detections.
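If you stay with OpenCV DNN, you have to do this postprocessing yourself. A minimal sketch of decoding one such raw output array is below. It assumes each row is [x, y, w, h, objectness, class scores...] as described above; the function name, thresholds, and the simple greedy NMS are illustrative, not the exact implementation used inside the repo (which uses non_max_suppression in utils/general.py):

```python
import numpy as np

def decode_yolov5_output(pred, conf_thres=0.25, iou_thres=0.45):
    """Decode a raw YOLOv5 output of shape (N, 5 + num_classes).

    Each row is [x, y, w, h, objectness, class scores...].  Returns rows of
    [x1, y1, x2, y2, confidence, class_id] after thresholding and a simple
    greedy NMS, sorted by confidence (descending).
    """
    obj = pred[:, 4]
    cls_scores = pred[:, 5:]
    cls_id = cls_scores.argmax(axis=1)
    conf = obj * cls_scores.max(axis=1)      # final confidence = obj * class score
    keep = conf > conf_thres
    boxes_xywh, conf, cls_id = pred[keep, :4], conf[keep], cls_id[keep]

    # xywh (center format) -> xyxy corner format
    xy, wh = boxes_xywh[:, :2], boxes_xywh[:, 2:4]
    boxes = np.concatenate([xy - wh / 2, xy + wh / 2], axis=1)

    # Greedy NMS: keep the highest-confidence box, drop overlapping ones
    order = conf.argsort()[::-1]
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou < iou_thres]

    kept = np.array(kept, dtype=int)
    return np.concatenate([boxes[kept], conf[kept, None], cls_id[kept, None]], axis=1)
```

Note the coordinates come out in whatever scale the export produced (your printout suggests normalized 0-1 values), so you would still multiply by the original image width/height before drawing. cv2.dnn.NMSBoxes can replace the hand-rolled NMS loop if you prefer.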

Simple Inference Example

This example loads a pretrained YOLOv5s model from PyTorch Hub as model and passes an image for inference. 'yolov5s' is the YOLOv5 'small' model. For details on all available models please see the README. Custom models can also be loaded, including custom trained PyTorch models and their exported variants, e.g. ONNX, TensorRT, TensorFlow, and OpenVINO YOLOv5 models.

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # or yolov5m, yolov5l, yolov5x, etc.
# model = torch.hub.load('ultralytics/yolov5', 'custom', 'path/to/best.pt')  # custom trained model

# Images
im = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, URL, PIL, OpenCV, numpy, list

# Inference
results = model(im)

# Results
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.

results.xyxy[0]  # im predictions (tensor)
results.pandas().xyxy[0]  # im predictions (pandas)
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

See YOLOv5 PyTorch Hub Tutorial for details.

Good luck 🍀 and let us know if you have any other questions!

AykeeSalazar commented 2 years ago


It worked! But I think it does not detect as well as the pretrained model.

Pre-trained: [image]

Our custom: [image]

github-actions[bot] commented 2 years ago

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!