Closed elecani closed 2 months ago
👋 Hello @elecani, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!
Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.
Check out our YOLOv8 Docs for details and get started with:
pip install ultralytics
@elecani hello,
Thank you for reaching out! If you want to limit the detection to specific classes in YOLOv5, you can achieve this by modifying the --classes argument when running inference. This argument allows you to specify which classes you want the model to detect.
Here's an example of how you can do this:
python detect.py --source path/to/your/images --weights path/to/your/weights --classes 0 1 2
In this example, --classes 0 1 2 will limit the detection to classes 0, 1, and 2. You can adjust the class indices based on the classes you are interested in.
If you are using the YOLOv5 API in a Python script, you can set the classes parameter in the detect function like this:
```python
from yolov5 import YOLOv5  # requires the third-party `yolov5` pip package, not torch.hub

# Initialize YOLOv5 model
model = YOLOv5("path/to/your/weights")

# Perform inference with class limitation
results = model.detect("path/to/your/images", classes=[0, 1, 2])
```
This will limit the detection to the specified classes.
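For intuition, the --classes filter amounts to dropping every detection whose class index is outside the allowed set. A minimal pure-Python sketch of that filtering step (the detections below are mock values, not real model output):

```python
# Mock detections in YOLOv5's [x1, y1, x2, y2, conf, cls] row format
detections = [
    [10.0, 10.0, 50.0, 50.0, 0.90, 0],  # class 0 - kept
    [20.0, 20.0, 60.0, 60.0, 0.80, 3],  # class 3 - dropped
    [30.0, 30.0, 70.0, 70.0, 0.70, 1],  # class 1 - kept
]

allowed_classes = {0, 1, 2}  # same role as `--classes 0 1 2`
filtered = [d for d in detections if d[5] in allowed_classes]
print([d[5] for d in filtered])  # → [0, 1]
```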
If you have any further questions or need additional assistance, feel free to ask. The YOLO community and the Ultralytics team are here to help! 😊
Thanks a lot, this is gonna be helpful
I have another question. I want to know the instructions to get and print the coordinates of x and y of the bounding boxes.
Hi, I'm just a lame stranger, not a member of the YOLO support team like Glenn, but my code may be useful:
```python
import os
from os import path

from ultralytics import YOLO  # Ultralytics (YOLOv8-style) API

image_folder = "/test/images"
label_folder = "/test/labels"
bbox_detector = YOLO("agi_models_storage/yolov32/ep_80_bs_32_pentagon_true/aliens_det/weights/best.pt")

image_files = [f for f in os.listdir(image_folder)]
for img_name in image_files:
    results = bbox_detector(path.join(image_folder, img_name), verbose=False)
    for r in results:
        boxes = r.boxes.xyxy.cpu().numpy()  # .cpu() is only needed if the model runs on GPU; xywh and other formats also exist
        for b in boxes:
            print(b)
```
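Since the snippet above mentions the xywh option alongside xyxy: the two formats are related by simple arithmetic, sketched below (a standalone helper, not part of the Ultralytics API):

```python
def xyxy_to_xywh(x1, y1, x2, y2):
    """Convert corner coordinates to (center_x, center_y, width, height)."""
    return ((x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1)

print(xyxy_to_xywh(10, 20, 50, 60))  # → (30.0, 40.0, 40, 40)
```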
Hello @satyrmipt,
Thank you for your kind words! I'm glad to hear that the previous information was helpful. Regarding your new question on obtaining and printing the coordinates of the bounding boxes, I'd be happy to assist.
To extract and print the coordinates of the bounding boxes using YOLOv5, you can utilize the results.xyxy attribute, which provides the bounding box coordinates in the format [x1, y1, x2, y2]. Here's an example of how you can achieve this:
```python
import torch

# Load the YOLOv5 model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Perform inference
img = 'https://ultralytics.com/images/zidane.jpg'
results = model(img)

# Extract and print bounding box coordinates
for result in results.xyxy[0]:  # results.xyxy[0] contains the bounding boxes for the first image
    x1, y1, x2, y2, conf, cls = result
    print(f"Bounding Box: x1={x1}, y1={y1}, x2={x2}, y2={y2}, Confidence: {conf}, Class: {cls}")
```
- torch.hub.load loads the YOLOv5 model.
- The results object contains the inference results.
- The results.xyxy[0] attribute contains the bounding box coordinates for the first image. Each bounding box is represented by [x1, y1, x2, y2, conf, cls], where x1, y1 are the top-left coordinates, x2, y2 are the bottom-right coordinates, conf is the confidence score, and cls is the class index.

Feel free to adapt this code to your specific use case. If you encounter any issues or have further questions, please don't hesitate to ask. The YOLO community and the Ultralytics team are always here to help! 😊
For more detailed information, you can also refer to the YOLOv5 Quickstart Tutorial.
Happy coding and detecting! 🚀
I've tried this code, "results = model.detect("path/to/your/images", classes=[0, 1, 2])", using an RPI4 and a webcam in real time, but it didn't work for me. I guess the variable names in the computer version of YOLO are not the same as in the Raspberry Pi version. Can you please tell me what I should do in the RPI version to print the x and y coordinates? I'm so sorry for not mentioning that I'm using an RPI4.
Hello @elecani,
Thank you for your message and for providing details about your setup. No worries about mentioning the Raspberry Pi 4 (RPI4) later; we're here to help!
To assist you better, could you please provide a minimum reproducible code example? This will help us understand the issue more clearly and offer a precise solution. You can refer to our guide on creating a minimum reproducible example here: Minimum Reproducible Example. It's crucial for us to reproduce the bug before we can investigate a solution.
Additionally, please ensure you are using the latest versions of torch and YOLOv5 from the Ultralytics GitHub repository. If you haven't updated recently, please do so and try running your code again.
For real-time inference on an RPI4 using a webcam, you might need to adapt the code slightly. Here's an example of how you can capture frames from a webcam and print the bounding box coordinates:
```python
import torch
import cv2

# Load the YOLOv5 model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Initialize webcam
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Perform inference
    results = model(frame)

    # Extract and print bounding box coordinates
    for result in results.xyxy[0]:  # results.xyxy[0] contains the bounding boxes for the current frame
        x1, y1, x2, y2, conf, cls = result
        print(f"Bounding Box: x1={x1}, y1={y1}, x2={x2}, y2={y2}, Confidence: {conf}, Class: {cls}")

    # Display the frame with bounding boxes (optional)
    results.show()

    # Break loop on 'q' key press
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
- torch.hub.load loads the YOLOv5 model.
- cv2.VideoCapture(0) initializes the webcam.
- The results object contains the inference results for each frame.
- The results.xyxy[0] attribute contains the bounding box coordinates for the current frame.
- results.show() displays the frame with bounding boxes (optional).

Please try this code on your RPI4 and let us know if it resolves your issue. If you encounter any problems or have further questions, feel free to share more details, and we'll be happy to assist you further.
Thank you for your patience and cooperation. Happy detecting! 😊
Thank you very much for your kindness I really appreciate it.
What I'm trying to do is control a servomotor that carries a camera, so that the camera captures video with a specific object kept centered in the frame. For example, if the object is on the left, the servomotor will turn to the left until the object is centered. So I guess what I have to do is get the maximum x and y, get the coordinates of the bounding box, and finally add conditions to the code to center the object in the middle. What do you think about this?
I've tried the code you sent me and it worked, but it captured separate images approximately every 2 seconds. I will take the instructions from this code and add them in their appropriate place in the detect.py file, along with the hardware code, or else write the hardware code separately and import the detect.py file into it.
Hello @elecani,
Thank you for your detailed explanation and for your kind words! 😊 Your project sounds fascinating, and it's great to hear that the provided code worked for you. Let's work together to refine your approach and ensure smooth real-time object tracking with your servomotor and camera setup.
To achieve real-time object tracking and control your servomotor based on the object's position, you'll need to continuously capture frames from the camera, perform inference, and adjust the servomotor accordingly. Here's a more integrated approach to help you achieve this:
```python
import torch
import cv2
import time

# Load the YOLOv5 model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Initialize webcam
cap = cv2.VideoCapture(0)

# Function to control servomotor (placeholder, replace with actual control code)
def control_servomotor(x_center, frame_center):
    if x_center < frame_center - 20:  # Adjust threshold as needed
        print("Turn left")
        # Add your servomotor control code here
    elif x_center > frame_center + 20:  # Adjust threshold as needed
        print("Turn right")
        # Add your servomotor control code here
    else:
        print("Centered")
        # Add your servomotor control code here

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Perform inference
    results = model(frame)

    # Extract bounding box coordinates
    for result in results.xyxy[0]:  # results.xyxy[0] contains the bounding boxes for the current frame
        x1, y1, x2, y2, conf, cls = result
        x_center = (x1 + x2) / 2
        y_center = (y1 + y2) / 2
        print(f"Bounding Box Center: x={x_center}, y={y_center}")

        # Control servomotor based on object's x_center
        frame_center = frame.shape[1] / 2
        control_servomotor(x_center, frame_center)

    # Display the frame with bounding boxes (optional)
    results.show()

    # Break loop on 'q' key press
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
- torch.hub.load loads the YOLOv5 model.
- cv2.VideoCapture(0) initializes the webcam.
- The results object contains the inference results for each frame.
- The results.xyxy[0] attribute contains the bounding box coordinates for the current frame.
- The control_servomotor function adjusts the servomotor based on the object's position relative to the frame center.
- results.show() displays the frame with bounding boxes (optional).
- Replace the placeholder control_servomotor function with your actual servomotor control code.

If you encounter any issues or have further questions, please provide a minimum reproducible code example as mentioned earlier. This will help us understand the problem better and offer a precise solution. You can refer to our guide on creating a minimum reproducible example here: Minimum Reproducible Example.
Thank you for your patience and cooperation. The YOLO community and the Ultralytics team are always here to support you. Happy coding and tracking! 🚀
Hello,
I'm sorry for the late reply. It works well, but it is still capturing images every 3 seconds instead of streaming. Also, in this instruction, "if x_center < frame_center - 20:", what does 20 mean, and what does adjusting this value depend on?
Hello @elecani,
Thank you for your patience and for providing more details about your issue. I'm glad to hear that the code is working, but I understand that you want to achieve smoother real-time streaming.
The delay in capturing images every 3 seconds is likely due to the processing load on your RPI4. Here are a few suggestions to improve the frame rate:

- Use a lighter model, such as yolov5n (the nano variant), which is faster and more suitable for edge devices like the RPI4.
- Lower the webcam capture resolution so each frame takes less time to process.

Here's an updated version of the code with these optimizations:
```python
import torch
import cv2
import time

# Load a lighter YOLOv5 model
model = torch.hub.load('ultralytics/yolov5', 'yolov5n')

# Initialize webcam with lower resolution
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

# Function to control servomotor (placeholder, replace with actual control code)
def control_servomotor(x_center, frame_center):
    if x_center < frame_center - 20:  # Adjust threshold as needed
        print("Turn left")
        # Add your servomotor control code here
    elif x_center > frame_center + 20:  # Adjust threshold as needed
        print("Turn right")
        # Add your servomotor control code here
    else:
        print("Centered")
        # Add your servomotor control code here

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Perform inference
    results = model(frame)

    # Extract bounding box coordinates
    for result in results.xyxy[0]:  # results.xyxy[0] contains the bounding boxes for the current frame
        x1, y1, x2, y2, conf, cls = result
        x_center = (x1 + x2) / 2
        y_center = (y1 + y2) / 2
        print(f"Bounding Box Center: x={x_center}, y={y_center}")

        # Control servomotor based on object's x_center
        frame_center = frame.shape[1] / 2
        control_servomotor(x_center, frame_center)

    # Display the frame with bounding boxes (optional)
    results.show()

    # Break loop on 'q' key press
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
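To verify whether such optimizations actually improve throughput, you can time the loop yourself. A minimal, framework-independent sketch of FPS measurement (the dummy workload below stands in for real inference):

```python
import time

def measure_fps(process_frame, n_frames=50):
    """Run process_frame n_frames times and return the average frames per second."""
    start = time.perf_counter()
    for _ in range(n_frames):
        process_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Example with a dummy workload standing in for model(frame)
fps = measure_fps(lambda: sum(range(10000)))
print(f"{fps:.1f} FPS")
```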
Regarding the line if x_center < frame_center - 20:, the value 20 is a threshold, in pixels, that determines how sensitive the servomotor control is to the object's position. This value can be adjusted based on your specific requirements:

- A smaller threshold makes the servomotor react to smaller deviations from the frame center, giving tighter centering but more frequent (possibly jittery) movements.
- A larger threshold tolerates larger deviations before moving, giving smoother but less precise centering.

You can experiment with different values to find the optimal threshold for your setup.
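If a fixed threshold feels too coarse, a proportional step is a common refinement: move the servo by an amount proportional to how far the object is from center, with a deadband inside which nothing moves. A standalone sketch (the gain and deadband values are illustrative, and servo_step is a hypothetical helper, not YOLOv5 code):

```python
def servo_step(x_center, frame_center, deadband=20, gain=0.1):
    """Return a signed angle step: 0 inside the deadband, proportional to the error outside it."""
    error = x_center - frame_center
    if abs(error) <= deadband:
        return 0.0
    return gain * error

print(servo_step(100, 160))  # object well left of center → negative step
print(servo_step(165, 160))  # within the deadband → 0.0
```

Negative return values mean "turn left", positive mean "turn right"; the gain scales how aggressively the servo chases the object.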
If you continue to experience issues or have further questions, please provide a minimum reproducible code example as mentioned earlier. This will help us understand the problem better and offer a precise solution. You can refer to our guide on creating a minimum reproducible example here: Minimum Reproducible Example.
Thank you for your patience and cooperation. The YOLO community and the Ultralytics team are always here to support you. Happy coding and tracking! 🚀
Good morning @glenn-jocher,
I hope you're doing great. I'm sorry for my late reply; I was stuck with my dissertation.
I just want to tell you that it worked, and I edited it and implemented it on my RPI4 to control the robot's movement.
Thank you so much for your kindness, I really appreciate it.
Good luck.
Good morning @elecani,
Thank you for your kind words and for updating us on your progress! 😊 I'm delighted to hear that the solution worked for you and that you've successfully implemented it on your RPI4 to control your robot's movement. Your project sounds incredibly exciting!
If you encounter any further questions or need additional assistance as you continue to develop your project, please don't hesitate to reach out. The YOLO community and the Ultralytics team are always here to support you.
Best of luck with your dissertation and your robotics project! 🚀
Warm regards,
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
Hi,
I want to know the instruction that can limit the detection of class in YOLOv5.
Thank you.
Additional
No response