PaperGoat1210 closed this issue 1 year ago.
👋 Hello @PaperGoat1210, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!
Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.
Check out our YOLOv8 Docs for details and get started with:
pip install ultralytics
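As a quick illustration, here is a minimal sketch of running a YOLOv8 model from Python once the package is installed (the yolov8n.pt weights and the image path are just example placeholders):

from ultralytics import YOLO

# Load a pretrained YOLOv8 model (downloaded automatically if not present)
model = YOLO("yolov8n.pt")

# Run inference on an image and print the detected class names
results = model("path/to/image.jpg")
for r in results:
    for box in r.boxes:
        print(model.names[int(box.cls)])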
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
@PaperGoat1210 hi there! That's an interesting project you're working on. To add voice feedback to YOLOv5's detect.py, you can modify the file to include text-to-speech functionality using libraries like pyttsx3 or gTTS for Python. Once an object is detected, you can trigger the voice feedback to announce the result. Hope this helps! Good luck with your project 🎤🔊
If I modify that detect.py file, do I need to modify the other files inside the YOLOv5 folder? There is an error when I modify the code inside detect.py.
Hello @Killuagg,
Thank you for your question! To add voice feedback to the detect.py file, you typically only need to modify this specific file. However, if you're encountering errors, it might be due to dependencies or interactions with other parts of the code.
Here are a few steps to help you troubleshoot and ensure your modifications work smoothly:
Ensure Compatibility: Make sure you are using the latest versions of torch and YOLOv5 from the Ultralytics repository. This ensures you have the latest features and bug fixes.
Isolate the Issue: If you encounter an error, try to isolate it by running a minimal version of your modified code. This helps identify if the issue is within your modifications or elsewhere.
Dependencies: If you're using a text-to-speech library like pyttsx3 or gTTS, ensure these libraries are correctly installed and imported in your detect.py file.
Error Details: If you continue to face issues, please provide the specific error message and a minimum reproducible code example. This will help us understand the problem better and provide a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details.
Here's a basic example of how you might integrate pyttsx3 for voice feedback in detect.py:
import pyttsx3

# Initialize the TTS engine
engine = pyttsx3.init()

# Your existing detection code...
for det in detections:  # Assuming 'detections' is your list of detected objects
    if det:
        # Extract the label of the detected object
        label = det['label']
        # Generate voice feedback
        engine.say(f"Detected {label}")
        engine.runAndWait()
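If pyttsx3 gives you trouble on some platforms, gTTS is another option mentioned above. Here is a rough sketch only; it requires an internet connection, and the playback step assumes an MP3 player such as mpg123 is available on your system (swap in whatever player you use). The file name feedback.mp3 is just an example:

from gtts import gTTS
import subprocess

def speak(text):
    # Synthesize the text to an MP3 file (gTTS uses Google's online TTS service)
    tts = gTTS(text=text, lang="en")
    tts.save("feedback.mp3")
    # Play the file with an external player; replace mpg123 with your player of choice
    subprocess.run(["mpg123", "-q", "feedback.mp3"])

# Example usage inside your detection loop:
# speak(f"Detected {label}")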
Feel free to adjust the code to fit your specific use case. If you need further assistance, please share more details about the error you're encountering.
Best of luck with your project, and thank you for being a part of the YOLO community! 😊
The code runs, but the voice feedback is not produced when my traffic sign model detects something.
TestVoice.py:

import pyttsx3

engine = pyttsx3.init()
engine.say("I will speak this")
engine.runAndWait()
I ran only that code to determine whether my Raspberry Pi can produce voice or not. The code runs and produces voice... Why does the code inside detect.py not produce any voice output?
Hello @Killuagg,
Thank you for reaching out and sharing the details of your issue. It's great to hear that your Raspberry Pi can produce voice output with your TestVoice.py script. Let's work together to integrate this functionality into detect.py.
First, let's ensure that your detect.py modifications are correctly placed and that the text-to-speech (TTS) engine is being called when an object is detected. Here's a step-by-step guide to help you troubleshoot:
Verify Placement: Ensure that the TTS code is placed within the detection loop where objects are identified. This ensures that the voice feedback is triggered upon detection.
Check for Conditions: Make sure the conditions for triggering the TTS are met. For example, if you have specific conditions for traffic signs, ensure they are correctly implemented.
Debugging: Add print statements to verify that the detection code reaches the TTS call. This helps confirm that the code flow is correct.
Here's an example of how you might integrate the TTS code into detect.py:
import pyttsx3
import torch

# Initialize the TTS engine
engine = pyttsx3.init()

# Your existing YOLOv5 detection code...
# Assuming 'detections' is your list of detected objects
for det in detections:
    if det:
        # Extract the label of the detected object
        label = det['label']
        print(f"Detected: {label}")  # Debugging print statement
        # Generate voice feedback
        engine.say(f"Detected {label}")
        engine.runAndWait()
Ensure Latest Versions: Please verify that you are using the latest versions of torch and the YOLOv5 repository from Ultralytics. This ensures compatibility and includes the latest features and bug fixes.
Minimum Reproducible Example: If the issue persists, could you provide a minimum reproducible code example? This will help us better understand the problem and provide a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details.
By following these steps, we should be able to identify the issue and get your voice feedback working with YOLOv5. If you have any further questions or need additional assistance, please feel free to ask. We're here to help!
The problem has been solved by adding more code, which is:
detections = []
for *xyxy, conf, cls in reversed(det):
    detections.append({'label': names[int(cls)]})
I also deleted the if det: line.
Thank you for helping me
Hello @Killuagg,
Thank you for the update! I'm glad to hear that you've resolved the issue by adding the necessary code to handle detections and removing the conditional check.
Your solution of appending detected objects to the detections list and ensuring the correct labels are assigned is a great approach. This will indeed ensure that the voice feedback is triggered appropriately for each detected object.
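For anyone finding this thread later, here is a rough sketch of how these pieces can sit together inside detect.py's per-image prediction loop. The det and names variables follow the structure of the stock detect.py, as in your snippet; treat this as an illustration rather than a drop-in patch:

import pyttsx3

engine = pyttsx3.init()  # initialize the TTS engine once, outside the loop

# ... inside detect.py, after non-max suppression, for each image's predictions `det`:
detections = []
for *xyxy, conf, cls in reversed(det):
    detections.append({'label': names[int(cls)]})

# Announce every label collected for this frame
for d in detections:
    engine.say(f"Detected {d['label']}")
if detections:
    engine.runAndWait()  # blocks until speech finishes, so detection pauses briefly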
If you have any further questions or need additional assistance, feel free to reach out. The YOLO community and the Ultralytics team are always here to help. Keep up the great work on your project! 😊
Search before asking
Question
Hi. I am currently trying to make a system that will give voice feedback after detecting an object. I still could not find the exact way to modify the detect.py file to make it have voice feedback.
Does anyone know how to modify the detect.py file of YOLOv5 to make it produce voice feedback every time it detects an object? Much appreciated.
Additional
No response