Closed MohamedKHALILRouissi closed 4 months ago
👋 Hello @MohamedKHALILRouissi, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Pip install the ultralytics package, including all requirements, in a Python>=3.8 environment with PyTorch>=1.8.
pip install ultralytics
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Are the boxes bounding boxes, rather than zone boxes, for detection?
@MohamedKHALILRouissi absolutely, happy to clarify the distinction for you! 😊 In the context of YOLOv8 and computer vision, "bounding boxes" typically refer to the boxes that are directly output by the object detection model, wrapping around detected objects. On the other hand, "zone boxes" or areas you define for monitoring specific activities (like the ones you mentioned for triggering alerts when an object enters a zone) are not a direct output of the model but are instead implemented at the application level.
You can use the coordinates of the detected bounding boxes (from YOLO) and check if they intersect with your predefined "zone boxes". This allows you to generate unique alerts based on your specific conditions. Here's a quick pseudo-code snippet to give you an idea:
detected_boxes = yolo_detection(image)  # returns detected bounding boxes from YOLO
zone_boxes = define_your_zones()  # define your zones somewhere in your code

for detected_box in detected_boxes:
    for zone_box in zone_boxes:
        if intersects(detected_box, zone_box):  # 'intersects' is a function you define to check for overlap
            trigger_alert(zone_box.id, detected_box.cls)  # custom function to handle alerts ('class' is a reserved word in Python, so 'cls' is used here)
This way, each "zone box" can have a unique ID and be associated with specific classes of objects as detected by YOLOv8, allowing for sophisticated and custom alerting mechanisms. Hope this helps clear things up! 🚀
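To make the snippet above concrete, here is one possible `intersects` implementation: a standard axis-aligned rectangle overlap test, assuming each box is given as an `(x1, y1, x2, y2)` tuple with `x1 < x2` and `y1 < y2` (the tuple format is an assumption; adapt it to however your boxes are stored):

```python
def intersects(box_a, box_b):
    """Return True if two axis-aligned boxes (x1, y1, x2, y2) overlap."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Two rectangles overlap unless one lies entirely to the left of,
    # right of, above, or below the other.
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2
```

Note that boxes that only touch at an edge are not counted as intersecting here; use `<=` instead of `<` if you want edge contact to trigger an alert.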
Thank you for your prompt response, and I apologize for any inconvenience caused. Additionally, I would like to inquire about the implementation of the define_your_zones() function. I'm struggling to grasp how to draw the boxes, whether from the frontend or backend, and how to incorporate the ability to draw multiple boxes in the frame, each with a distinct identifier.
@MohamedKHALILRouissi no worries at all! 😊 I'm happy to help. Drawing multiple "zone boxes" and giving them unique identifiers can be accomplished in a few steps. If you're working with a web-based frontend, you might want to handle the drawing aspect there, as it offers more interactivity with the user.
Here's a simplified approach:
Here's an example structure for the JSON data representing a zone box:
{
  "zones": [
    {
      "id": "zone1",
      "coordinates": {"x1": 100, "y1": 150, "x2": 200, "y2": 300},
      "alertClass": "person"
    },
    {
      "id": "zone2",
      "coordinates": {"x1": 300, "y1": 100, "x2": 400, "y2": 250},
      "alertClass": "vehicle"
    }
  ]
}
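With that JSON layout, a minimal backend-side `define_your_zones()` could simply parse the file the frontend saved. This is a sketch assuming the structure above and a hypothetical `zones.json` file path:

```python
import json

def define_your_zones(path="zones.json"):
    """Load zone definitions from a JSON file written by the frontend.

    The file is expected to follow the {"zones": [...]} structure shown above;
    the "zones.json" default path is just an example.
    """
    with open(path) as f:
        data = json.load(f)
    return data["zones"]
```

Each returned zone dict carries its own `id` and `alertClass`, so the rest of your code can match detections to a specific zone.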
And, an example of a function to check if a detected object intersects with any zones (pseudo-code):
def check_zones(detected_objects, zones):
    for obj in detected_objects:
        for zone in zones:
            if intersects(obj['bbox'], zone['coordinates']):
                print(f"Alert for {zone['alertClass']} in {zone['id']}")
This process allows users to create and edit multiple zone boxes dynamically, and have those zones considered in your object detection logic. I hope this gives you a clearer path forward!
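Putting the pieces together, here is a runnable sketch of the zone check. The detection dicts (`bbox`, `class` keys) are a made-up stand-in for whatever you extract from the YOLOv8 results in your pipeline; the zone dicts follow the JSON structure shown earlier:

```python
def intersects(a, b):
    """Axis-aligned overlap test for (x1, y1, x2, y2) boxes."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def check_zones(detected_objects, zones):
    """Return (zone_id, class_name) pairs for detections inside matching zones."""
    alerts = []
    for obj in detected_objects:
        for zone in zones:
            c = zone["coordinates"]
            zone_box = (c["x1"], c["y1"], c["x2"], c["y2"])
            # Only alert when both the class matches and the boxes overlap
            if obj["class"] == zone["alertClass"] and intersects(obj["bbox"], zone_box):
                alerts.append((zone["id"], obj["class"]))
    return alerts
```

Returning a list instead of printing makes it easy to hand the alerts to whatever notification mechanism you choose.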
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
I need to process a real-time video stream and apply object detection. Additionally, I want to integrate the following features and would appreciate guidance on their implementation:
Thank you for your assistance! I'm somewhat stuck on how to draw multiple boxes in the frame and associate each one with a unique ID, so that every detection returned by YOLO can be matched to a specific box.
Additional
No response