ultralytics / ultralytics

NEW - YOLOv8 πŸš€ in PyTorch > ONNX > OpenVINO > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

How to be able to count objects regardless of line when yolov8 objects are detected #11854

Closed noinsung closed 3 months ago

noinsung commented 4 months ago

Search before asking

Question

Hello.

I'm trying to modify the function so that, instead of the YOLOv8 object counter counting objects in and out each time they cross the line, it counts objects every time a certain class is detected anywhere in the frame.

For example, I would like to write code that detects specific classes, and whenever an object such as a car or bus is detected in a video or webcam stream, displays the totals in the upper right corner like this: car:1 bus:1.

How do I modify the object_counter.py code to do this?

The code below is what I am currently using.

========================================================

from ultralytics import YOLO
from ultralytics.solutions import object_counter
import cv2

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("C:/Users/user/ultralytics-main/4K Video of Highway Traffic!.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Line or region points
line_points = [(0, 720), (1280, 720), (1280, 0), (0, 0)]
# Alternative region points: [(40, 500), (1220, 500), (1080, 360), (100, 360)]

classes_to_count = [0, 2, 3, 5, 7]  # person, car, motorcycle, bus, truck classes for count

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

# Init Object Counter
counter = object_counter.ObjectCounter()
counter.set_args(view_img=True, reg_pts=line_points, classes_names=model.names, draw_tracks=True, line_thickness=2)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    tracks = model.track(im0, persist=True, show=False, classes=classes_to_count)

    im0 = counter.start_counting(im0, tracks)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()

Additional

No response

github-actions[bot] commented 4 months ago

πŸ‘‹ Hello @noinsung, thank you for your interest in Ultralytics YOLOv8 πŸš€! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a πŸ› Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 4 months ago

Hello!

To modify your existing object_counter.py code so that it counts the total occurrences of specific object classes across a video or webcam stream, you can make a few adjustments. Essentially, you'll want to maintain the counts outside the frame loop and display them continuously.

Here's a simplified way to adapt your code:

from ultralytics import YOLO
import cv2

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path_to_video.mp4")
assert cap.isOpened(), "Error reading video file"

object_counts = {}  # Dictionary to maintain counts of each class
classes_to_count = [2, 5]  # Assuming '2' = car, '5' = bus for example

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model.predict(frame)
    for result in results:
        if result.cls in classes_to_count:
            label = model.names[int(result.cls)]
            if label in object_counts:
                object_counts[label] += 1
            else:
                object_counts[label] = 1

    # Display counts on the frame
    display_text = ' '.join([f"{k}:{v}" for k, v in object_counts.items()])
    cv2.putText(frame, display_text, (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,255,255), 2, cv2.LINE_AA)
    cv2.imshow('Frame', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

This modification tracks overall counts of each specified class across all frames and displays this count in the upper-left corner (which you can move to the upper-right or anywhere else). Adjust class indices in classes_to_count as needed per your model's classes!
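If you're unsure which indices map to which class names for your model, you can build classes_to_count from model.names, which is a dict mapping class index to class name. A small sketch, assuming the default COCO-pretrained yolov8n.pt names (the wanted set below is just an example):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
wanted = {"car", "bus"}  # class names you want to count (adjust as needed)
# model.names maps class index -> class name, e.g. {0: 'person', 2: 'car', ...}
classes_to_count = [i for i, name in model.names.items() if name in wanted]
print(classes_to_count)  # -> [2, 5] with the default COCO classes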

Let me know if this addresses your requirements! πŸš—πŸšŒ

noinsung commented 4 months ago

Hello.

When I run the modified code, the following error occurs. My current ultralytics version is 8.2.14.

========================================

(E:\Anaconda3_envs\yolov8) C:\Users\user\ultralytics-main\ultralytics>python test.py

0: 384x640 (no detections), 68.5ms
Speed: 2.0ms preprocess, 68.5ms inference, 653.6ms postprocess per image at shape (1, 3, 384, 640)
Traceback (most recent call last):
  File "test.py", line 17, in <module>
    if result.cls in classes_to_count:
  File "E:\Anaconda3_envs\yolov8\lib\site-packages\ultralytics\utils\__init__.py", line 160, in __getattr__
    raise AttributeError(f"'{name}' object has no attribute '{attr}'. See valid attributes below.\n{self.__doc__}")
AttributeError: 'Results' object has no attribute 'cls'. See valid attributes below.

A class for storing and manipulating inference results.

Attributes:
    orig_img (numpy.ndarray): Original image as a numpy array.
    orig_shape (tuple): Original image shape in (height, width) format.
    boxes (Boxes, optional): Object containing detection bounding boxes.
    masks (Masks, optional): Object containing detection masks.
    probs (Probs, optional): Object containing class probabilities for classification tasks.
    keypoints (Keypoints, optional): Object containing detected keypoints for each object.
    speed (dict): Dictionary of preprocess, inference, and postprocess speeds (ms/image).
    names (dict): Dictionary of class names.
    path (str): Path to the image file.

Methods:
    update(boxes=None, masks=None, probs=None, obb=None): Updates object attributes with new detection results.
    cpu(): Returns a copy of the Results object with all tensors on CPU memory.
    numpy(): Returns a copy of the Results object with all tensors as numpy arrays.
    cuda(): Returns a copy of the Results object with all tensors on GPU memory.
    to(*args, **kwargs): Returns a copy of the Results object with tensors on a specified device and dtype.
    new(): Returns a new Results object with the same image, path, and names.
    plot(...): Plots detection results on an input image, returning an annotated image.
    show(): Show annotated results to screen.
    save(filename): Save annotated results to file.
    verbose(): Returns a log string for each task, detailing detections and classifications.
    save_txt(txt_file, save_conf=False): Saves detection results to a text file.
    save_crop(save_dir, file_name=Path("im.jpg")): Saves cropped detection images.
    tojson(normalize=False): Converts detection results to JSON format.

glenn-jocher commented 4 months ago

Hello!

It looks like you are trying to access attributes directly from a Results object. In the context of your error, Results objects don't have a direct cls attribute. Instead, predict() returns a list of Results, and the per-detection cls values live on each result's boxes attribute.

Here’s how you can modify your code snippet to correctly access the cls attribute from the detections:

results = model.predict(frame)
for box in results[0].boxes:
    if int(box.cls) in classes_to_count:

Make sure your classes_to_count only includes integers representing class indices. If you have further issues or need more adjustments, feel free to ask!
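Putting the pieces together, here's a minimal end-to-end sketch of the approach above (the video path is a placeholder, and anchoring the text to the upper-right corner via cv2.getTextSize is just one way to do it):

from ultralytics import YOLO
import cv2

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path_to_video.mp4")  # placeholder path
assert cap.isOpened(), "Error reading video file"

object_counts = {}  # running totals per class name
classes_to_count = [2, 5]  # car and bus in the default COCO names

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model.predict(frame, verbose=False)
    for box in results[0].boxes:  # per-detection boxes live on the first Results object
        cls_id = int(box.cls)
        if cls_id in classes_to_count:
            label = model.names[cls_id]
            object_counts[label] = object_counts.get(label, 0) + 1

    # Draw the running totals anchored to the upper-right corner
    display_text = ' '.join(f"{k}:{v}" for k, v in object_counts.items())
    (text_w, _), _ = cv2.getTextSize(display_text, cv2.FONT_HERSHEY_SIMPLEX, 1, 2)
    cv2.putText(frame, display_text, (frame.shape[1] - text_w - 10, 50),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)
    cv2.imshow('Frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Keep in mind this increments the counts for every detection in every frame, so the same car will be counted repeatedly while it stays in view. If you want each object counted only once, you can switch back to model.track(persist=True) as in your original script and only increment when a new track ID appears.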

Happy coding! 😊

github-actions[bot] commented 3 months ago

πŸ‘‹ Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO πŸš€ and Vision AI ⭐