ultralytics / ultralytics

Ultralytics YOLO11 šŸš€
https://docs.ultralytics.com
GNU Affero General Public License v3.0

AttributeError on accessing attribute of 'NoneType' in YOLO tracking with Ultralytics library #3399

Closed lwensveen closed 11 months ago

lwensveen commented 1 year ago

Search before asking

YOLOv8 Component

Detection

Bug

I'm using the Ultralytics YOLO library for object detection and tracking. However, I'm encountering an issue where my program crashes with an AttributeError when there are no detected objects in a frame.

Here's the error message I receive:

ids = result.boxes.id.cpu().numpy().astype(int)
AttributeError: 'NoneType' object has no attribute 'cpu'

This error occurs when there are no detections (result.boxes is None). I thought I had correctly handled this scenario with the following code:

for result in results:
    if result.boxes is None:  # Expecting to skip this iteration if no objects are detected.
        continue

However, it appears that the AttributeError occurs regardless of this check. Could anyone provide insight into why this continue statement isn't preventing the error when result.boxes is None?

Environment

Minimal Reproducible Example


from ultralytics import YOLO
import cv2

model = YOLO('yolov8n.pt')

def detect_objects(current_frame, current_tracked_ids, current_tracked_classes):
    results = model.track(source=current_frame,
                          classes=[0, 2, 3],
                          device='cpu',
                          conf=0.6,
                          iou=0.5,
                          show=True,
                          stream=True,
                          persist=True)

    new_ids = []
    new_cls = []
    for result in results:
        if result.boxes is None:
            continue

        boxes = result.boxes.data

        ids = result.boxes.id.cpu().numpy().astype(int)
        cls = result.boxes.cls.cpu().numpy().astype(int)

        for box, obj_id, cls in zip(boxes, ids, cls):
            if obj_id not in current_tracked_ids:
                new_ids.append(obj_id)
            if obj_id not in current_tracked_classes:
                new_cls.append(cls)

    current_tracked_ids.update(new_ids)
    current_tracked_classes.update(new_cls)

    return current_frame, current_tracked_ids, current_tracked_classes

cap = cv2.VideoCapture(0)

tracked_ids = set()
tracked_classes = set()
while True:
    ret, frame = cap.read()
    if not ret:
        break

    frame, tracked_ids, tracked_classes = detect_objects(frame, tracked_ids, tracked_classes)

    print(f'Total objects tracked: {len(tracked_ids)}')

    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Additional

@AyushExel @Laughing-q

Are you willing to submit a PR?

github-actions[bot] commented 1 year ago

šŸ‘‹ Hello @lwensveen, thank you for your interest in YOLOv8 šŸš€! We recommend a visit to the YOLOv8 Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a šŸ› Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ā“ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord šŸŽ§ community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.7 environment with PyTorch>=1.7.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 1 year ago

@lwensveen your error indicates that boxes.id is None, not boxes itself. Your continue logic only checks whether boxes is None.
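A library-free sketch of that two-level guard (the `SimpleNamespace` stand-ins below are hypothetical and only mimic the attribute shape of `Results`/`Boxes`; they are not the Ultralytics classes):

```python
from types import SimpleNamespace

def collect_ids(results):
    """Gather track IDs, skipping frames where boxes or IDs are missing."""
    ids = []
    for result in results:
        # Guard both levels: boxes may be present while boxes.id is still None.
        if result.boxes is None or result.boxes.id is None:
            continue
        ids.extend(result.boxes.id)
    return ids

# Hypothetical stand-ins: a frame with no boxes, a frame with boxes but no
# assigned track IDs, and a frame with IDs.
frames = [
    SimpleNamespace(boxes=None),
    SimpleNamespace(boxes=SimpleNamespace(id=None)),
    SimpleNamespace(boxes=SimpleNamespace(id=[1, 2])),
]
print(collect_ids(frames))  # [1, 2]
```

Only the third frame contributes IDs; checking `result.boxes is None` alone would let the second frame through and crash on `.cpu()`.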

lwensveen commented 1 year ago

@glenn-jocher

I've tried adding other checks before, but the error persists:

if result.boxes is not None and result.boxes.id is not None:
if result.boxes is not None and hasattr(result.boxes, 'id') and result.boxes.id is not None:
    continue

This is the first time I've ever written anything in python, so I'm sorry if I'm mistaken in my presumptions.

glenn-jocher commented 1 year ago

@lwensveen,

It seems that you are encountering an issue where AttributeError: 'NoneType' object has no attribute 'cpu' occurs when an image has no detections.

It seems that your continue logic checks whether result.boxes is None, but not whether the id attribute of result.boxes is None, which is what the traceback points at.

You might want to incorporate the following if statement to check the id attribute of result.boxes as well.

if result.boxes is None or result.boxes.id is None:

I hope that helps!

lwensveen commented 1 year ago

@glenn-jocher

But that's what I did in my previous post? It didn't solve the issue, it still produced the same error.

glenn-jocher commented 1 year ago

@lwensveen,

I apologize for the confusion. Upon further analysis, the error occurs when result.boxes.id is None while result.boxes itself is not: calling result.boxes.id.cpu().numpy().astype(int) then attempts .cpu() on None, which raises the AttributeError.

To prevent this, your continue logic needs to cover the id attribute of result.boxes as well. You can do this by updating your existing if statement as follows:

if result.boxes is None or result.boxes.id is None:
    continue

This updated check ensures that both result.boxes and result.boxes.id are non-None before the cpu() method is called.

I apologize for any confusion caused by my previous response. Please try implementing this updated logic, and let me know if you encounter any further issues.

Thank you for your patience.

github-actions[bot] commented 1 year ago

šŸ‘‹ Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO šŸš€ and Vision AI ā­

TUday1998 commented 1 year ago

It worked for me.

from collections import defaultdict

import cv2
import numpy as np

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('model_name.pt')

# Open the video file
video_path = ""
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)
        for result in results:
            # Skip frames with no boxes or no assigned track IDs
            if result.boxes is None or result.boxes.id is None:
                continue

            # Get the boxes and track IDs
            boxes = result.boxes.xywh.cpu()
            track_ids = result.boxes.id.cpu().numpy().astype(int)

            # Visualize the results on the frame
            annotated_frame = result.plot()

            # Plot the tracks
            for box, track_id in zip(boxes, track_ids):
                x, y, w, h = box
                track = track_history[track_id]
                track.append((float(x), float(y)))  # x, y center point
                if len(track) > 30:  # retain the last 30 points per track
                    track.pop(0)

                # Draw the tracking lines
                points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
                cv2.polylines(annotated_frame, [points], isClosed=False, color=(0, 255, 0), thickness=10)

            # Display the annotated frame
            cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()

glenn-jocher commented 1 year ago

@TUday1998 hello!

From the provided code block, we can see that you're using the YOLOv8 model to track objects in a video file. You're persisting the tracking between frames and storing the track history. Only those results where a box and track ID successfully exist are considered.

The xywh property yields a tensor of bounding boxes in center-x, center-y, width, height format, and .cpu() moves it to the CPU; the track IDs are likewise converted to an integer NumPy array. These are then used to plot the tracks and draw the tracking lines directly on the frame.

Your results are visualized with the '.plot()' method, and you let the tracking be displayed until the video finishes running or the user manually ends it.

As long as your specified model runs properly, the video file path is correct and your software environment is correctly configured, the code should work as expected. It provides a great example of tracking and visualizing object trajectories in a video using YOLOv8.
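The history-trimming step alone can be sketched without any of the video machinery (plain tuples stand in for tensor rows; max_len=30 matches the snippet above):

```python
from collections import defaultdict

# Track histories keyed by track ID, as in the snippet above.
track_history = defaultdict(list)

def update_track(track_id, center, max_len=30):
    """Append a center point and keep only the most recent max_len points."""
    track = track_history[track_id]
    track.append(center)
    if len(track) > max_len:
        track.pop(0)  # drop the oldest point
    return track

# Feed 40 synthetic center points into track 7; only the last 30 survive.
for i in range(40):
    update_track(7, (float(i), float(i)))

print(len(track_history[7]))  # 30
print(track_history[7][0])    # (10.0, 10.0)
```

Using a bounded list per ID keeps memory flat however long the video runs, at the cost of an O(n) pop(0) that is negligible at 30 points (collections.deque with maxlen would avoid even that).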

Let us know if any further clarification is needed. Thank you for sharing your work with us!
