Aamnastressed2 opened this issue 1 week ago
👋 Hello @Aamnastressed2, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more:

- If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.
- If this is a ❓ Question, please provide as much information as possible, including dataset, model, and environment details, so that we can provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!
Hi there,
Thank you for reaching out and providing detailed information about your issue. It seems you're encountering an attribute error when using the `best.pt` model file. This error typically occurs when `results[0].boxes` is `None`, which can happen for several reasons.
To help us diagnose the issue more effectively, could you please provide a minimum reproducible example? This will allow us to better understand the context and pinpoint the problem. You can find more information on how to create a minimum reproducible example here.
In the meantime, here are a few steps you can take to troubleshoot the issue:
1. **Verify Package Versions**: Ensure that you are using the latest versions of the Ultralytics packages. Bugs are sometimes fixed in newer releases, so updating might resolve your issue.
2. **Check Model Compatibility**: The `best.pt` file might have been trained with a different configuration or version. Ensure that the model file is compatible with the current version of the Ultralytics package you are using.
3. **Debug the Results**: Add a check to see if `results` is `None` before accessing its attributes. This can help you identify if the model is failing to produce results for some reason.
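For the first step, a quick way to confirm which Ultralytics build is actually installed is a small standard-library check (a minimal sketch; `pip install -U ultralytics` upgrades the package):

```python
from importlib.metadata import PackageNotFoundError, version

# Print the installed ultralytics version, or a hint if it is missing
try:
    print("ultralytics", version("ultralytics"))
except PackageNotFoundError:
    print("ultralytics is not installed; run: pip install -U ultralytics")
```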
Here's a modified snippet of your code with an added check:
```python
# Perform object detection and tracking using the YOLO model
results = model.track(im0, persist=True)

if results and results[0].boxes:
    boxes = results[0].boxes.xyxy.cpu()
    if results[0].boxes.id is not None:
        # Get the track IDs
        track_ids = results[0].boxes.id.int().cpu().tolist()

        # Loop through detected objects and their track IDs
        for box, track_id in zip(boxes, track_ids):
            # Annotate bounding boxes and track IDs on the frame
            # annotator.box_label(box, label=str(track_id), color=bbox_clr)
            # annotator.visioneye(box, center_point)

            # Calculate the height of the bounding box in pixels
            pixel_height = int(box[3] - box[1])
            if pixel_height <= 0:
                continue  # Skip degenerate boxes to avoid division by zero

            # Calculate the distance to the object
            distance = calculate_distance(actual_height_meters, focal_length_pixels, pixel_height)

            # Draw bounding box and distance label
            annotator.box_label(box, label=f"Distance: {distance:.2f} m", color=(255, 255, 50))
else:
    print("No detections were made.")
```
This check ensures that you only proceed if `results` is not `None` and contains valid `boxes`.
Please try these steps and let us know if the issue persists. Your collaboration helps improve the YOLO community and the Ultralytics team. 😊
Question
I trained the DOTAv8 model using the Ultralytics HUB Google Colab option. The model trained successfully on Colab, but when I used the resulting weight file in my code I got `'NoneType' object has no attribute 'xyxy'`. My code runs fine with the `yolov8s.pt` file, but the same code fails with `best.pt`. Why is that? I have also added the script file in the additional info.
Additional
```python
from ultralytics import YOLO
import cv2
import math
import serial
import time
from ultralytics.solutions import distance_calculation
from ultralytics.utils.plotting import Annotator, colors

# Load the YOLOv8 model
model = YOLO('best.pt')
video_path = 'height.mp4'
cap = cv2.VideoCapture(video_path)
# cap = cv2.VideoCapture(0)
assert cap.isOpened(), "Error opening video stream or file"

# Get video properties: width, height, and frames per second
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Create VideoWriter object to save the processed video
out = cv2.VideoWriter('visioneye-distance-calculation.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))

# Define the center point of the vision eye and pixels per meter
center_point = (0, h)
pixel_per_meter = 852

# Known height of the object in meters (e.g., a bottle)
actual_height_meters = 0.25  # Example: 25 cm bottle height

# Camera focal length in pixels (this value needs to be calibrated for your camera)
focal_length_pixels = 400  # Example value, this needs to be calibrated for your camera

# Function to calculate the distance from the camera to the object
def calculate_distance(actual_height, focal_length, pixel_height):
    return (actual_height * focal_length) / pixel_height

# Define colors for text, text background, and bounding box
txt_color, txt_background, bbox_clr = ((0, 0, 0), (255, 255, 255), (255, 0, 255))

# Initialize serial port
SERIAL_PORT = 'COM3'  # Change as needed
BAUD_RATE = 9600

def initialize_serial(port, baud_rate):
    ser = serial.Serial(port, baud_rate, timeout=1)
    time.sleep(2)  # Wait for the serial connection to initialize
    return ser

def send_serial_data(serial_connection, data):
    if serial_connection.is_open:
        print(f"Sending data: {data}")
        serial_connection.write(data.encode())

serial_connection = initialize_serial(SERIAL_PORT, BAUD_RATE)

# Main loop for processing each frame of the video
while True:
    # Read a frame from the video
    ...  # (the rest of the loop body was truncated in the original post)

# Release video capture and video writer objects
out.release()
cap.release()

# Close serial connection
serial_connection.close()

# Close all OpenCV windows
cv2.destroyAllWindows()
```
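The `calculate_distance` helper in the script above is the pinhole-camera relation distance = (actual_height × focal_length) / pixel_height. As a quick sanity check with the script's own example constants (a 0.25 m object and a 400 px focal length, both of which need real calibration):

```python
def calculate_distance(actual_height, focal_length, pixel_height):
    # Pinhole model: an object of height H at distance d projects to H * f / d pixels
    return (actual_height * focal_length) / pixel_height

# Using the script's example calibration values
print(calculate_distance(0.25, 400, 100))  # -> 1.0 (a 100 px tall box is ~1 m away)
print(calculate_distance(0.25, 400, 50))   # -> 2.0 (halving the pixel height doubles the distance)
```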