Closed: nadaakm closed this issue 5 months ago
Hello @nadaakm, thank you for your interest in Ultralytics YOLOv8! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Pip install the ultralytics package, including all requirements, in a Python>=3.8 environment with PyTorch>=1.8:
pip install ultralytics
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@nadaakm hi there! To obtain the confidence score for each detection in your test images, you can simply access the `conf` attribute from the `results` objects generated by the predict command. Here's a modified snippet of your code to show how you might print out the confidence scores for each detected object in every test image:
from autodistill_yolov8 import YOLOv8

# Load your trained model
model = YOLOv8.load("path/to/your/trained/model.pt")

for cropped_image_path in list_cropped_image_paths:
    # Run prediction
    results = model.predict(cropped_image_path)
    # Loop through results (assuming results is a list of detection objects)
    for detection in results.xyxy[0]:  # results.xyxy is a list containing a tensor for each image, assuming one image for now
        print(f"Confidence: {detection[4].item():.2f}")
Make sure to replace "path/to/your/trained/model.pt" with the actual path to your trained model. The key part here is `detection[4]`, which accesses the confidence score of each detected object (assuming results are in xyxy format, where the 5th element is the confidence). Let me know if you need any more assistance. Good luck!
Hey @glenn-jocher,
I encountered an issue while working with YOLOv8 for my custom dataset. Initially, I used YOLOv8 for object detection on my custom dataset, and everything seemed to work fine. Here's the initial code snippet I used:
from autodistill_yolov8 import YOLOv8

# Load your trained model
model = YOLOv8("/content/drive/MyDrive/planogram-extraction/runs/detect/train2/weights/best.pt")

for cropped_image_path in cropped_image_paths:
    # Run prediction
    results = model.predict(cropped_image_path)
This code provided me with the following output:
image 1/1 /content/cropped_images/cropped_image1.jpg: 640x192 1 coca, 137.4ms
Speed: 1.4ms preprocess, 137.4ms inference, 1.3ms postprocess per image at shape (1, 3, 640, 192)
However, when I attempted to implement some additional functionality based on the results, I encountered an AttributeError. Here's the code snippet I tried:
for detection in results[0].xyxy:  # results.xyxy is a list containing a tensor for each image, assuming one image for now
    print(f"Confidence: {detection[4].item():.2f}")
And the error message I received was:
AttributeError: 'Results' object has no attribute 'xyxy'. See valid attributes below.
A class for storing and manipulating inference results.
Attributes:
orig_img (numpy.ndarray): Original image as a numpy array.
orig_shape (tuple): Original image shape in (height, width) format.
boxes (Boxes, optional): Object containing detection bounding boxes.
masks (Masks, optional): Object containing detection masks.
probs (Probs, optional): Object containing class probabilities for classification tasks.
keypoints (Keypoints, optional): Object containing detected keypoints for each object.
speed (dict): Dictionary of preprocess, inference, and postprocess speeds (ms/image).
names (dict): Dictionary of class names.
path (str): Path to the image file.
Methods:
update(boxes=None, masks=None, probs=None, obb=None): Updates object attributes with new detection results.
cpu(): Returns a copy of the Results object with all tensors on CPU memory.
numpy(): Returns a copy of the Results object with all tensors as numpy arrays.
cuda(): Returns a copy of the Results object with all tensors on GPU memory.
to(*args, **kwargs): Returns a copy of the Results object with tensors on a specified device and dtype.
new(): Returns a new Results object with the same image, path, and names.
plot(...): Plots detection results on an input image, returning an annotated image.
show(): Show annotated results to screen.
save(filename): Save annotated results to file.
verbose(): Returns a log string for each task, detailing detections and classifications.
save_txt(txt_file, save_conf=False): Saves detection results to a text file.
save_crop(save_dir, file_name=Path("im.jpg")): Saves cropped detection images.
tojson(normalize=False): Converts detection results to JSON format.
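For what it's worth, everything listed in that docstring can be called directly on a Results object. A minimal sketch, assuming a standard ultralytics install, with placeholder model and image paths (not from this thread):
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # placeholder model weights
results = model.predict("image.jpg")  # placeholder image; returns a list of Results

r = results[0]                        # one Results object per input image
print(r.orig_shape)                   # original image shape as (height, width)
print(r.names)                        # class-id -> class-name mapping
print(r.speed)                        # preprocess/inference/postprocess times in ms
print(r.tojson())                     # detections serialized to JSON
r.save_txt("labels.txt", save_conf=True)  # write detections with confidences to a text file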
It seems that the `Results` object returned by YOLOv8 does not have the `xyxy` attribute as I expected. Can you help me understand how I can access the bounding box coordinates and the confidence for each cropped image using the `Results` object?
Thanks in advance for your assistance!
Hey @nadaakm,
It looks like you're almost there! To access the bounding box coordinates and confidence scores from the `Results` object in YOLOv8, you can use the `.boxes` attribute, which contains the detected bounding boxes. Each `Box` object within `.boxes` has attributes like `.xyxy` for coordinates, `.conf` for confidence scores, and `.cls` for class IDs. Here's how you can modify your code snippet:
# Assuming you have results from model.predict()
if results.boxes is not None:  # Check if any boxes are detected
    for box in results.boxes:
        print(f"Coordinates: {box.xyxy}, Confidence: {box.conf.item():.2f}")
This should give you the bounding box coordinates and confidence scores for each detection.
If you have further questions or encounter more issues, feel free to ask. Happy coding!
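One caveat on the snippet above: model.predict() returns a list of Results objects, one per input image, so in practice you first index into or iterate over that list before touching .boxes. A minimal list-aware sketch using the same attributes:
results = model.predict(cropped_image_path)
for result in results:  # one Results object per input image
    if result.boxes is not None:  # check that boxes exist before iterating
        for box in result.boxes:
            print(f"Coordinates: {box.xyxy}, Confidence: {box.conf.item():.2f}")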
Hey @glenn-jocher, thanks for your response. I have another problem; I want to extract the path from the output. Let me show you my code:
from autodistill_yolov8 import YOLOv8
from ultralytics import YOLO

# Load your trained model
model = YOLO("/content/drive/MyDrive/planogram-extraction/runs/detect/train2/weights/best.pt")

for cropped_image_path in cropped_image_paths:
    # Run prediction
    results = model.predict(cropped_image_path, save_txt=True, save_conf=True, conf=0.79)
    print(results)
and the output:
image 1/1 /content/cropped_images/cropped_image1.jpg: 640x192 1 coca, 72.0ms
Speed: 3.8ms preprocess, 72.0ms inference, 1.1ms postprocess per image at shape (1, 3, 640, 192)
[ultralytics.engine.results.Results object with attributes:
boxes: ultralytics.engine.results.Boxes object
keypoints: None
masks: None
names: {0: 'coca'}
obb: None
orig_img: array([[[186, 182, 207],
[183, 178, 205],
[182, 176, 207],
...,
[ 84, 81, 120],
[ 82, 78, 114],
[ 80, 76, 112]],
[[185, 181, 206],
[182, 177, 204],
[181, 175, 206],
...,
[ 84, 82, 118],
[ 82, 78, 114],
[ 80, 76, 112]],
[[184, 180, 205],
[181, 176, 203],
[180, 175, 204],
...,
[ 81, 81, 117],
[ 80, 78, 114],
[ 78, 77, 111]],
...,
[[163, 155, 172],
[162, 154, 171],
[150, 142, 159],
...,
[170, 162, 173],
[159, 151, 162],
[156, 148, 159]],
[[175, 168, 183],
[148, 141, 154],
[123, 116, 131],
...,
[230, 224, 235],
[211, 204, 217],
[198, 191, 204]],
[[176, 169, 182],
[173, 167, 178],
[177, 170, 183],
...,
[237, 233, 245],
[229, 224, 239],
[220, 215, 230]]], dtype=uint8)
orig_shape: (155, 46)
path: '/content/cropped_images/cropped_image1.jpg'
probs: None
save_dir: 'runs/detect/predict3'
speed: {'preprocess': 3.8127899169921875, 'inference': 72.03269004821777, 'postprocess': 1.0662078857421875}]
How can I extract the paths of the images whose conf is >= 0.8 from the output? Is it possible?
Thanks!
@nadaakm hey there! Sure, I can help you with that. To filter and extract the paths of images where the confidence level is >= 0.8, you'll need to process the `boxes` property of the `Results` object. Here's a quick example of how you can do it:
extracted_paths = []

for cropped_image_path in cropped_image_paths:
    # Run prediction
    results = model.predict(cropped_image_path, conf=0.79)
    # Check if any boxes meet the confidence threshold
    if results.boxes and any(box.conf >= 0.8 for box in results.boxes):
        extracted_paths.append(results.path)

# Now extracted_paths will contain the paths of images meeting the criteria
print("Paths of images with conf >= 0.8:", extracted_paths)
This will collect the paths of those images where at least one detected object meets your confidence threshold. Hope this is what you were looking for! Let me know if you have any more questions.
Hey @glenn-jocher, thanks for your help. I've used the code that you provided with a little modification and it works:
extracted_paths = []

for cropped_image_path in cropped_image_paths:
    # Run prediction
    results = model.predict(cropped_image_path, conf=0.6)
    for result in results:
        if result.boxes and any(box.conf >= 0.6 for box in result.boxes):
            extracted_paths.append(result.path)

# Now extracted_paths will contain the paths of images meeting the criteria
print("Paths of images with conf >= 0.6:", extracted_paths)
I want to have the confidence for each cropped image. Should I use the probs attribute? If so, can you help me get the confidence of each cropped image into a list?
@nadaakm hey there! Great to hear that the code is working for you! To get the confidence for each cropped image, you actually don't need to use the `probs` attribute. `probs` is primarily used for classification tasks. For object detection tasks like yours, the `boxes` attribute contains the confidence scores within each `Box` object.
Here's how you can modify your code to get a list of confidences for detections in each image:
confidences_list = []

for cropped_image_path in cropped_image_paths:
    # Run prediction
    results = model.predict(cropped_image_path, conf=0.6)
    # predict() returns a list of Results; take the first (and only) entry for a single image
    confidences = [box.conf.item() for box in results[0].boxes if box.conf >= 0.6]
    confidences_list.append(confidences)

print("Confidences for each cropped image:", confidences_list)
This will give you a list of lists, where each sublist contains the confidences of detections in a particular cropped image that meet your threshold.
Let me know if you need any more help!
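Putting the two answers together, here is a minimal sketch (variable names are illustrative, not from the thread) that records each image's path alongside its detection confidences in one pass, iterating the returned list of Results as in the working snippet above:
path_confidences = {}

for cropped_image_path in cropped_image_paths:
    results = model.predict(cropped_image_path, conf=0.6)
    for result in results:  # one Results object per input image
        confs = [box.conf.item() for box in result.boxes]
        if confs:  # keep only images with at least one detection
            path_confidences[result.path] = confs

print("Path -> confidences:", path_confidences)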
Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO and Vision AI
Search before asking
Description
I want to print the confidence for every test image. How do I do it?
here is my code:
from autodistill_yolov8 import YOLOv8

target_model = YOLOv8("yolov8n.pt")
target_model.train(DATA_YAML_PATH, epochs=50)

for cropped_image_path in list_cropped_image_paths:
Use case
No response
Additional
No response
Are you willing to submit a PR?