roboflow / supervision

We write your reusable computer vision tools. πŸ’œ
https://supervision.roboflow.com
MIT License

[DetectionDataset] - expand `from_yolo` to include support for OBB (Oriented Bounding Boxes) #1096

Closed pedbrgs closed 3 months ago

pedbrgs commented 5 months ago

Description

In supervision-0.18.0, we added initial support for OBB; it's time to extend it to include dataset loading.

Make the necessary changes in sv.DetectionDataset.from_yolo to enable loading OBB datasets from disk in YOLO format. Here you can read more about the YOLO OBB Format. In short, each line of the .txt file should have the following format.

class_index, x1, y1, x2, y2, x3, y3, x4, y4

The sv.OrientedBoxAnnotator expects information about oriented bounding boxes to be stored in the xyxyxyxy field of sv.Detections.data. Ensure that the information loaded from the dataset is stored there.
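A minimal sketch of what parsing one OBB label line could look like, assuming a space-separated line of normalized coordinates as in the Ultralytics docs (the helper name `parse_obb_line` is illustrative, not the supervision API):

```python
import numpy as np

def parse_obb_line(line, image_width, image_height):
    """Parse one YOLO OBB label line:
    class_index x1 y1 x2 y2 x3 y3 x4 y4 (all coordinates normalized to [0, 1])."""
    values = line.split()
    class_index = int(values[0])
    # Four (x, y) corner points as a (4, 2) array.
    corners = np.array(values[1:], dtype=np.float32).reshape(4, 2)
    # Denormalize: x by image width, y by image height.
    corners *= np.array([image_width, image_height], dtype=np.float32)
    return class_index, corners

cls, xyxyxyxy = parse_obb_line("0 0.1 0.1 0.9 0.1 0.9 0.9 0.1 0.9", 640, 480)
# cls == 0; xyxyxyxy is a (4, 2) array of pixel-space corners
```

The resulting `(4, 2)` corner array is the shape that would be stored per detection in the `xyxyxyxy` field described above.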

API

Here's an example of how to use the new API. Roboflow allows for the export of segmentation datasets as OBB. Let's ensure that our support for OBB definitely works with datasets exported from Roboflow.

import random
import roboflow
from roboflow import Roboflow
import supervision as sv

roboflow.login()
rf = Roboflow()

project = rf.workspace("roboflow-jvuqo").project("fashion-assistant")
version = project.version(3)
dataset = version.download("yolov8-obb")

train_ds = sv.DetectionDataset.from_yolo(
    images_directory_path=f"{dataset.location}/train/images",
    annotations_directory_path=f"{dataset.location}/train/labels",
    data_yaml_path=f"{dataset.location}/data.yaml"
)

image_name = random.choice(list(train_ds.images))
image = train_ds.images[image_name]
detections = train_ds.annotations[image_name]

oriented_box_annotator = sv.OrientedBoxAnnotator()
annotated_frame = oriented_box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)

Additional

SkalskiP commented 5 months ago

Hi, @pedbrgs πŸ‘‹πŸ» Thanks a lot for your interest in Supervision.

That's because, at the moment, DetectionDataset.from_yolo does not support OBB (Oriented Bounding Boxes).

It would be a good idea to convert your question into a feature request and add support for OBB. Let's do it!

pedbrgs commented 5 months ago

@SkalskiP Thanks for considering this! It will be great to have this feature.

SkalskiP commented 5 months ago

Hi, @pedbrgs πŸ‘‹πŸ» Fingers crossed, someone from the community will pick it up.

nabeelnazeer commented 5 months ago

Hey @SkalskiP, I wanted to drop a quick note to let you know that I'm exploring extending support for OBB datasets. While reviewing the `load_yolo_annotations` function, I noticed that a boolean parameter would be needed to handle Oriented Bounding Boxes efficiently. I'll keep you posted on my progress. Feel free to share any thoughts or suggestions you might have.

SkalskiP commented 5 months ago

Hi, @nabeelnazeer πŸ‘‹πŸ» Should I assign this ticket to you?

nabeelnazeer commented 5 months ago

Sure, go ahead @SkalskiP, I will see what I can do.

Bhavay-2001 commented 4 months ago

Hi @nabeelnazeer, are you currently working on this issue? If not, can I start working?

nabeelnazeer commented 4 months ago

Sure you may... Just ping me if you need any advice or have doubts on this one. I'm in the middle of a new project now and got caught up with it. Go ahead, @Bhavay-2001.

SkalskiP commented 4 months ago

@Bhavay-2001 do you want to take this task?

Bhavay-2001 commented 4 months ago

Yes, I will start working on it and tag you along the way.

SkalskiP commented 4 months ago

@Bhavay-2001 awesome! I'll assign this task to you ;)

Bhavay-2001 commented 4 months ago

Hi @SkalskiP @nabeelnazeer, can you please provide sample code that I can run to check how the code works? Thanks

Bhavay-2001 commented 4 months ago

Hi @nabeelnazeer @SkalskiP, I checked the code, and from an overview it seems the main changes need to be made in this function. The from_yolo function just calls another function, and everything boils down to that. A quick code sample to run might produce some insights.

I'd like to discuss this with you.

SkalskiP commented 4 months ago

@Bhavay-2001 yup, that's the function you need to update. What code would you need?

Bhavay-2001 commented 4 months ago

Hi @SkalskiP, a code sample to run and check the annotations. Basically, I want to see what the annotations look like so that I can find where I need to make changes in the code.

Or do you have any idea how I can create a small sample on which I can run and test this OBB?

LinasKo commented 4 months ago

Hi @Bhavay-2001 :wave:

I haven't tried it myself, but this may work.

import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")
image = cv2.imread(<SOURCE_IMAGE_PATH>)
results = model(image)[0]
detections = sv.Detections.from_ultralytics(results)

bounding_box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = bounding_box_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)

cv2.imshow("annotated", annotated_image)
cv2.waitKey(0)

Bhavay-2001 commented 4 months ago

Hi @LinasKo, I think the feature is to add OBB-format annotations. So, basically, I want to check what kind of annotations from_yolo returns so that I can make changes to it.

Could you please tell me how I can prepare a small dataset to check this? Like, how can I add images, annotations, and a data.yaml?

LinasKo commented 4 months ago

I see. I believe you can find the YOLO format here: https://docs.ultralytics.com/datasets/detect/#ultralytics-yolo-format. As for the YOLO OBB annotations, they're defined here: https://docs.ultralytics.com/datasets/obb/#yolo-obb-format as class_index, x1, y1, x2, y2, x3, y3, x4, y4.

Does that make more sense? You should be able to verify by running Ultralytics and doing a little bit of training.

@SkalskiP, do you know if we can use https://docs.ultralytics.com/datasets/obb/dota8/#introduction?

Bhavay-2001 commented 4 months ago

Hi @LinasKo @SkalskiP, just one more question. The from_yolo function runs load_yolo_annotations in the backend, which returns annotations. Can you please tell me what these annotations represent? Basically, I want to know the shape or the values of these annotations.

Once I have that, there is a method in the ultralytics library to convert data into the YOLO OBB format, which I can discuss further. Thanks

LinasKo commented 4 months ago

Try running load_yolo_annotations and see what happens; see if you can follow your intuition for what it does. I sense you're on the right track!

If you make a PR, we can adjust it later if your assumptions prove to be slightly incorrect. :slightly_smiling_face:

Tuple[List[str], Dict[str, np.ndarray], Dict[str, Detections]] is the return type, so it should return N class names, a dict of M images (each an h x w x c array), and a dict of M Detections.

Bhavay-2001 commented 4 months ago

Hi @LinasKo, yes I think I would get an idea by running the load_yolo_annotations function. But for that, I think I need a dataset in the format which is compatible with the function.

Like in this example

train_ds = sv.DetectionDataset.from_yolo(
    images_directory_path=f"{dataset.location}/train/images",
    annotations_directory_path=f"{dataset.location}/train/labels",
    data_yaml_path=f"{dataset.location}/data.yaml"
)

Can you suggest any dataset which I can load using this function? Thanks

LinasKo commented 4 months ago

For that one, you can make a small example yourself with the format description provided.
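Following that suggestion, here is one way a tiny hand-made dataset could be laid out, assuming the directory structure used by the from_yolo call earlier in this thread (the dataset name, class name, and label values are made up for illustration; drop any JPEG/PNG into train/images yourself):

```python
import os

# Hypothetical minimal YOLO-OBB dataset skeleton.
root = "tiny_obb_dataset"
os.makedirs(f"{root}/train/images", exist_ok=True)
os.makedirs(f"{root}/train/labels", exist_ok=True)

# One OBB annotation: class index 0 followed by four normalized (x, y) corners.
with open(f"{root}/train/labels/sample.txt", "w") as f:
    f.write("0 0.1 0.1 0.9 0.1 0.9 0.9 0.1 0.9\n")

# A minimal data.yaml mapping class indices to names.
with open(f"{root}/data.yaml", "w") as f:
    f.write("train: train/images\nnc: 1\nnames: ['object']\n")
```

With a matching `sample.jpg` placed in `train/images`, the three paths expected by from_yolo (images directory, labels directory, data.yaml) would all exist.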

@SkalskiP, do you know if we can use https://docs.ultralytics.com/datasets/obb/dota8/#introduction?

Bhavay-2001 commented 4 months ago

Alright. I will check, create one, do some research, and get back to you.

SkalskiP commented 4 months ago

I took a look at Roboflow Universe, but it looks like there is no easy way to search it for OBB datasets.

Bhavay-2001 commented 4 months ago

Hi @SkalskiP, will I be able to make a small dataset using the Roboflow website in the below format?

train_ds = sv.DetectionDataset.from_yolo(
    images_directory_path=f"{dataset.location}/train/images",
    annotations_directory_path=f"{dataset.location}/train/labels",
    data_yaml_path=f"{dataset.location}/data.yaml"
)

Like, I can have images, labels, and a data.yaml file? I have never prepared one.

Bhavay-2001 commented 4 months ago

Hi @LinasKo @SkalskiP, I have tried running sample code on the fashion-assistant dataset and have a few things I want to ask:

  1. When I download the dataset to my local machine, I get a train folder that has both images and labels. The labels are in the format class_id, v1, v2, ..., v8. What do these labels represent? What are the 8 values?

Because out of these 8 values, the code takes the 2 min and 2 max values, which are then multiplied by resolution_wh to calculate xyxy.

Bhavay-2001 commented 4 months ago

If these 8 values are the corresponding x and y coordinates that we want for the YOLO OBB format, then we can skip the code where we calculate the min and max values. Thanks

SkalskiP commented 4 months ago

Here is the documentation for YOLO-OBB format: https://docs.ultralytics.com/datasets/obb/. Looks like each line of text file is organized this way class_index, x1, y1, x2, y2, x3, y3, x4, y4. Each x and y value is normalized so it looks for example like this: 0 0.780811 0.743961 0.782371 0.74686 0.777691 0.752174 0.776131 0.749758. To load it you'd need to multiply all x values by image width and each y value by image height.
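The denormalization described above can be sketched as follows, using the example line from the comment (the 640x480 image size is an assumption for illustration; the min/max reduction at the end mirrors the existing axis-aligned path discussed below, which discards orientation):

```python
import numpy as np

# Example YOLO-OBB label line: class index, then four normalized (x, y) corners.
line = "0 0.780811 0.743961 0.782371 0.74686 0.777691 0.752174 0.776131 0.749758"
values = line.split()
corners = np.array(values[1:], dtype=np.float32).reshape(4, 2)

# Denormalize: x by image width, y by image height (640x480 assumed here).
w, h = 640, 480
corners_px = corners * np.array([w, h], dtype=np.float32)

# The current axis-aligned path would collapse the corners into an xyxy box:
xyxy = np.concatenate([corners_px.min(axis=0), corners_px.max(axis=0)])
# For OBB support, the full (4, 2) corner array should be kept instead.
```

Keeping `corners_px` rather than `xyxy` is exactly the difference between OBB loading and the existing behavior.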

Bhavay-2001 commented 4 months ago

Yes, because the current code calculates min and max values from all the labels. So I think, in order to load in OBB format, we shouldn't calculate that; instead we can just denormalize the values and keep all four corners.

I will open a PR and tag you to check if I did it right. Thanks

SkalskiP commented 4 months ago

As far as I know, the current version of the code does not support OBB. When you say "current code calculates min and max values from all the labels," could you specify which line of code you are referring to?

Bhavay-2001 commented 4 months ago

What I meant was that this line reshapes the annotations into a 4x2 matrix when we have 8 numbers, and this line calculates the min and max values from that 4x2 matrix.

What I am suggesting is that maybe we can add a parameter which, when set to True, simply multiplies the annotations by the width and height and returns them.
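That suggestion could look roughly like this. Note that the function name `yolo_values_to_box` and the `is_obb` flag are hypothetical names for this sketch, not the actual supervision API:

```python
import numpy as np

def yolo_values_to_box(values, resolution_wh, is_obb=False):
    """Hypothetical sketch of the proposed parameter.

    values: flat array of normalized coordinates (8 numbers for OBB labels).
    resolution_wh: (width, height) of the image.
    is_obb: if True, keep all four denormalized corners instead of
    collapsing them to an axis-aligned min/max box.
    """
    w, h = resolution_wh
    points = np.asarray(values, dtype=np.float32).reshape(-1, 2)
    points = points * np.array([w, h], dtype=np.float32)
    if is_obb:
        # OBB path: return the (4, 2) corner array unchanged.
        return points
    # Existing axis-aligned path: reduce to [x_min, y_min, x_max, y_max].
    return np.concatenate([points.min(axis=0), points.max(axis=0)])

vals = [0.1, 0.1, 0.9, 0.1, 0.9, 0.9, 0.1, 0.9]
obb = yolo_values_to_box(vals, (640, 480), is_obb=True)   # shape (4, 2)
box = yolo_values_to_box(vals, (640, 480))                # shape (4,)
```

Both paths share the same denormalization; the flag only controls whether the corner geometry is preserved.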