Closed pedbrgs closed 3 months ago
Hi, @pedbrgs 👋🏻 Thanks a lot for your interest in Supervision.
That's because, at the moment, `DetectionDataset.from_yolo` does not support OBB (Oriented Bounding Boxes).
It would be a good idea to convert your question into a feature request and add support for OBB. Let's do it!
@SkalskiP Thanks for considering this! It will be great to have this feature.
Hi, @pedbrgs 👋🏻 Fingers crossed, someone from the community will pick it up.
Hey @SkalskiP, I wanted to drop a quick note to let you know that I'm exploring extending support for OBB datasets. While reviewing the `load_yolo_annotations` function, I noticed that a boolean parameter would need to be added to handle Oriented Bounding Boxes efficiently. I'll keep you posted on the progress. Feel free to share any thoughts or suggestions you might have.
Hi, @nabeelnazeer 👋🏻 Should I assign this ticket to you?
Sure, go ahead @SkalskiP, I will see what I can do.
Hi @nabeelnazeer, are you currently working on this issue? If not, can I start working?
Sure, you may. Just ping me if you need any advice or have doubts on this one. I am in the middle of a new project now and got caught up with it. Go ahead, @Bhavay-2001.
@Bhavay-2001 do you want to take this task?
Yes. I will start my work and tag you along
@Bhavay-2001 awesome! I'll assign this task to you ;)
Hi @SkalskiP @nabeelnazeer, could you please provide a sample code which I can run to check how the code works? Thanks
Hi @nabeelnazeer @SkalskiP, I checked the code and, from an overview, it seems that the main changes need to be made in this function. The `from_yolo` function just calls another function, and everything boils down to this. A quick code sample to run may produce insights.
Would like to discuss this with you guys.
@Bhavay-2001 yup, that's the function you need to update. What code would you need?
Hi @SkalskiP, a code sample to run the output and check the annotations. Basically, I want to see how the annotations look so that I can find where I need to make changes in the code.
Or any idea how I can create a small sample on which I can run and test this OBB?
Hi @Bhavay-2001 :wave:
I haven't tried it myself, but this may work.
```python
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")
image = cv2.imread(<SOURCE_IMAGE_PATH>)
results = model(image)[0]
detections = sv.Detections.from_ultralytics(results)

bounding_box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = bounding_box_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)

cv2.imshow("annotated", annotated_image)
cv2.waitKey(0)
```
Hi @LinasKo, I think the feature is to add OBB format annotations. So, basically, I want to check what kind of annotations `from_yolo` returns so that I can make changes to it.
Could you please tell me how I can prepare a small dataset to check this? Like, how can I add images, annotations, and data.yaml?
I see. I believe you can find the YOLO format here: https://docs.ultralytics.com/datasets/detect/#ultralytics-yolo-format
As for the YOLO OBB annotations, they're defined here https://docs.ultralytics.com/datasets/obb/#yolo-obb-format as `class_index, x1, y1, x2, y2, x3, y3, x4, y4`.
Does that make more sense? You should be able to verify by running Ultralytics and doing a little bit of training.
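To make the format concrete, here is a minimal sketch of parsing one YOLO-OBB label line into a class index and four corner points. The function name and the sample line are made up for illustration; the field layout follows the Ultralytics docs linked above.

```python
# Hedged sketch: parse one YOLO-OBB label line into a class index and
# four (x, y) corners. Values in the file are normalized to [0, 1].
def parse_obb_line(line: str):
    parts = line.split()
    class_index = int(parts[0])
    # The remaining 8 values are x1 y1 x2 y2 x3 y3 x4 y4 (normalized).
    coords = [float(v) for v in parts[1:]]
    corners = [(coords[i], coords[i + 1]) for i in range(0, 8, 2)]
    return class_index, corners

cls, corners = parse_obb_line("0 0.1 0.2 0.3 0.2 0.3 0.4 0.1 0.4")
print(cls)      # 0
print(corners)  # [(0.1, 0.2), (0.3, 0.2), (0.3, 0.4), (0.1, 0.4)]
```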
@SkalskiP, do you know if we can use https://docs.ultralytics.com/datasets/obb/dota8/#introduction?
Hi @LinasKo @SkalskiP, just one more question. The `from_yolo` function runs `load_yolo_annotations` in the backend, which returns `annotations`. Can you please tell me what this `annotations` represents? Basically, I want to know the shape or the value of this `annotations`.
Once I have that, there is a method in the Ultralytics library to convert data into the YOLO OBB format, which I can discuss further.
Thanks
Try running `load_yolo_annotations`, see what happens, and see if you can follow your intuition for what it does. I sense you're on the right track!
If you make a PR, we can adjust it later if your assumptions prove to be slightly incorrect. :slightly_smiling_face:
`Tuple[List[str], Dict[str, np.ndarray], Dict[str, Detections]]` is the return type, so it should return `N` names, a dict with `M` `(w x h x c)` images, and a dict with `M` detections.
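A rough illustration of that return shape, using dummy data instead of a real call to `load_yolo_annotations` (the paths, class names, and the stand-in arrays for `sv.Detections` are all made up here):

```python
import numpy as np

# Hedged illustration: `classes` is a list of N class names, `images`
# maps image paths to (h, w, c) arrays, and `annotations` maps the same
# paths to detections (stubbed as plain xyxy arrays for illustration).
classes = ["plane", "ship"]
images = {"train/images/img_0001.jpg": np.zeros((480, 640, 3), dtype=np.uint8)}
annotations = {"train/images/img_0001.jpg": np.array([[64.0, 96.0, 192.0, 192.0]])}

for path in images:
    print(path, images[path].shape, annotations[path].shape)
```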
Hi @LinasKo, yes, I think I would get an idea by running the `load_yolo_annotations` function. But for that, I think I need a dataset in a format compatible with the function.
Like in this example:
```python
train_ds = sv.DetectionDataset.from_yolo(
    images_directory_path=f"{dataset.location}/train/images",
    annotations_directory_path=f"{dataset.location}/train/labels",
    data_yaml_path=f"{dataset.location}/data.yaml"
)
```
Can you suggest any dataset which I can load using this function? Thanks
For that one, you can make a small example yourself with the format description provided.
@SkalskiP, do you know if we can use https://docs.ultralytics.com/datasets/obb/dota8/#introduction?
Alright. I will check and create one and do some research and get back to you.
I took a look at Roboflow Universe, but it looks like there is no easy way to search it for OBB datasets.
Hi @SkalskiP, will I be able to make a small dataset using the Roboflow website in the below format?
```python
train_ds = sv.DetectionDataset.from_yolo(
    images_directory_path=f"{dataset.location}/train/images",
    annotations_directory_path=f"{dataset.location}/train/labels",
    data_yaml_path=f"{dataset.location}/data.yaml"
)
```
Like, can I have images, labels, and a data.yaml file? I have never prepared one.
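For a quick local test, one option is to lay out a tiny YOLO-OBB dataset by hand rather than exporting one. The sketch below creates the directory structure, one label file, and a minimal data.yaml; the file names, the single label line, and the class name are all invented for illustration, and a matching image would still need to be placed in `train/images`:

```python
from pathlib import Path

# Hedged sketch: hand-rolled minimal YOLO-OBB dataset layout.
root = Path("tiny_obb_dataset")
(root / "train" / "images").mkdir(parents=True, exist_ok=True)
(root / "train" / "labels").mkdir(parents=True, exist_ok=True)

# One label file: class 0 with four normalized (x, y) corners.
(root / "train" / "labels" / "img_0001.txt").write_text(
    "0 0.1 0.2 0.3 0.2 0.3 0.4 0.1 0.4\n"
)

# Minimal data.yaml listing the train split and class names.
(root / "data.yaml").write_text(
    "train: train/images\n"
    "nc: 1\n"
    "names: ['plane']\n"
)

# A matching image (e.g. img_0001.jpg) would go in train/images,
# written with cv2.imwrite or similar.
print(sorted(p.name for p in (root / "train" / "labels").iterdir()))
# ['img_0001.txt']
```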
Hi @LinasKo @SkalskiP, I have tried running a sample code on this fashion assistant dataset, and I have a few things I want to ask:
The dataset has a `train` folder which contains both `images` and `labels`.
The labels are in the format `class_id, v1, v2, ..... v8`. What do these labels represent? What are the 8 values? Because out of these 8 values, we take out 2 `min` and 2 `max` values, and those are further multiplied with `resolution_wh` to calculate `xyxy`.
If these 8 values are the corresponding `x` and `y` coordinates that we want for the YOLO-OBB format, then we can skip the code where we calculate the `min` and `max` values.
Thanks
Here is the documentation for the YOLO-OBB format: https://docs.ultralytics.com/datasets/obb/. Looks like each line of the text file is organized this way: `class_index, x1, y1, x2, y2, x3, y3, x4, y4`. Each `x` and `y` value is normalized, so it looks for example like this: `0 0.780811 0.743961 0.782371 0.74686 0.777691 0.752174 0.776131 0.749758`. To load it, you'd need to multiply all `x` values by the image width and each `y` value by the image height.
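That scaling step can be sketched as follows; the function name is invented for illustration, and the input values are a made-up example rather than real annotations:

```python
import numpy as np

# Hedged sketch: multiply normalized x values by image width and y
# values by image height to get pixel-space corner coordinates.
def denormalize_obb(values, image_w, image_h):
    """values: 8 normalized floats (x1 y1 ... x4 y4) -> (4, 2) pixel corners."""
    corners = np.asarray(values, dtype=float).reshape(4, 2)
    corners[:, 0] *= image_w   # all x values scaled by width
    corners[:, 1] *= image_h   # all y values scaled by height
    return corners

corners = denormalize_obb([0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.1, 0.4], 640, 480)
print(corners)
# [[ 64.  96.]
#  [192.  96.]
#  [192. 192.]
#  [ 64. 192.]]
```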
Yes, because the current code calculates the `min` and `max` values from all the labels. So I think, in order to load in OBB format, we don't want to calculate that; instead, we can just scale the normalized values.
I will open a PR and tag you to check if I did it right. Thanks
As far as I know, the current version of the code does not support OBB. When you say "current code calculates min and max values from all the labels," could you specify which line of code you are referring to?
What I meant was that this line reshapes the annotations into a 4x2 matrix in case we have 8 numbers.
And this line calculates the `min` and `max` values from the 4x2 matrix.
What I am suggesting is that maybe we can add a parameter which, when set to True, simply multiplies the annotations by `width` and `height` and returns them.
Description
In supervision-0.18.0, we added initial support for OBB; it's time to extend it to include dataset loading.
Make the necessary changes in `sv.DetectionDataset.from_yolo` to enable loading OBB datasets from disk in YOLO format. Here you can read more about the YOLO OBB Format. In short, each line of the `.txt` file should have the following format.
The `sv.OrientedBoxAnnotator` expects information about oriented bounding boxes to be stored in the `xyxyxyxy` field of `sv.Detections.data`. Ensure that the information loaded from the dataset is stored there.
API
Here's an example of how to use the new API. Roboflow allows for the export of segmentation datasets as OBB. Let's ensure that our support for OBB definitely works with datasets exported from Roboflow.
Additional