roboflow / supervision

We write your reusable computer vision tools. 💜
https://supervision.roboflow.com
MIT License

Create new cookbook for utilizing supervision methods to easily create YOLO datasets for training #1388

Open xaristeidou opened 1 month ago

xaristeidou commented 1 month ago

Search before asking

Description

I find myself repeatedly creating dataset structures, splitting them into train/valid/test, and organizing images and labels folders. The whole process could easily be automated.

Use case

There are existing methods to load a dataset (sv.DetectionDataset.from_yolo()), split it at a selected ratio (sv.DetectionDataset.split()), and export it to YOLO format (sv.DetectionDataset.as_yolo()).

However, to build a YOLO training dataset structure, one still has to write a custom split into train/valid/test (because split() can only divide a dataset into two parts) and manually create the train/valid/test folders needed to prepare the dataset. (To my knowledge, Ultralytics YOLO models by default require train and valid folders containing valid, non-empty annotations; the test folder can be empty.)
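For example, today I end up writing something like the following just to obtain three subsets (a rough sketch; the ratios and variable names are only illustrative):

import supervision as sv

# Load an existing YOLO-formatted dataset.
dataset = sv.DetectionDataset.from_yolo(
    images_directory_path="/path/to/directory/images",
    annotations_directory_path="/path/to/directory/labels",
    data_yaml_path="/path/to/directory/data.yaml",
)

# split() only produces two subsets, so it has to be called twice:
# first carve out the training set, then split the remainder into valid/test.
train_set, rest = dataset.split(split_ratio=0.7, random_state=42, shuffle=True)
valid_set, test_set = rest.split(split_ratio=0.5, random_state=42, shuffle=True)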

For that reason I propose adding a new method to sv.DetectionDataset that combines the arguments of from_yolo(), split(), and as_yolo() and runs the whole backend for creating the train/valid/test folders and images/labels subfolders, along with the data.yaml file.

I have already developed an implementation of such a method, which lets the user create a YOLO dataset structure with a single call:

import supervision as sv

dataset_directory = "/path/to/directory"

sv.DetectionDataset.create_yolo_dataset(
    images_directory_path=f"{dataset_directory}/images",
    annotations_directory_path=f"{dataset_directory}/labels",
    data_yaml_path=f"{dataset_directory}/data.yaml",
    train_ratio=0.7,
    valid_ratio=0.15,
    folders_export_path=f"{dataset_directory}",
    data_yaml_export_path=f"{dataset_directory}/data.yaml",
)

Additional

Let me know if you like this idea and whether you would like me to submit a PR with the initial implementation.

Are you willing to submit a PR?

SkalskiP commented 1 month ago

Hi @xaristeidou 👋🏻

To be honest, I would really prefer not to treat YOLO differently from other data formats. The Supervision API aims to provide reusable building blocks like sv.DetectionDataset.split or sv.DetectionDataset.as_yolo that you can compose together; that sounds like the expected usage of supervision.
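Composed together, the whole flow fits in a short script (an untested sketch; the paths and ratios are placeholders):

import supervision as sv

source = "/path/to/source"
export_root = "/path/to/export"

# Load the source YOLO dataset and split it twice to get train/valid/test.
dataset = sv.DetectionDataset.from_yolo(
    images_directory_path=f"{source}/images",
    annotations_directory_path=f"{source}/labels",
    data_yaml_path=f"{source}/data.yaml",
)
train_set, rest = dataset.split(split_ratio=0.7, shuffle=True)
valid_set, test_set = rest.split(split_ratio=0.5, shuffle=True)

# Export each subset into the images/labels layout expected for YOLO training.
for name, subset in [("train", train_set), ("valid", valid_set), ("test", test_set)]:
    subset.as_yolo(
        images_directory_path=f"{export_root}/{name}/images",
        annotations_directory_path=f"{export_root}/{name}/labels",
    )

# as_yolo can also write a data.yaml with the class names; Ultralytics training
# additionally expects train/val/test entries in it, which may need to be added
# separately.
train_set.as_yolo(data_yaml_path=f"{export_root}/data.yaml")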

xaristeidou commented 1 month ago

@SkalskiP That's fair; I was also thinking it might be "too much" automation. Maybe I could create a cookbook similar to 'Serialise Detections to a CSV File' and 'Serialise Detections to a JSON File', guiding the user through combining the aforementioned methods to construct a YOLO dataset easily.

SkalskiP commented 1 month ago

I think the cookbook makes a lot more sense. We also released this how-to guide last week. Maybe you could reuse some of those code snippets in your cookbook?

xaristeidou commented 1 month ago

Yeah sure!

SkalskiP commented 1 month ago

@xaristeidou should I expect cookbook PR? ;)

xaristeidou commented 1 month ago

@SkalskiP Yes, but not immediately; I don't have it ready yet. I will work on it mostly this weekend.

xaristeidou commented 1 month ago

@SkalskiP The cookbook is ready. I think we should first proceed with PR #1422 so we can test the notebook properly, because at the final stage I run model.train() and it raises an error due to an improper data.yaml format.