EscVM / OIDv4_ToolKit

Download and visualize single or multiple classes from the huge Open Images v4 dataset
GNU General Public License v3.0

can't open file 'convert_annotations.py' #85

Open Juuustin opened 4 years ago

Juuustin commented 4 years ago

Hi, is there a file called 'convert_annotations.py' to convert the annotations into YOLO format? I didn't find it. Thank you :)

rondinellimorais commented 3 years ago

The files class-descriptions-boxable.csv and train-annotations-bbox.csv contain everything you need to create darknet annotations.

Here's what these files look like:

train-annotations-bbox.csv

class-descriptions-boxable.csv

So, using pandas, you can filter the dataframe by ImageID (the jpg file name) and LabelName (the class id) and get the values XMin, XMax, YMin, and YMax.
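
For example, a rough pandas sketch of that lookup might look like the following. The CSV paths, the "Apple" class, and the ImageID are placeholders you would adapt to your own download; the column names are the ones used in the Open Images CSVs.

import pandas as pd

# class-descriptions-boxable.csv has no header row: label id, human-readable name
classes = pd.read_csv("class-descriptions-boxable.csv", header=None,
                      names=["LabelName", "ClassName"])

# box annotations for the train split
boxes = pd.read_csv("train-annotations-bbox.csv")

# look up the machine label id for the class you downloaded (e.g. "Apple")
label_name = classes.loc[classes["ClassName"] == "Apple", "LabelName"].iloc[0]

# keep only the boxes of that class for one image
# (ImageID is the jpg file name without the extension)
image_id = "0000048549557964"   # placeholder ImageID
rows = boxes[(boxes["ImageID"] == image_id) & (boxes["LabelName"] == label_name)]

# XMin, XMax, YMin, YMax are already normalized to [0, 1] in Open Images
print(rows[["XMin", "XMax", "YMin", "YMax"]])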

All you need to do now is:

# values read from train-annotations-bbox.csv (already normalized to [0, 1],
# so no division by the image width/height is needed)
x_min = ...
x_max = ...
y_min = ...
y_max = ...
# index of the class in your classes.txt / obj.names file
class_name_order_index = ...

x_values = [float(x_min), float(x_max)]
y_values = [float(y_min), float(y_max)]

# YOLO expects the box center plus width/height, all normalized
center_x = (x_values[1] + x_values[0]) / 2
center_y = (y_values[1] + y_values[0]) / 2

w = x_values[1] - x_values[0]
h = y_values[1] - y_values[0]

print("{} {} {} {} {}".format(class_name_order_index, center_x, center_y, w, h))

# output
# 1 0.44781249999999995 0.775 0.45187499999999997 0.313334
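
To turn that into a darknet annotation, you write one such line per box into a .txt file with the same base name as the image. A minimal sketch, assuming the variables computed above; the helper name, the "labels" output directory, and image_id are made up for the example:

import os

def write_yolo_annotation(out_dir, image_id, lines):
    # one .txt per image, one "class cx cy w h" line per box
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, image_id + ".txt"), "w") as f:
        f.write("\n".join(lines) + "\n")

line = "{} {} {} {} {}".format(class_name_order_index, center_x, center_y, w, h)
write_yolo_annotation("labels", image_id, [line])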

Here's the complete code that I use in my projects: OIDv4+YOLOAnnotation.ipynb

I hope this helps someone.

Charikshith commented 3 years ago

Thanks man, you really saved me. Please use this annotation approach, as the convert_annotations.py file was generating values that were out of bounds for me. Please do check the values before you train.