zenseact / zod

Software Development Kit for the Zenseact Open Dataset (ZOD)
https://zod.zenseact.com
MIT License

Converting to Yolo Format #42

Closed TheBigCodeman closed 3 months ago

TheBigCodeman commented 4 months ago

I am new to coding and am looking to train a YOLOv7/8 traffic sign and light detection and classification model. I noticed the blur and dnat images are raw and unannotated, and that the annotations are in .json format. What is the best way to convert everything to YOLO format, please? Should this be done manually, with the images relabelled and exported using Roboflow, for example? That seems quite time-consuming, so what is recommended: annotate maybe 10% of the images and use that to train on the rest? Thank you, and I apologise if I am missing the obvious.

wljungbergh commented 4 months ago

Hi,

We have some utility scripts that might be helpful to you. First, there is zod/cli/extract_tsr_patches.py, which can crop out all traffic signs in every image and place each crop in a folder named after the traffic-sign class. This is the same layout as many other classification datasets (including ImageNet).
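Once the patches are extracted, the one-folder-per-class layout can be consumed directly by most classification tooling. A minimal sketch of walking such a tree into (path, label) pairs — the folder and file names here are illustrative, not from the SDK:

```python
from pathlib import Path


def list_patch_dataset(root):
    """Build (image_path, class_name) pairs from an ImageNet-style layout
    where each subfolder of `root` is one traffic-sign class.

    root/
      stop_sign/  a.png, b.png, ...
      yield_sign/ c.png, ...
    """
    root = Path(root)
    samples = []
    for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for img in sorted(class_dir.glob("*.png")):
            samples.append((img, class_dir.name))
    return samples
```

Libraries such as torchvision expect exactly this structure (e.g. `ImageFolder`), so no further conversion is needed for the classification half of the task.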

Second, you can use zod/cli/generate_coco_json.py to generate COCO-style annotations for your detection pipeline. Currently, however, this only covers objects (cars, pedestrians, bicycles, etc.), but it should be fairly easy to modify it to output annotations for the different traffic signs instead.
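Going from those COCO-style annotations to YOLO labels is then a small conversion step: COCO stores boxes as absolute `[x_min, y_min, width, height]`, while YOLO expects class-id plus center coordinates and size, all normalized to [0, 1]. A minimal sketch, assuming the standard COCO bbox convention:

```python
def coco_to_yolo(bbox, img_w, img_h):
    """Convert a COCO bbox [x_min, y_min, width, height] in absolute
    pixels to a YOLO (x_center, y_center, width, height) tuple,
    normalized by the image dimensions."""
    x, y, w, h = bbox
    return ((x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h)


def yolo_label_line(class_id, bbox, img_w, img_h):
    """Format one line of a YOLO .txt label file:
    '<class_id> <cx> <cy> <w> <h>' with normalized coordinates."""
    cx, cy, w, h = coco_to_yolo(bbox, img_w, img_h)
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"
```

One such .txt file per image (same stem as the image file) is what the YOLOv7/v8 training pipelines read, so iterating over the generated COCO JSON and emitting these lines should get you the rest of the way.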

Hope this gives you some inspiration on where to start.

wljungbergh commented 3 months ago

Feel free to reopen this issue if you have any further questions.