timmeinhardt / trackformer

Implementation of "TrackFormer: Multi-Object Tracking with Transformers”. [Conference on Computer Vision and Pattern Recognition (CVPR), 2022]
https://arxiv.org/abs/2101.02702
Apache License 2.0

How to Prepare Custom Dataset for TrackFormer Multi-Object Tracking with COCO-Style Annotations? #130

Open · userkw2 opened this issue 3 weeks ago

userkw2 commented 3 weeks ago

Hi @timmeinhardt, I want to use TrackFormer for multi-object tracking with my own dataset. According to the instructions, I need to create a COCO-style annotation file and extend it with fields like seq_length, first_frame_image_id, and track_id. I'm unsure how to prepare my dataset with these extensions and how to use the generate_coco_from_mot.py script. Could you guide me on how to structure my dataset and generate the necessary COCO-style annotations for TrackFormer?

Maxvgrad commented 3 weeks ago

Hi @userkw2, I can provide some details in the meantime.

"I'm unsure how to prepare my dataset with these extensions and how to use the generate_coco_from_mot.py script."

First, it's important to understand the COCO dataset, a large-scale object detection dataset; you can learn more about it on the COCO website (https://cocodataset.org/). A COCO-style dataset has a specific directory structure:

|-- data
    |-- custom_dataset_converted_to_coco
    |   |-- train
    |   |   |-- *.jpg
    |   |-- val
    |   |   |-- *.jpg
    |   |-- annotations
    |   |   |-- train.json
    |   |   |-- val.json

You can search online for the format of the annotation JSON files; there are many guides on the COCO dataset format.
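
For reference, the annotation files contain at minimum three top-level lists: images, annotations, and categories. A bare-bones skeleton in Python (standard COCO detection fields; the "pedestrian" category and the file paths are just placeholders):

    import json

    coco = {
        "images": [
            # one entry per image/frame in the split
            {"id": 1, "file_name": "000001.jpg", "width": 1920, "height": 1080},
        ],
        "annotations": [
            # one entry per object box; bbox is [x, y, width, height] in pixels
            {"id": 1, "image_id": 1, "category_id": 1,
             "bbox": [100.0, 200.0, 50.0, 80.0], "area": 4000.0, "iscrowd": 0},
        ],
        "categories": [
            {"id": 1, "name": "pedestrian"},  # placeholder category
        ],
    }

    with open("data/custom_dataset_converted_to_coco/annotations/train.json", "w") as f:
        json.dump(coco, f)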

"how to use the generate_coco_from_mot.py script."

You won’t be able to use the script as it is because it’s designed to convert MOT datasets like MOT17, MOT20, and MOTS20 to the COCO format. The MOT datasets have their own structure, which likely won’t match your custom dataset.

However, the easiest solution might be to copy the script and modify it for your own dataset, since the output should be the same: a dataset in COCO format with the tracking extensions. To do this, you would need to understand how the generate_coco_from_mot.py script works.
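
As a rough sketch (not the actual script), here is what a modified converter could produce. It assumes your labels can be grouped into per-sequence, per-frame box lists, and it places seq_length and first_frame_image_id on each image entry and track_id on each annotation entry, which is how I read the fields mentioned in this issue; please double-check the exact placement against generate_coco_from_mot.py.

    import json

    def convert_sequences_to_coco(sequences, out_path):
        # `sequences` is a hypothetical input: a dict mapping a sequence name to an
        # ordered list of frames, where each frame is (file_name, width, height, boxes)
        # and each box is (track_id, x, y, w, h). Adapt it to your own label format.
        coco = {
            "images": [],
            "annotations": [],
            "categories": [{"id": 1, "name": "pedestrian"}],  # placeholder category
        }
        img_id, ann_id = 0, 0

        for seq_name, frames in sequences.items():
            first_frame_image_id = img_id  # id of this sequence's first frame
            for file_name, width, height, boxes in frames:
                coco["images"].append({
                    "id": img_id,
                    "file_name": file_name,
                    "width": width,
                    "height": height,
                    # tracking-specific extensions discussed in this issue
                    "seq_length": len(frames),
                    "first_frame_image_id": first_frame_image_id,
                })
                for track_id, x, y, w, h in boxes:
                    coco["annotations"].append({
                        "id": ann_id,
                        "image_id": img_id,
                        "category_id": 1,
                        "bbox": [x, y, w, h],
                        "area": w * h,
                        "iscrowd": 0,
                        "track_id": track_id,  # identity of the object across frames
                    })
                    ann_id += 1
                img_id += 1

        with open(out_path, "w") as f:
            json.dump(coco, f)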

This Colab notebook might be useful, as it demonstrates how to download an MOT dataset and convert it to COCO format; you can see the directory structure of the MOT dataset before and after conversion.
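
For orientation, the MOT-style input that the script reads looks roughly like this, with one line in gt.txt per box:

    |-- MOT17
        |-- train
        |   |-- MOT17-02-FRCNN
        |   |   |-- img1
        |   |   |   |-- 000001.jpg
        |   |   |-- gt
        |   |   |   |-- gt.txt
        |   |   |-- seqinfo.ini

    <frame>,<track_id>,<bb_left>,<bb_top>,<bb_width>,<bb_height>,<flag>,<class>,<visibility>

That per-sequence layout is the part a custom dataset most likely does not have in this exact shape, which is why the script needs to be adapted.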

userkw2 commented 2 weeks ago

Hi @Maxvgrad,

Thanks for the detailed explanation and for sharing the Colab notebook!

I've already converted my dataset to COCO format, but my question is about how to add the additional fields needed for multi-object tracking (seq_length, first_frame_image_id, and track_id) to my train and val JSON files.
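
For concreteness, something along these lines is what I picture (just a sketch: it assumes all images in a split belong to one sequence and are already ordered by frame, and the ann_id_to_track_id mapping is a hypothetical stand-in for wherever my identity labels actually live):

    import json

    ann_file = "data/custom_dataset_converted_to_coco/annotations/train.json"
    with open(ann_file) as f:
        coco = json.load(f)

    images = coco["images"]
    first_frame_image_id = images[0]["id"]
    for img in images:
        img["seq_length"] = len(images)
        img["first_frame_image_id"] = first_frame_image_id

    # track_id has to come from my original labels; -1 is only a placeholder
    ann_id_to_track_id = {}  # fill from the original ground truth
    for ann in coco["annotations"]:
        ann["track_id"] = ann_id_to_track_id.get(ann["id"], -1)

    with open(ann_file, "w") as f:
        json.dump(coco, f)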

Any advice on how to extend the COCO annotations with these fields would be greatly appreciated!

Thanks again for your help and for the Google Colab link!