neuroinformatics-unit / movement

Python tools for analysing body movements across space and time
http://movement.neuroinformatics.dev
BSD 3-Clause "New" or "Revised" License

Support reading trajectories from tracked bounding boxes in VGG annotator format #167

Open sfmig opened 2 months ago

sfmig commented 2 months ago

Is your feature request related to a problem? Please describe. We use off-the-shelf multi-object detectors to detect and track animals in some of our work. It would be nice if the output of these trackers could be analysed with movement.

Describe the solution you'd like Read detection-and-tracking results saved in the VGG Image Annotator (VIA) format and load them as trajectories of the bounding boxes' centres.

Describe alternatives you've considered We use VGG annotator and follow its format for outputting tracks. However, there may be more widely used formats, such as the MOT Challenge format or KITTI. Most seem to be CSVs with specific headers, so we may want to make our solution flexible enough to load a variety of headers.

We could also consider reading data in COCO format, although strictly speaking COCO does not keep track of the objects'/animals' identities. It could be used if a single animal or object is tracked, or if we assume an additional attribute is added to the annotations to keep track of their identity.
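To illustrate the COCO case: a COCO annotation carries a `bbox` as `[x_min, y_min, width, height]` but no identity field, so the sketch below assumes a hypothetical extra `track_id` attribute on each annotation (falling back to a single identity when it is absent, i.e. the single-animal case).

```python
def coco_to_centres(coco_dict, track_key="track_id"):
    """Extract bounding-box centre trajectories from a COCO-style dict.

    `track_key` is a hypothetical extra attribute on each annotation,
    since the COCO format itself has no notion of object identity;
    annotations without it default to identity 0 (single-animal case).
    """
    id_to_file = {im["id"]: im["file_name"] for im in coco_dict["images"]}
    rows = []
    for ann in coco_dict["annotations"]:
        # COCO bboxes are [x_min, y_min, width, height].
        x, y, w, h = ann["bbox"]
        rows.append(
            {
                "filename": id_to_file[ann["image_id"]],
                "track_id": ann.get(track_key, 0),
                "x_centre": x + w / 2,
                "y_centre": y + h / 2,
            }
        )
    return rows
```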
