cheind / py-motmetrics

:bar_chart: Benchmark multiple object trackers (MOT) in Python

Evaluate tracker performance on multiple classes #117

Closed ragavendrams closed 4 years ago

ragavendrams commented 4 years ago

Hi, I would like to evaluate the performance of my tracker on two different classes, person and another custom class. I was wondering how to set up the ground truth for a custom dataset so that I can use py-motmetrics.

Here, class_id and visibility refer to the fields in the data format 'FrameId', 'Id', 'X', 'Y', 'Width', 'Height', 'Confidence', 'ClassId', 'Visibility', 'unused'. Thanks in advance.
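
For example (these rows are just made-up placeholder values in that format, with ClassId = 1 for person and ClassId = 2 for the custom class):

```
1,1,794.3,247.6,71.2,174.9,1,1,0.8,-1
1,2,164.5,260.1,60.0,150.0,1,2,1.0,-1
2,1,796.1,248.0,71.0,175.2,1,1,0.8,-1
```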

cheind commented 4 years ago

Hey, you could assign ClassId=1 for pedestrians and ClassId=2 for your other class. Then, if you want to compute results for pedestrians only, you could do

`gt_filtered = gt[gt['ClassId'] == 1]` (and similarly for the test file) before computing the metrics.
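
For example, a minimal end-to-end sketch (the file names gt.txt / pred.txt, the mot16 format, and the IoU threshold of 0.5 are just placeholders, not something your setup requires):

```python
import motmetrics as mm

# Load ground truth and tracker output in the MOT16 layout
# (FrameId, Id, X, Y, Width, Height, Confidence, ClassId, Visibility).
gt = mm.io.loadtxt('gt.txt', fmt='mot16')
pred = mm.io.loadtxt('pred.txt', fmt='mot16')

# Keep only the class of interest in BOTH files.
gt_filtered = gt[gt['ClassId'] == 1]
pred_filtered = pred[pred['ClassId'] == 1]

# Accumulate IoU-based matches and compute the MOTChallenge metrics.
acc = mm.utils.compare_to_groundtruth(gt_filtered, pred_filtered, 'iou', distth=0.5)
mh = mm.metrics.create()
summary = mh.compute(acc, metrics=mm.metrics.motchallenge_metrics, name='pedestrian')
print(mm.io.render_summary(summary, formatters=mh.formatters,
                           namemap=mm.io.motchallenge_metric_names))
```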

ragavendrams commented 4 years ago

This did not work for me. I tested it on a video with only one class (ClassId = 2) and got the following results: [results screenshot]

I can see in evaluateTracking.py that include_all is set to False by default, so I guess only detections with ClassId = 1 are considered. When I set all the ClassIds in my GT to 1 instead of 2, I get better results: [results screenshot]

However, this does not let me compute class-wise metrics. So I set include_all=True, set the ClassId back to 2, and checked again. I expected this to give the same results as above, because the ClassId should not matter when there is only one class. But this is what I get: [results screenshot]

In the last run, for some reason all detections are ignored and result in false negatives (352 frames in total) even though there is overlap between GT and the tracked boxes.

Any idea on how to proceed here?

Thanks in advance!

UPDATE: I guess the NaN values were due to the preprocessing in evaluateTracking.py, which considered ClassId = 2 as one of the distractors (removing all the distractors did not work for some reason). Using eval_motchallenge.py instead solved the problem.

cheind commented 4 years ago

evaluateTracking.py is intended for a particular dataset. The default app is https://github.com/cheind/py-motmetrics/blob/develop/motmetrics/apps/eval_motchallenge.py, which does not use include_all. Note that you have to filter not only the ground truth but also your predictions, e.g. along the lines of the sketch below.
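
If you want one summary row per class, a sketch along these lines should work (the file names and the class-id/name mapping are assumptions for illustration):

```python
import motmetrics as mm

gt = mm.io.loadtxt('gt.txt', fmt='mot16')
pred = mm.io.loadtxt('pred.txt', fmt='mot16')

mh = mm.metrics.create()
accs, names = [], []
for class_id, class_name in [(1, 'person'), (2, 'custom')]:
    # Filter ground truth AND predictions down to the current class.
    gt_c = gt[gt['ClassId'] == class_id]
    pred_c = pred[pred['ClassId'] == class_id]
    accs.append(mm.utils.compare_to_groundtruth(gt_c, pred_c, 'iou', distth=0.5))
    names.append(class_name)

# One metrics row per class plus an OVERALL row.
summary = mh.compute_many(accs, metrics=mm.metrics.motchallenge_metrics,
                          names=names, generate_overall=True)
print(mm.io.render_summary(summary, formatters=mh.formatters,
                           namemap=mm.io.motchallenge_metric_names))
```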

ragavendrams commented 4 years ago

Thanks for the help! Will close the issue.