cheind / py-motmetrics

:bar_chart: Benchmark multiple object trackers (MOT) in Python

Why is my MOTP greater than 1? #54

Closed heethesh closed 4 years ago

heethesh commented 4 years ago
         IDF1   IDP   IDR  Rcll  Prcn GT MT PT ML  FP FN IDs  FM  MOTA   MOTP
Overall 67.5% 63.8% 71.7% 91.2% 81.1%  4  4  0  0 198 82  11  25 68.8% 19.907

These are the results I'm getting on my custom dataset. I expected MOTP to be in the range 0-1, but as you can see it is 19.907. What could be the reason for this? I'm computing my cost matrix as follows:

import motmetrics as mot

# squared Euclidean distances between ground truth and detections,
# gated at max_distance (note: max_d2 expects a *squared* distance threshold)
cost_matrix = mot.distances.norm2squared_matrix(ground_truth, detections, max_d2=max_distance)
acc.update(gt_labels, tracks.keys(), cost_matrix)

I am working on a 2D RADAR tracker with point targets, not bounding boxes on images. Could this be an issue with IoU assumptions made when computing the cost matrix?

heethesh commented 4 years ago

@cheind any suggestions?

eduardovelludo commented 4 years ago

From the article "Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics":

The multiple object tracking precision (MOTP) is the total error in estimated position for matched object-hypothesis pairs over all frames, averaged by the total number of matches made. It shows the ability of the tracker to estimate precise object positions, independent of its skill at recognizing object configurations, keeping consistent trajectories, and so forth.
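If I remember the paper's notation correctly, that is

MOTP = \frac{\sum_{i,t} d_t^i}{\sum_t c_t}

where d_t^i is the distance between object i and its matched hypothesis in frame t, and c_t is the number of matches in frame t. Nothing in this definition forces MOTP into [0, 1]; its range is inherited from whatever distance measure you use.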

So the MOTP value depends on your error metric. If you use norm2squared_matrix, the result is the average squared Euclidean position error over matched pairs, which can be any non-negative number. If you use iou_matrix instead, the result is the average IoU distance (1 - IoU), which lies between 0 and 1.
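To make the difference concrete, here is a minimal sketch (the IDs, points, and boxes are made up; only the motmetrics calls are real):

import numpy as np
import motmetrics as mm

# point targets with a squared-Euclidean cost: MOTP comes out in squared distance units
acc_pts = mm.MOTAccumulator(auto_id=True)
gt_pts = np.array([[0.0, 0.0], [10.0, 10.0]])     # ground-truth positions (made up)
hyp_pts = np.array([[0.5, 0.5], [10.0, 11.0]])    # tracker hypotheses (made up)
d_pts = mm.distances.norm2squared_matrix(gt_pts, hyp_pts, max_d2=25.0)
acc_pts.update(['a', 'b'], ['t1', 't2'], d_pts)

# axis-aligned boxes with an IoU cost: MOTP is an average IoU distance in [0, 1]
acc_box = mm.MOTAccumulator(auto_id=True)
gt_box = np.array([[0, 0, 10, 10], [20, 20, 10, 10]])    # x, y, w, h (made up)
hyp_box = np.array([[1, 1, 10, 10], [20, 21, 10, 10]])
d_box = mm.distances.iou_matrix(gt_box, hyp_box, max_iou=0.5)
acc_box.update(['a', 'b'], ['t1', 't2'], d_box)

mh = mm.metrics.create()
print(mh.compute_many([acc_pts, acc_box], metrics=['motp'], names=['points', 'boxes']))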

heethesh commented 4 years ago

Got it. I'm considering creating a rectangular area around the detections and ground truth so that I can use iou_matrix, along the lines of the sketch below.
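Something like this, where the box half-size is an arbitrary value I'd still have to tune for my sensor (ground_truth and detections are my (N, 2) point arrays from before):

import numpy as np
import motmetrics as mot

def points_to_boxes(points, half=1.0):
    # wrap each 2D point (cx, cy) in an axis-aligned box of side 2*half,
    # in the [x, y, width, height] format that iou_matrix expects
    points = np.asarray(points, dtype=float)
    top_left = points - half
    sizes = np.full_like(points, 2.0 * half)
    return np.hstack([top_left, sizes])

gt_boxes = points_to_boxes(ground_truth)
det_boxes = points_to_boxes(detections)
cost_matrix = mot.distances.iou_matrix(gt_boxes, det_boxes, max_iou=0.5)  # distances in [0, 1] or NaN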