spencer-project / spencer_people_tracking

Multi-modal ROS-based people detection and tracking framework for mobile robots developed within the context of the EU FP7 project SPENCER.
http://www.spencer.eu/

OSPA Metric #67

Open louis-gu opened 5 years ago

louis-gu commented 5 years ago

Hi! I want to thank you for the work you put into coding all of this, but I am missing something with your OSPA metric. I read the original paper: http://www.dominic.schuhmacher.name/papers/ospa.pdf If I understood it correctly, the metric is computed from an m*n distance matrix. Why do you compute an n*n distance matrix with a "c=10" padding? Even in the original implementation you cited (https://github.com/cheesinglee/cuda-PHDSLAM/blob/master/python/ospa.py), they use an m*n matrix without padding. In the end, with p=1, your code behaves as if you were adding (n-m)*c/n to your OSPA metric, even though missed detections and false alarms are already taken into account by the second term of the OSPA metric. Could you please either enlighten me or correct this behavior if I am not mistaken?
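For concreteness, here is a minimal sketch of how I read eq. (3) of the paper, computed directly on a rectangular m*n matrix. It uses scipy's linear_sum_assignment as the assignment solver and is only an illustration of the definition, not your code:

```python
# Minimal sketch of eq. (3) of the OSPA paper on a rectangular m*n matrix,
# using scipy's linear_sum_assignment (which accepts non-square inputs).
# Illustration of how I read the definition, not the repository's code.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def ospa(X, Y, c=10.0, p=1.0):
    """OSPA distance between point sets X (m x dim) and Y (n x dim) as arrays."""
    m, n = len(X), len(Y)
    if m > n:                      # the metric is symmetric, so enforce m <= n
        X, Y, m, n = Y, X, n, m
    if n == 0:
        return 0.0
    D = np.minimum(cdist(X, Y), c) ** p        # cut-off distances, raised to p
    rows, cols = linear_sum_assignment(D)      # optimal assignment on m x n
    total_loc = D[rows, cols].sum()            # localization term (m pairs)
    total_card = (n - m) * c ** p              # cardinality term, added once
    return ((total_loc + total_card) / n) ** (1.0 / p)
```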

Once again, I thank you deeply for the effort you put into your code.

Best regards,

Louis

tlind commented 4 years ago

Hi, first of all sorry for the late response! Without having re-read the OSPA paper and just by looking at the code, I think the reason for creating a square cost matrix is that, by default, many solvers for the Hungarian problem only support square input matrices. It seems that pymunkres in its latest version also supports rectangular cost matrices; in that case, according to their documentation, one should pad it with 0 values -- instead, we use MAX_COST.

However, I believe that does not really make a difference: We will get a solution with at most min(m,n) assignments, and since our padding is MAX_COST, those MAX_COST matrix elements should not end up in any assignment (as there should be a cheaper solution).

Remember that the following loop only iterates over the resulting assignments: https://github.com/spencer-project/spencer_people_tracking/blob/bcde8b78ea07f65920966a4d659ba8ef61aed9bc/tracking/people/spencer_tracking_metrics/src/spencer_tracking_metrics/ospa/ospa.py#L135
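Roughly, the padding-and-loop approach looks like this (a simplified sketch assuming the munkres package and made-up distance values, not our actual code):

```python
# Rough sketch of the padding-and-loop approach described above, assuming
# the `munkres` package and made-up distance values (not our actual code).
from munkres import Munkres

MAX_COST = 10.0                      # plays the role of the cut-off c (p = 1)

dist = [[1.0, 9.0, 2.0],             # m = 2 tracks vs. n = 3 ground-truth objects
        [3.0, 1.5, 8.0]]
m, n = len(dist), len(dist[0])

# pad with (n - m) rows of MAX_COST so that the cost matrix becomes square
padded = [row[:] for row in dist] + [[MAX_COST] * n for _ in range(n - m)]

# iterate only over the (row, column) pairs returned by the solver
total_loc = 0.0
for row, col in Munkres().compute(padded):
    total_loc += padded[row][col]
```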

I believe the OSPA paper also recommends using a square matrix in practice. See p. 5, "C. Computation of the OSPA metric":

We use the distance matrix D = (D_{i,j})_{1≤i,j≤n}, where D_{i,j} = d^(c)(x_i, y_j)^p if 1 ≤ i ≤ m and 1 ≤ j ≤ n, and D_{i,j} = c^p otherwise.
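In LaTeX, my reading of that definition is:

```latex
D_{i,j} =
\begin{cases}
  d^{(c)}(x_i, y_j)^{p} & \text{if } 1 \le i \le m \text{ and } 1 \le j \le n,\\
  c^{p}                 & \text{otherwise,}
\end{cases}
\qquad 1 \le i, j \le n
```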

I think that's exactly what we are doing.

louis-gu commented 4 years ago

Hello! Thank you for your reply.

I get your point. After "re-reading" the parts you highlighted in your code and in the article, I still believe there is an issue. In particular, when you say:

However, I believe that does not really make a difference: We will get a solution with at most min(m,n) assignments, and since our padding is MAX_COST, those MAX_COST matrix elements should not end up in any assignment (as there should be a cheaper solution).

If m < n, why would you get only min(m,n) assignments from a MAX_COST pre-filled n*n matrix? The Munkres algorithm returns the n pairs of indices that lead to the lowest total assignment cost: with a MAX_COST pre-filled n*n matrix, at least (n-m) of those pairs will point to MAX_COST values.
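To make that explicit in the paper's notation (my own transcription): each of the n-m dummy rows costs c^p regardless of which column it is assigned to, so the cost of the optimal assignment over the padded n*n matrix decomposes as

```latex
\min_{\sigma \in \Pi_n} \sum_{i=1}^{n} D_{i,\sigma(i)}
  = \min_{\pi \in \Pi_n} \sum_{i=1}^{m} d^{(c)}\bigl(x_i, y_{\pi(i)}\bigr)^{p}
  + (n - m)\, c^{p}
```

where \Pi_n denotes the permutations of {1, ..., n}. The second term is exactly the cardinality penalty of eq. (3).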

To put it another way: in the original paper they do pad the matrix to n*n with MAX_COST values, but only to avoid writing the term (n-m)*c^p explicitly in the equation of the OSPA metric (eq. (3), p. 4), as they point out in III.C, p. 5:

We use the distance matrix D = (D_{i,j})_{1≤i,j≤n}, where D_{i,j} = d^(c)(x_i, y_j)^p if 1 ≤ i ≤ m and 1 ≤ j ≤ n, and D_{i,j} = c^p otherwise. This corresponds to the introduction of n−m "dummy" points at distance ≥ c from any point in Y that was described earlier.

So with the MAX_COST padding, total_loc at line 160 of your code already incorporates (n-m)*self.c^self.p; you don't need to add it again: https://github.com/spencer-project/spencer_people_tracking/blob/bcde8b78ea07f65920966a4d659ba8ef61aed9bc/tracking/people/spencer_tracking_metrics/src/spencer_tracking_metrics/ospa/ospa.py#L160
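Here is a toy check of what I mean, reusing the numbers from the sketch above (again only an illustration with the munkres package, not your code):

```python
# With the MAX_COST padding, the dummy-row cost is already inside total_loc,
# so adding (n - m) * c**p again inflates the result (toy numbers, p = 1).
from munkres import Munkres

c, p = 10.0, 1.0
dist = [[1.0, 9.0, 2.0],
        [3.0, 1.5, 8.0]]                         # m = 2, n = 3
m, n = len(dist), len(dist[0])

padded = [row[:] for row in dist] + [[c ** p] * n for _ in range(n - m)]
total_loc = sum(padded[r][col] for r, col in Munkres().compute(padded))

ospa_once = (total_loc / n) ** (1.0 / p)                          # 12.5 / 3 ~= 4.17
ospa_twice = ((total_loc + (n - m) * c ** p) / n) ** (1.0 / p)    # 22.5 / 3  = 7.5
print(ospa_once, ospa_twice)
```

If I read eq. (3) correctly, the first value is the intended OSPA distance for this toy example, and the second is what the extra addition produces.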