lifelong-robotic-vision / OpenLORIS-Scene

Issue tracker of the OpenLORIS-Scene dataset and Lifelong SLAM Challenge

How to understand 'successful tracking'? #12

Closed KaninchenM closed 4 years ago

KaninchenM commented 4 years ago

Hi, I have two questions after reading Figure 2 and Figure 3 in the paper *Are We Ready for Service Robots? The OpenLORIS-Scene Datasets for Lifelong SLAM*. I think *per-sequence testing* means running the sequences through a SLAM system (such as ORB-SLAM2) one by one (for example, first office1-1, then office1-2, then office1-3, ...), while *lifelong SLAM testing* means merging several sequences into one (such as merging the office1 sequences into office-merged) and running the merged sequence. [screenshot]

My questions are:

  1. How is "successful tracking" defined? I ran the sequence tum_rgbd-fr3_walking_xyz with DynaSLAM (which is based on ORB-SLAM2), and the resulting camera trajectory was far away from the ground truth without evo's align operation. [screenshot] Without evo, I would not know how to align my result to match the ground truth as closely as possible. Does the result above mean unsuccessful tracking?
  2. Which tools did you use to evaluate the estimated trajectories? Are they based on evo, or did your team implement the evaluation yourselves? Is there any code to share? Thank you for reading. Waiting for your reply.
cedrusx commented 4 years ago

Hi Kaninchen, we align the trajectory against the ground truth with the method of Horn, as explained in Sec. V of the paper. For lifelong SLAM, the alignment was made from the trajectory of the first sequence. We use the implementation by TUM.
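For reference, Horn's closed-form alignment can be sketched as below. This is a generic SVD-based implementation of the method, not the authors' exact code; the function name `horn_align` is my own.

```python
import numpy as np

def horn_align(est, gt):
    """Find the rigid transform (R, t) minimizing sum ||R @ est_i + t - gt_i||^2
    (Horn's method). `est` and `gt` are N x 3 arrays of matched positions."""
    est_mean, gt_mean = est.mean(axis=0), gt.mean(axis=0)
    est_c, gt_c = est - est_mean, gt - gt_mean
    # SVD of the cross-covariance matrix gives the optimal rotation.
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    S = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:  # guard against a reflection
        S[2, 2] = -1.0
    R = Vt.T @ S @ U.T
    t = gt_mean - R @ est_mean
    return R, t
```

After alignment, the per-pose translational errors `||R @ est_i + t - gt_i||` give the ATE values discussed below.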

We will open-source all our Python code for evaluation and figure plotting, but it may take a month or so to go through the company process.

KaninchenM commented 4 years ago

> Hi Kaninchen, we align the trajectory against the ground truth with the method of Horn, as explained in Sec. V of the paper. For lifelong SLAM, the alignment was made from the trajectory of the first sequence. We use the implementation by TUM.
>
> We will open-source all our Python code for evaluation and figure plotting, but it may take a month or so to go through the company process.

Thank you for your reply. So it means that for per-sequence testing the whole trajectory is aligned, while for lifelong testing only the poses in the first sequence are used for alignment. I'll try it. That's so nice of your team to open-source the code. I am waiting to check my code against yours. (ノ゚▽゚)ノ

KaninchenM commented 4 years ago

> Hi Kaninchen, we align the trajectory against the ground truth with the method of Horn, as explained in Sec. V of the paper. For lifelong SLAM, the alignment was made from the trajectory of the first sequence. We use the implementation by TUM.
>
> We will open-source all our Python code for evaluation and figure plotting, but it may take a month or so to go through the company process.

How is "successful tracking" defined? I have groundtruth.txt and the model's result, e.g. CameraTrajectory.txt, both with timestamps. How can I draw the blue line? I guess I should:

  1. Interpolate the ground-truth trajectory at the timestamps of the estimated poses.
  2. Set an ATE threshold, such as 0.1 meter.
  3. Calculate the ATE of every pose and mark the poses whose ATE is below the threshold; those are the successfully estimated poses.
  4. Calculate the length of the trajectory consisting of the successfully estimated poses, and plot it relative to the ground truth's total trajectory length to get the blue line.

Am I right? If there are misunderstandings, or you have suggestions, please point them out. Waiting for your reply. Thank you.
KaninchenM commented 4 years ago

How should the RMSE of an incomplete tracking result be calculated, e.g. for ORB-SLAM2 on the market sequences? Throw away the lost poses and calculate only over the successfully tracked poses? [screenshot]

KaninchenM commented 4 years ago

> Hi Kaninchen, we align the trajectory against the ground truth with the method of Horn, as explained in Sec. V of the paper. For lifelong SLAM, the alignment was made from the trajectory of the first sequence. We use the implementation by TUM.
>
> We will open-source all our Python code for evaluation and figure plotting, but it may take a month or so to go through the company process.

Have you open-sourced your evaluation code, especially the calculation of CR?

KaninchenM commented 4 years ago

> Hi Kaninchen, we align the trajectory against the ground truth with the method of Horn, as explained in Sec. V of the paper. For lifelong SLAM, the alignment was made from the trajectory of the first sequence. We use the implementation by TUM. We will open-source all our Python code for evaluation and figure plotting, but it may take a month or so to go through the company process.

> Have you open-sourced your evaluation code, especially the calculation of CR?

And how are the ATE threshold ε and the AOE threshold φ set? For example, why is ε = 1 m for the office data but ε = 5 m for the market sequences? I think 5 m is too large a threshold for declaring a pose correct.

cedrusx commented 4 years ago

> Hi Kaninchen, we align the trajectory against the ground truth with the method of Horn, as explained in Sec. V of the paper. For lifelong SLAM, the alignment was made from the trajectory of the first sequence. We use the implementation by TUM. We will open-source all our Python code for evaluation and figure plotting, but it may take a month or so to go through the company process.

> How is "successful tracking" defined? I have groundtruth.txt and the model's result, e.g. CameraTrajectory.txt, both with timestamps. How can I draw the blue line? I guess I should:
>
>   1. Interpolate the ground-truth trajectory at the timestamps of the estimated poses.
>   2. Set an ATE threshold, such as 0.1 meter.
>   3. Calculate the ATE of every pose and mark the poses whose ATE is below the threshold; those are the successfully estimated poses.
>   4. Calculate the length of the trajectory consisting of the successfully estimated poses, and plot it relative to the ground truth's total trajectory length to get the blue line.
>
> Am I right? If there are misunderstandings, or you have suggestions, please point them out. Waiting for your reply. Thank you.

Yes, those steps are right. We did it in the same way.
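For concreteness, the four confirmed steps could be sketched as below. The function name and the 0.1 m threshold are placeholders, not the authors' code, and the trajectories are assumed to be already aligned (e.g. with Horn's method):

```python
import numpy as np

def correct_length_ratio(est_t, est_xyz, gt_t, gt_xyz, ate_thresh=0.1):
    """Fraction of the trajectory length tracked with per-pose error
    below `ate_thresh`. Timestamps in seconds, positions as N x 3 arrays."""
    # 1. Interpolate ground truth at the estimate timestamps (per axis).
    gt_interp = np.stack(
        [np.interp(est_t, gt_t, gt_xyz[:, k]) for k in range(3)], axis=1)
    # 2-3. Per-pose translational error against the ATE threshold.
    err = np.linalg.norm(est_xyz - gt_interp, axis=1)
    correct = err < ate_thresh
    # 4. Length of segments whose both endpoints are correct,
    #    relative to the total ground-truth trajectory length.
    seg = np.linalg.norm(np.diff(gt_interp, axis=0), axis=1)
    correct_len = seg[correct[:-1] & correct[1:]].sum()
    total_len = np.linalg.norm(np.diff(gt_xyz, axis=0), axis=1).sum()
    return correct_len / total_len
```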

cedrusx commented 4 years ago

> How should the RMSE of an incomplete tracking result be calculated, e.g. for ORB-SLAM2 on the market sequences? Throw away the lost poses and calculate only over the successfully tracked poses?

We made the assumption that each estimate is valid for a user-defined period of time, such as 1 second. If there is no successful pose estimate for longer than this period, the remaining time is considered as failed. The Correct Rate (CR) is calculated as the ratio of the total time with correct estimates to the total time of the ground truth.
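A minimal sketch of this CR computation, assuming each correct estimate covers the time until the next one, capped at the validity period (the exact bookkeeping in the official scripts may differ):

```python
import numpy as np

def correct_rate(correct_t, gt_start, gt_end, validity=1.0):
    """CR = time covered by correct estimates / total ground-truth time.
    `correct_t`: timestamps (s) of the pose estimates judged correct;
    each covers up to `validity` seconds, or until the next estimate."""
    correct_t = np.sort(np.asarray(correct_t, dtype=float))
    if correct_t.size == 0:
        return 0.0
    # Gap from each correct estimate to the next one (or to the end).
    gaps = np.diff(np.append(correct_t, gt_end))
    covered = np.minimum(gaps, validity).sum()
    return covered / (gt_end - gt_start)
```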

cedrusx commented 4 years ago

> Hi Kaninchen, we align the trajectory against the ground truth with the method of Horn, as explained in Sec. V of the paper. For lifelong SLAM, the alignment was made from the trajectory of the first sequence. We use the implementation by TUM. We will open-source all our Python code for evaluation and figure plotting, but it may take a month or so to go through the company process.

> Have you open-sourced your evaluation code, especially the calculation of CR?

> And how are the ATE threshold ε and the AOE threshold φ set? For example, why is ε = 1 m for the office data but ε = 5 m for the market sequences? I think 5 m is too large a threshold for declaring a pose correct.

These parameters were set empirically. A proper threshold depends not only on the scene area, but also on the accuracy of the algorithm, or the expected accuracy. In our experiments, we found that because of relatively large drifts, successful tracking would be categorized as incorrect if we set the threshold too small. Instead, we want all successful tracking to be considered correct, except for the mis-matched or mis-aligned cases. The accuracy should be measured by other metrics (ATE etc.) rather than CR.