HITSZ-NRSL / Dynamic-VINS

[RA-L 2022] RGB-D Inertial Odometry for a Resource-restricted Robot in Dynamic Environments
310 stars · 40 forks

Output of Estimated Trajectory #6

Closed Kyle-Xu001 closed 1 year ago

Kyle-Xu001 commented 1 year ago

Hi there. I want to evaluate the estimated pose against the ground-truth pose over an entire trajectory. Is there already a tool or function available to output or save the pose results?

jianhengLiu commented 1 year ago

The evaluation tools used in the paper are available at https://github.com/lifelong-robotic-vision/openloris-scene-tools

Kyle-Xu001 commented 1 year ago

Hi there. I am evaluating the error between the estimated and ground-truth poses. In our case, initialization takes place while the robot is moving, and the robot starts from a random position in the world frame. So I have to find the transformation between the real start position $[x_0^w, y_0^w, z_0^w, ...]$ and the estimated start position in the map frame, $[x_0^m, y_0^m, z_0^m, ...] = [0, 0, 0, ...]$.

However, I found that the first published estimated pose was not $[0, 0, 0]$ but carried a small offset. I assume this first estimated pose is produced from the IMU messages processed during initialization. Which approach would you suggest for finding a compensating transformation between the estimated pose and the ground-truth pose for evaluation?

One option is to use the first estimated pose and align it with the ground-truth pose at the same timestamp. In this way we neglect the error accumulated during initialization, which means computing the transform between $[x_n^w, y_n^w, z_n^w, ...]$ and $[x_n^m, y_n^m, z_n^m, ...]$.

Another option is to find the timestamp that corresponds to the real start position $[x_0^w, y_0^w, z_0^w, ...]$. However, we cannot be sure which timestamp that is; it should lie somewhere between 0.0 and the time when initialization finishes. Do you have any idea where to find this timestamp?
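Either way, once a matching pose pair is chosen, the compensation boils down to composing the GT pose with the inverse of the corresponding estimated pose. A minimal sketch with plain 4x4 homogeneous matrices (helper names are hypothetical, not from the Dynamic-VINS code):

```python
# Sketch: given the estimated pose T_m (map frame) and the ground-truth pose
# T_w (world frame) at the same timestamp, the compensating transform is
# T_wm = T_w * T_m^{-1}; applying T_wm to every estimated pose expresses the
# whole estimated trajectory in the world frame.

def mat_mul(A, B):
    # 4x4 homogeneous-matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def se3_inverse(T):
    # Invert a homogeneous transform: R -> R^T, t -> -R^T t.
    R = [row[:3] for row in T[:3]]
    t = [T[i][3] for i in range(3)]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    ti = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Rt[0] + [ti[0]], Rt[1] + [ti[1]], Rt[2] + [ti[2]],
            [0.0, 0.0, 0.0, 1.0]]

def alignment_transform(T_w, T_m):
    """T_wm such that mat_mul(T_wm, T_m) == T_w."""
    return mat_mul(T_w, se3_inverse(T_m))
```

The open question in the thread is only *which* timestamp to pick the pair at, not how to apply the transform.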

Thank you for your time.

jianhengLiu commented 1 year ago

In my view, if you just want a rough evaluation, both should be okay; the actual aligned timestamp, however, is difficult to obtain. If you are aiming for a rigorous evaluation, I recommend aligning the trajectories in a post-processing step with a tool such as https://github.com/MichaelGrupp/evo.

eliabntt commented 1 year ago

Hi @jianhengLiu sorry if I jump in but I'm also interested in this.

I think there is a fundamental difference between the two methods. In one case we are saying "okay, we neglect the error produced by the first output" (which may be propagated throughout the whole experiment); in the other case you are saying that we cannot know the exact starting point.

It sounds a bit fuzzy to me, so let's try to clarify.

Let's say I have a ROS bag that starts at time 0 from P0. I play the ROS bag along with your algorithm.

The first pose we get as output from Dynamic-VINS, after some failures during initialization, arrives at time t = 2 s. This pose is already positioned somewhere other than (0, 0, 0); it is already the result of an estimation step (if I understood your code correctly). Let's call the GT position here P2.

Now, in those two seconds my robot has moved.

We have three options:

1. consider my first pose P0 as defining the initial transformation between the two trajectories;
2. consider P2, i.e. start the evaluation aligned with the moment you output the first estimate;
3. consider P1, which should be the robot's position at the time of the first measurement used by Dynamic-VINS to produce the first estimated odometry pose.

I sense that the most convenient is #2 while the most correct is #3.

For #3 we need the first timestamp used during initialization, i.e. the first of the readings consumed by the odometry estimator. This timestamp should be `double t_0 = Headers[0];` if I'm not mistaken. However, you said it's difficult to get, so I'm wondering what I am missing here. Once I have that timestamp, I can search for the GT pose at that point in time (or the one immediately before it), transform the resulting trajectory (while prepending a fake pose at 0,0,0,0,0,0), and use the OpenLORIS tools to do the evaluation. Am I wrong?

Since we're talking about ROS bags, once we have that timestamp we can get the GT pose from the GT bag and transform everything accordingly before running the evaluation.
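The "GT pose at, or immediately before, a given timestamp" lookup above can be sketched as follows (hypothetical data layout: the GT trajectory exported from the bag as a time-sorted list of `(timestamp, pose)` tuples):

```python
import bisect

def gt_pose_at(gt, t_query):
    """Return the GT sample at, or immediately before, t_query.

    gt: list of (timestamp, pose) tuples sorted by timestamp.
    """
    times = [sample[0] for sample in gt]
    # bisect_right finds the insertion point after any sample equal to t_query,
    # so gt[i - 1] is the sample at or just before the query time.
    i = bisect.bisect_right(times, t_query)
    if i == 0:
        raise ValueError("t_query precedes the first GT sample")
    return gt[i - 1]
```

The GT pose returned here would then feed the static transform between the two trajectories.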

eliabntt commented 1 year ago

I also looked into the evo package, which I was unaware of. It does have an align-origin option. Were you suggesting using that? In that case, we still need the timestamp at which to align the two trajectories.

jianhengLiu commented 1 year ago

Hi @eliabntt, I am sorry that I didn't clarify what I meant by 'actual aligned timestamp'. In my own experiments, I obtained the ground truth from motion capture with ROS timestamps (via vrpn_client_ros), which are not the exact times at which the GT poses were captured; a large delay has sometimes troubled me in other applications. Evo's temporal alignment (https://github.com/MichaelGrupp/evo/wiki/evo_traj#temporal-alignment) is aimed at exactly this problem: trajectory evaluation tools search for the best match between the reference and the other trajectories (transformation, timestamp, scale). As a result, the aligned output might have a different timestamp at the start point, which could differ slightly from what you intend to evaluate. I do think the methods you mentioned are right, and people normally use a recognized evaluation tool to compare fairly, but the evaluation method should depend on the metrics you actually need.
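For illustration, the temporal part of such an alignment can be sketched as a search over a candidate time offset that minimizes the mean position error between matched samples (a toy brute-force version, not evo's actual implementation; trajectories are assumed to be `(t, (x, y, z))` lists):

```python
def interp_position(traj, t):
    """Linearly interpolate a time-sorted (t, (x, y, z)) trajectory at time t,
    clamping at the ends."""
    if t <= traj[0][0]:
        return traj[0][1]
    if t >= traj[-1][0]:
        return traj[-1][1]
    for (t0, p0), (t1, p1) in zip(traj, traj[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return tuple(x0 + a * (x1 - x0) for x0, x1 in zip(p0, p1))

def best_time_offset(ref, est, offsets):
    """Return the offset dt (applied to est timestamps) with the lowest
    mean Euclidean position error against the reference trajectory."""
    def cost(dt):
        errs = [sum((a - b) ** 2
                    for a, b in zip(interp_position(ref, t + dt), p)) ** 0.5
                for t, p in est]
        return sum(errs) / len(errs)
    return min(offsets, key=cost)
```

This is why the aligned output can end up with a slightly shifted start timestamp: the offset that best matches the whole trajectory need not be zero.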

eliabntt commented 1 year ago

Ok, so, just to clarify: your first odometry output reference, i.e. (0,0,0), starts at the header[0] stamp, right? I am using the TUM RGB-D alignment, which is fairly similar. We just want to understand our starting location here :)

From my understanding, the initialization takes into account a buffer from 0 to the buffer length. Thus, the ground truth at time == header[0] should be the starting location to use as the static transform so that the two trajectories reference the same starting point.

Right? Or are you using something else to output the first odometry message?


jianhengLiu commented 1 year ago

Right! Headers[0] corresponds to the [0,0,0] time.

eliabntt commented 1 year ago

And if the initialization fails, what happens to header[0]?


jianhengLiu commented 1 year ago

Keyframe 0's state will be marginalized and header[0] will be replaced by header[1] in the system. However, the replaced header[0] still represents the [0,0,0] time, because its corresponding pose is also marginalized into the prior.
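As an illustration of that bookkeeping (a toy sliding window, not the actual Dynamic-VINS code): the origin time can be remembered separately, so sliding the window and dropping the oldest header does not lose the [0,0,0] reference.

```python
class SlidingWindow:
    """Toy fixed-size window of frame headers (timestamps)."""

    def __init__(self, size):
        self.size = size
        self.headers = []
        self.origin_time = None  # time of the [0,0,0] pose, kept across marginalization

    def push(self, stamp):
        if self.origin_time is None:
            self.origin_time = stamp  # the first header defines the origin time
        self.headers.append(stamp)
        if len(self.headers) > self.size:
            self.headers.pop(0)  # marginalize the oldest keyframe
```

So for alignment purposes, the timestamp to pair with the ground truth is the one recorded at the very first push, even after headers[0] has been replaced.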

eliabntt commented 1 year ago

Perfect! Thanks!
