vectr-ucla / direct_lidar_inertial_odometry

[IEEE ICRA'23] A new lightweight LiDAR-inertial odometry algorithm with a novel coarse-to-fine approach in constructing continuous-time trajectories for precise motion correction.
MIT License
576 stars · 115 forks

Something wrong with aggressive motion data #15

Open CharlieV5 opened 1 year ago

CharlieV5 commented 1 year ago

Hi, dear author! Thank you for your work! I have tested the program with your example data and it runs very well. I am interested in the situation where a spinning lidar undergoes very aggressive motion, so I tested your program with another dataset recorded with aggressive motion in a staircase. Here it is: https://drive.google.com/drive/folders/1f-VQOORs1TA5pT-OO_7-rG0kW5F5UoGG I only tested 2022-08-30-20-33-52_0.bag. The odometry drifts severely at the very beginning. I believed something was wrong with the extrinsic parameters, so I tried several times. It turns out these parameters may be approximately as shown in the screenshot below. (screenshot from 2023-07-23 21:44:15) But it still drifts after a few seconds, when walking into a very narrow space. Should I set more accurate extrinsic parameters, or is something wrong with this aggressive motion in a narrow space? By the way, how do you define the extrinsics baselink2imu and baselink2lidar? Do you mean baselink2lidar as P_L = R*P_b2l + t? (screenshot from 2023-07-23 21:53:19)

narutojxl commented 1 year ago

I think the author's baselink2imu and baselink2lidar are frame transforms, not point transforms. In other words, baselink2imu transforms a point from the imu frame into the baselink frame. You can also search for the keyword "this->extrinsics.baselink2lidar_T" in odom.cc to find its meaning.

I also find that when pointcloud/voxelize is false, the result diverges on my own data. When it is true, the result is OK.
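To make the frame-transform reading concrete, here is a minimal sketch in plain Python with made-up extrinsic values (a 90° rotation about z and an arbitrary offset, not from any real calibration). Under this convention, the matrix named baselink2imu takes a point expressed in imu coordinates and returns its coordinates in the baselink frame:

```python
def mat_vec(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# Hypothetical extrinsics: imu frame rotated 90 degrees about z relative to
# baselink, with the imu origin at (0.1, 0.0, 0.2) in baselink coordinates.
T_baselink2imu = [
    [0.0, -1.0, 0.0, 0.1],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.2],
    [0.0,  0.0, 0.0, 1.0],
]

p_imu = (1.0, 0.0, 0.0)                       # a point on the imu's x-axis
p_baselink = mat_vec(T_baselink2imu, p_imu)
print(p_baselink)                             # (0.1, 1.0, 0.2)
```

The imu's x-axis lands along the baselink's y-axis, offset by the imu origin, which is exactly the "frame perspective" described above.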

kennyjchen commented 1 year ago

Hi @CharlieV5 -- yes, extrinsics need to be accurate for DLIO to work properly.

EyedBread commented 10 months ago

What convention is used when setting the extrinsics? Do we first apply the translational component from the baselink frame to the lidar/imu frame and then the rotational component, or the rotation first and then the translation? I get different outputs from the two approaches.

Edit: @narutojxl Also, you mention that "baselink2imu transforms a point from the imu frame into the baselink frame". Shouldn't it be the opposite, i.e. that you specify a transform from the baselink frame to the imu frame?
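The two orderings really do disagree whenever the rotation is not the identity, which is why the outputs differ. In the standard homogeneous form T = [R, t; 0, 1], the translation is applied after the rotation (p' = R*p + t). A quick sketch with arbitrary numbers (not DLIO's actual config handling) showing the mismatch:

```python
def compose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def apply(T, p):
    v = list(p) + [1.0]
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]            # 90 degrees about z
t = [1.0, 0.0, 0.0]
p = (1.0, 2.0, 3.0)

# Convention A, "rotate first, then translate": p' = R*p + t
rotate_then_translate = apply(compose(R, t), p)

# Convention B, "translate first, then rotate": p' = R*(p + t) = R*p + R*t
Rt = [sum(R[i][j] * t[j] for j in range(3)) for i in range(3)]
translate_then_rotate = apply(compose(R, Rt), p)

print(rotate_then_translate)      # (-1.0, 1.0, 3.0)
print(translate_then_rotate)      # (-2.0, 2.0, 3.0)
```

Same R and t, different results, so the convention has to be pinned down before the extrinsics can be filled in correctly.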

narutojxl commented 10 months ago

@EyedBread, use the 4*4 matrix T = [R, t; 0, 1] to transform a point [p, 1] and get the transformed point p' = R*p + t. I think the convention is rotation first, to align the two frames, and then compute the translational component. We can view the baselink2imu transformation from two perspectives. 1) If we start with the baselink frame, after a series of rotations (the rotation component) we get a frame whose three axes (x-axis, y-axis, z-axis) are respectively aligned with the imu's x-axis, y-axis, and z-axis. The imu frame's origin expressed in this rotated frame then gives the translation component. As you can see, from the frame's perspective, the direction is from baselink to imu.

2) Suppose there is a point p in the world frame; we can express it in both the imu frame and the baselink frame. In the imu frame its coordinates are p_imu, and the corresponding coordinates in the baselink frame are p_baselink. Applying the 4*4 matrix T = [R, t; 0, 1] to p_imu yields p_baselink. As you can see, from the point's perspective, the direction is from imu to baselink.

So the author's variable naming convention, baselink2imu, follows the frame perspective, not the point perspective.
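The two perspectives can be checked numerically. A minimal sketch with made-up values (R is a 90° rotation about z, t a round-number offset, neither from a real calibration): take a point expressed in the baselink frame, convert it into the imu frame by inverting the pose, and verify that T = [R, t; 0, 1] maps p_imu back to p_baselink:

```python
def mat_vec3(R, p):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))

def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

# Hypothetical imu pose relative to baselink (frame perspective)
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]           # 90 degrees about z
t = (0.5, 0.0, 0.25)             # imu origin in baselink coordinates

p_baselink = (1.0, 2.0, 3.0)     # some point, expressed in the baselink frame

# Express the same point in the imu frame: p_imu = R^T * (p_baselink - t)
d = tuple(p_baselink[i] - t[i] for i in range(3))
p_imu = mat_vec3(transpose(R), d)

# Point perspective: T = [R, t; 0, 1] maps p_imu back to p_baselink
back = tuple(mat_vec3(R, p_imu)[i] + t[i] for i in range(3))
print(back)                      # (1.0, 2.0, 3.0), equal to p_baselink
```

So one and the same matrix is "baselink-to-imu" when you think of frames and "imu-to-baselink" when you think of point coordinates, which is the whole source of the naming confusion.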