RobustFieldAutonomyLab / LeGO-LOAM

LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain
BSD 3-Clause "New" or "Revised" License

When publishing TF, how to understand the quaternion transformation? #143

Closed · heroacool closed this issue 4 years ago

heroacool commented 4 years ago

[screenshot: the quaternion conversion code in question]

Hi @TixiaoShan, when reading the code, I'm confused by the quaternion transformation. There seems to be no obvious geometric meaning in converting a quaternion from (qx, qy, qz, qw) to (-qy, -qz, qx, qw). Would you please explain this operation? Thanks!

TixiaoShan commented 4 years ago

Ji Zhang likely converted odometry from the lidar frame to the camera frame for his later paper (V-LOAM). I think this is the reason behind this. Feel free to convert them back and publish odometry in the lidar frame for your own needs.

narutojxl commented 4 years ago

Hi @TixiaoShan, thanks for your work! Based on your guess, I ran the test below, but none of the results is consistent with Ji Zhang's. What do you think? Looking forward to your reply, thanks!

```cpp
guess_test(transformSum[0], transformSum[1], transformSum[2]);

void guess_test(float& sum0, float& sum1, float& sum2) {
    geometry_msgs::Quaternion q_camera_laser_msg, q_laserT_msg, result, geoQuat;
    tf::Quaternion q_camera_laser, q_laser_camera, q_laser_t;

    // camera_frame: X-Y-Z (right-down-front)
    // laser_frame:  X-Y-Z (left-up-front)

    {                                                                             // camera ---> laser
        q_camera_laser_msg = tf::createQuaternionMsgFromRollPitchYaw(0, 0, M_PI); // Rz(pi)Ry(0)Rx(0)
        tf::quaternionMsgToTF(q_camera_laser_msg, q_camera_laser);

        // laser_0 ---> laser_t (laser_t in laser_0 pose)
        q_laserT_msg = tf::createQuaternionMsgFromRollPitchYaw(sum0, sum1, sum2); // Rz(sum2)Ry(sum1)Rx(sum0)
        tf::quaternionMsgToTF(q_laserT_msg, q_laser_t);

        // camera_0 ---> laser_t (laser_t in camera_0 pose)
        q_camera_laser *= q_laser_t;
        tf::quaternionTFToMsg(q_camera_laser, result);
        std::cout << "We guess1: camera_0 ---> laser_t\n("
                  << result.x << " " << result.y << " " << result.z << " " << result.w << ")\n";

        // laser ---> camera
        q_camera_laser_msg = tf::createQuaternionMsgFromRollPitchYaw(0, 0, -M_PI);
        tf::quaternionMsgToTF(q_camera_laser_msg, q_laser_camera);

        // camera_0 ---> camera_t
        q_camera_laser *= q_laser_camera;
        tf::quaternionTFToMsg(q_camera_laser, result);
        std::cout << "We guess2: camera_0 ---> camera_t\n("
                  << result.x << " " << result.y << " " << result.z << " " << result.w << ")\n";

        // laser_0 ---> camera_t
        q_laser_camera *= q_camera_laser;
        tf::quaternionTFToMsg(q_laser_camera, result);
        std::cout << "We guess3: laser_0 ---> camera_t\n("
                  << result.x << " " << result.y << " " << result.z << " " << result.w << ")\n";
    }

    geoQuat = tf::createQuaternionMsgFromRollPitchYaw(sum2, -sum0, -sum1);
    std::cout << "Zhang Ji:\n("
              << -geoQuat.y << " " << -geoQuat.z << " " << geoQuat.x << " " << geoQuat.w << ")\n\n\n";
}
```

The result:

```
We guess1: camera_0 ---> laser_t (0.733841 -0.0905687 0.672771 0.0255742)
We guess2: camera_0 ---> camera_t (0.0905687 0.733841 -0.0255742 0.672771)
We guess3: laser_0 ---> camera_t (0.733841 -0.0905687 -0.672771 -0.0255742)
Zhang Ji: (0.517963 -0.527673 -0.0255742 0.672771)

We guess1: camera_0 ---> laser_t (0.736065 -0.0969521 0.669447 0.0254989)
We guess2: camera_0 ---> camera_t (0.0969521 0.736065 -0.0254989 0.669447)
We guess3: laser_0 ---> camera_t (0.736065 -0.0969521 -0.669447 -0.0254989)
Zhang Ji: (0.520196 -0.529705 -0.0254989 0.669447)
```

TixiaoShan commented 4 years ago

Sorry, I don't understand what you are trying to do here. Publish odometry in the lidar frame?

narutojxl commented 4 years ago

Sorry, I didn't express myself clearly.
As we know, transformSum is the laser frame's pose in the laser_0 frame (the laser frame is X-Y-Z (left-up-front); laser_0 is the initial position and attitude of the laser frame).

> Ji Zhang likely converted odometry from the lidar frame to the camera frame for his later paper (V-LOAM). I think this is the reason behind this.

I ran a test to figure out what rotation transformation Ji Zhang applied to transformSum's rotation. None of the three transformations (red, blue, black) equals Ji Zhang's published odometry TF, as you can see in the results above. What is the camera frame defined in your comment? Is there something I am missing? Thanks a lot for your help :)

[screenshot: the odometry-publishing code with the three guessed transformations highlighted]

TixiaoShan commented 4 years ago

Hello, the frame definition comes from Ji Zhang's LOAM code; I didn't change any of it.

TixiaoShan commented 4 years ago

Another thread mentioned the frame conversion. https://github.com/laboshinl/loam_velodyne/issues/134

heroacool commented 4 years ago

> Ji Zhang likely converted odometry from the lidar frame to the camera frame for his later paper (V-LOAM). I think this is the reason behind this. Feel free to convert them back and publish odometry in the lidar frame for your own needs.

OK, thanks!