MISTLab / Intensity_based_LiDAR_SLAM

Real-Time Simultaneous Localization and Mapping with LiDAR intensity
MIT License
120 stars 9 forks

Some questions about the code #3

Closed gzyabc closed 9 months ago

gzyabc commented 9 months ago

Hello dear author, here I am again.

1. First, is this code applicable to processing intensity images from a Velodyne lidar?

2. Is your test site flat? When actually running the code, the Z-axis drift seems a bit large, and I don't know whether that is because of an uphill section. It seems to me that your ground constraint in Ceres constrains the quaternion, i.e., the rotation part.

3. Is the final output the optimized path from A-LOAM? I see in the code: outPoses_.open("/home/snow/polyWorkSpace/compare_ws/src/A - LOAM resultsOutput/poses_aloam.txt"); So is the purpose of the intensity image to get a better front-end odometry? The relevant code publishes a merged odometry via pubMergedOdometry: the front end produces a hybrid odometry that prefers the intensity-based estimate and falls back to the A-LOAM odometry when the intensity image does not match well. In other words, I don't quite understand the role of the intensity image and how the intensity odometry is combined with A-LOAM. If you have time to help me read through it, I would be very grateful.

SnowCarter commented 9 months ago

Hi, I appreciate your positive response and I'll do my best to help!

  1. Unfortunately, it doesn't support Velodyne at the moment; we have only tested with an Ouster lidar. You could try converting the Velodyne point cloud into an intensity image and then running this algorithm.
  2. In the provided rosbag example, we conducted tests on a flat floor. However, the algorithm can be adapted to uphill environments by either deactivating the ground constraint in Ceres or by employing Patchwork++ to extract the ground plane (even on slopes), offering an alternative to the current solution.
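Converting a Velodyne scan into an intensity image, as suggested in point 1, essentially means spherically projecting each point into a fixed-size image. A minimal sketch, assuming a hypothetical 16-beam sensor with a ±15° vertical field of view and 1800 columns (the function name and parameters are illustrative, not part of this repository):

```cpp
#include <cmath>

// Illustrative sketch: map one lidar point (x, y, z) to a (row, col) pixel
// of an intensity image. The vertical angle selects the beam row; the
// horizontal angle selects the column. The point's intensity value would
// then be written into the image at that pixel.
struct PixelCoord { int row; int col; bool valid; };

PixelCoord projectToIntensityImage(float x, float y, float z,
                                   int numRows = 16, int numCols = 1800,
                                   float fovUpDeg = 15.0f, float fovDownDeg = -15.0f) {
    const float kPi = 3.14159265358979323846f;
    const float range = std::sqrt(x * x + y * y + z * z);
    if (range < 1e-3f) return {0, 0, false};  // reject degenerate points
    const float vertDeg = std::asin(z / range) * 180.0f / kPi;
    const float horiDeg = std::atan2(y, x) * 180.0f / kPi;
    // Row 0 corresponds to the topmost beam; round to the nearest row.
    int row = static_cast<int>((fovUpDeg - vertDeg) / (fovUpDeg - fovDownDeg)
                               * (numRows - 1) + 0.5f);
    int col = static_cast<int>((horiDeg + 180.0f) / 360.0f * numCols) % numCols;
    if (row < 0 || row >= numRows) return {0, 0, false};  // outside vertical FOV
    return {row, col, true};
}
```

For example, a point straight ahead on the sensor's horizon, such as (10, 0, 0), lands in the middle column of the image, while a point directly overhead falls outside the vertical field of view and is rejected.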

3.1. Actually, we used the following code to generate the final poses, but unfortunately it was removed when the code was released:

    if (out_keyposes_.is_open()) {
        for (size_t i = 0; i < cloudKeyPoses6D_->points.size(); i++) {
            ROS_INFO("writing keyposes to file, in process: %f percent",
                     (float)i / cloudKeyPoses6D_->points.size() * 100);
            // Convert the key pose's roll/pitch/yaw to a quaternion and write
            // one TUM-format line: timestamp x y z qx qy qz qw
            const PointXYZIRPYT& p_tmp = cloudKeyPoses6D_->points[i];
            gtsam::Rot3 rot_temp = gtsam::Rot3::RzRyRx(p_tmp.yaw, p_tmp.pitch, p_tmp.roll);
            gtsam::Quaternion q = rot_temp.toQuaternion();
            out_keyposes_ << std::fixed << p_tmp.time << " "
                          << p_tmp.x << " " << p_tmp.y << " " << p_tmp.z << " "
                          << q.x() << " " << q.y() << " " << q.z() << " " << q.w() << "\n";
        }
    }

Regarding the following path: /home/snow/polyWorkSpace/compare_ws/src/A - LOAM resultsOutput/poses_aloam.txt, this is the output pose generated by A-LOAM for the purpose of comparison. However, this part of the code is not used in the current algorithm. Please check the launch file for more details.

3.2 The pubMergedOdometry function keeps the odometry working in environments with minimal texture. In our provided rosbag example, the A-LOAM odometry is predominantly inactive, contributing perhaps only 1 or 2 frames. Nevertheless, this fallback proves highly beneficial in locations lacking texture. A message such as "intensity_odom_diff: xxx" is printed on the screen to signal that the intensity odometry was skipped. For further information, please consult this link.
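The fallback idea described above can be sketched as follows (the names, the 2D pose, and the threshold value are illustrative assumptions, not the repository's actual API): the merged odometry prefers the intensity-based estimate and switches to A-LOAM's when the two diverge beyond a threshold:

```cpp
#include <cmath>

// Illustrative merged-odometry sketch: if the intensity-based pose and the
// A-LOAM pose for the same frame disagree by more than maxDiff meters, assume
// the intensity matching failed (e.g., a texture-poor area) and fall back to
// A-LOAM. The real code reports this via an "intensity_odom_diff: xxx" message.
struct Pose2D { double x, y, yaw; };

Pose2D mergeOdometry(const Pose2D& intensityOdom, const Pose2D& aloamOdom,
                     double maxDiff = 0.5, bool* usedFallback = nullptr) {
    const double dx = intensityOdom.x - aloamOdom.x;
    const double dy = intensityOdom.y - aloamOdom.y;
    const double intensityOdomDiff = std::sqrt(dx * dx + dy * dy);
    const bool fallback = intensityOdomDiff > maxDiff;
    if (usedFallback) *usedFallback = fallback;
    return fallback ? aloamOdom : intensityOdom;
}
```

With a small disagreement the intensity pose is published unchanged; only a large disagreement triggers the A-LOAM fallback, which is why A-LOAM can stay almost entirely inactive on texture-rich data.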

gzyabc commented 9 months ago

Thank you for your patience in answering my questions. So your code performs SLAM in weakly textured environments using generated intensity images, and the back end is optimized with BA, ground constraints, and loop-closure detection. Do I understand this correctly? My idea is to use your intensity-image BA as one constraint, coupled with the traditional A-LOAM corner-point and surface-point constraints, working together in the back end. I wonder whether this would make the final localization accuracy higher. Of course, with weak textures the result may still depend on the quality of the intensity image.

SnowCarter commented 9 months ago

In fact, quite the opposite is true; our algorithm performs better in texture-rich areas. It integrates not only BA, ground constraints, and loop constraints, but also plane constraints and edge constraints (the edge constraint is optional). Your proposed idea is excellent, and in fact we have already integrated it into the algorithm we are open-sourcing. This integration allows the algorithm to perform stably not only in texture-rich environments but also in texture-poor ones. You only need to configure the relevant parameters to enable the edge constraints.

gzyabc commented 9 months ago

OK, then I will read the algorithm again carefully. Thank you for your answer; it helps me a lot. I personally feel there is still room for non-deep-learning applications of intensity values, but there are not many related open algorithms. Do you have any good recommendations? Intensity-SLAM is one open algorithm related to intensity values that we saw before, but unfortunately, as you said in your paper, its open-source code cannot be run successfully, and I feel its real-time performance needs improvement.