Hi @Gatsby23 -- the overhead for covariance calculation mainly depends on the size of the point cloud (in your case, `current_features`) and the number of correspondences used to calculate each covariance (`k_correspondences_`). Try playing around with those values in your project!
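A minimal sketch of how those two knobs are typically exposed, assuming a PCL-style GICP interface (`setCorrespondenceRandomness`) and a voxel-grid downsample to control cloud size; the exact member names in direct-lidar-odometry's nano_gicp may differ:

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/registration/gicp.h>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr raw(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);

  // Smaller input cloud -> fewer per-point covariances to compute.
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(raw);
  voxel.setLeafSize(0.25f, 0.25f, 0.25f);  // hypothetical leaf size
  voxel.filter(*filtered);

  // Fewer neighbors per covariance -> cheaper k-NN searches.
  pcl::GeneralizedIterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> gicp;
  gicp.setCorrespondenceRandomness(20);  // analogous to k_correspondences_
  gicp.setInputSource(filtered);
  return 0;
}
```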
Dear Professor, thank you for your wonderful work on LiDAR odometry. One of the main contributions of this paper is its focus on improving efficiency. From my point of view, the time cost in LiDAR odometry primarily comes from two parts: building the KD-tree and performing searches against the pre-built KD-tree. The proposed recycling data structure efficiently addresses the former, while the latter can be handled with OpenMP parallel acceleration. However, when I split the covariance calculation out into a standalone project, its runtime (5-25 ms) was much slower than the same procedure inside direct-lidar-odometry (2-8 ms). I don't know what went wrong. Could you please help me check whether there is a mistake in my code or a trick I might have missed? The code is listed below; thank you very much.
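For context, a minimal, hypothetical sketch of the standalone covariance routine described above -- k-NN lookups against a pre-built KD-tree, parallelized with OpenMP -- could look like the following; the identifier `k_correspondences_` mirrors the wording of this issue, and the rest is an assumption rather than the actual attached code:

```cpp
#include <vector>
#include <Eigen/Dense>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/kdtree/kdtree_flann.h>

// Estimate one covariance per point from its k nearest neighbors.
std::vector<Eigen::Matrix3d> computeCovariances(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud,
    int k_correspondences_ = 20) {
  pcl::KdTreeFLANN<pcl::PointXYZ> kdtree;
  kdtree.setInputCloud(cloud);  // build the KD-tree once, then only query it

  std::vector<Eigen::Matrix3d> covariances(cloud->size());

  #pragma omp parallel for schedule(dynamic)
  for (int i = 0; i < static_cast<int>(cloud->size()); ++i) {
    std::vector<int> indices(k_correspondences_);
    std::vector<float> sq_dists(k_correspondences_);
    kdtree.nearestKSearch(cloud->points[i], k_correspondences_, indices, sq_dists);

    // Gather the neighbor coordinates, then form the sample covariance.
    Eigen::Matrix<double, 3, Eigen::Dynamic> neighbors(3, k_correspondences_);
    for (int j = 0; j < k_correspondences_; ++j) {
      const auto& p = cloud->points[indices[j]];
      neighbors.col(j) = Eigen::Vector3d(p.x, p.y, p.z);
    }
    Eigen::Vector3d mean = neighbors.rowwise().mean();
    Eigen::Matrix<double, 3, Eigen::Dynamic> centered = neighbors.colwise() - mean;
    covariances[i] = (centered * centered.transpose()) / k_correspondences_;
  }
  return covariances;
}
```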