@tigerschweiger Yes, the feature-less tunnel is very hard for LiDAR-inertial methods such as LIO-SAM. The VINS-Fusion we tested also failed in such scenarios, but it has the potential to handle the scene by using optical flow in the frontend.
A tightly-coupled LiDAR-visual-inertial approach could be a solution, for example LVI-SAM, but it needs several modifications before it can be tested on UrbanNav. For example, the extrinsics and the depth association are different; our LiDAR's y-axis points forward. https://github.com/TixiaoShan/LVI-SAM/blob/8b789589cc3d392813cc8226f59d656b443c3608/src/visual_odometry/visual_feature/feature_tracker_node.cpp#L268
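To make the frame difference concrete, here is a minimal sketch (not a verified fix) of the 90° yaw correction that maps a y-forward LiDAR frame onto the x-forward convention the pipeline assumes. The field name extrinsicRPY appears in the lidar-side config below, but the choice of field (extrinsicRot vs. extrinsicRPY), the sign convention, and the assumption that the LiDAR x-axis points right must all be checked against the actual UrbanNav calibration:

```yaml
# Sketch only: rotation taking points from a LiDAR frame whose y-axis points
# forward (x right, z up) into an x-forward / y-left / z-up body frame.
# p_body = R * p_lidar, with the LiDAR axes as the columns of R expressed in the body frame.
extrinsicRPY: [ 0,  1,  0,
               -1,  0,  0,
                0,  0,  1]
```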
Do you mind sharing your config and modifications so that we can check whether any parameters can be fine-tuned?
For LVI-SAM I used the following camera parameters:

%YAML:1.0

project_name: "lvi_sam"

imu_topic: "/imu_data"
image_topic: "/zed2/camera/left/image_raw"
point_cloud_topic: "lvi_sam/lidar/deskew/cloud_deskewed"

use_lidar: 1                        # whether to use depth info from lidar or not
lidar_skip: 3                       # skip this amount of scans
align_camera_lidar_estimation: 1    # align camera and lidar estimation for visualization

lidar_to_cam_tx: 0.0856804
lidar_to_cam_ty: -0.200976
lidar_to_cam_tz: -0.1304
lidar_to_cam_rx: 1.58412
lidar_to_cam_ry: 0.0229253
lidar_to_cam_rz: 0.0171762

model_type: PINHOLE
camera_name: camera

image_width: 672
image_height: 376
distortion_parameters:
   k1: -0.0442856
   k2: 0.0133574
   p1: 0
   p2: 0
projection_parameters:
   fx: 264.9425
   fy: 264.79
   cx: 334.3975
   cy: 183.162
fisheye_mask: "/config/fisheye_mask_720x540.jpg"

acc_n: 1.1197412605492375e-02   # accelerometer measurement noise standard deviation.  #0.2  0.04
gyr_n: 1.0270904839480961e-02   # gyroscope measurement noise standard deviation.      #0.05 0.004
acc_w: 1.1751767903346351e-04   # accelerometer bias random walk noise standard deviation.  #0.02
gyr_w: 9.1355383994881894e-05   # gyroscope bias random walk noise standard deviation.      #4.0e-5
g_norm: 9.78785                 # gravity magnitude

estimate_extrinsic: 0   # 0  Have an accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
                        # 2  Don't know anything about extrinsic parameters. You don't need to give R, T. We will try to calibrate it. Do some rotation movement at the beginning.
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 0.99958975976322017672,   0.017170804625958683,    -0.022923255549840760448,
           0.0231465893493554992,   -0.01292276982900426,      0.99964855695461152696,
           0.016868538110887141801, -0.99976905607667321087,  -0.013314913952324877203]
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [-0.0851835233537896739, 0.1257734831824673213, 0.075888549707280005484]

max_cnt: 150            # max feature number in feature tracking
min_dist: 20            # min distance between two features
freq: 20                # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
equalize: 1             # if the image is too dark or bright, turn on equalize to find enough features
fisheye: 0              # if using a fisheye lens, turn it on. A circle mask will be loaded to remove noisy edge points

max_solver_time: 0.035  # max solver iteration time (ms), to guarantee real time
max_num_iterations: 10  # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

estimate_td: 0          # online estimate time offset between camera and imu
td: 0                   # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)

rolling_shutter: 0      # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0   # unit: s. rolling shutter readout time per frame (from data sheet).

loop_closure: 1         # start loop closure
skip_time: 0.0
skip_dist: 0.0
debug_image: 0          # save raw image in loop detector for visualization purposes; you can turn this off by setting 0
match_image_scale: 0.5
vocabulary_file: "/config/brief_k10L6.bin"
brief_pattern_file: "/config/brief_pattern.yml"

And for the lidar parameters:
PROJECT_NAME: "lvi_sam"

lvi_sam:

  pointCloudTopic: "/velodyne_points"   # Point cloud data
  imuTopic: "/imu/data"                 # IMU data

  useImuHeadingInitialization: true     # if using GPS data, set to "true"

  savePCD: false                        # https://github.com/TixiaoShan/LIO-SAM/issues/3
  savePCDDirectory: "/Downloads/LOAM/"  # in your home folder, starts and ends with "/". Warning: the code deletes the "LOAM" folder and then recreates it. See "mapOptimization" for the implementation

  N_SCAN: 32                            # number of lidar channels (i.e., 16, 32, 64, 128)
  Horizon_SCAN: 1800                    # lidar horizontal resolution (Velodyne: 1800, Ouster: 512, 1024, 2048)
  timeField: "time"                     # point timestamp field, Velodyne - "time", Ouster - "t"
  downsampleRate: 1                     # default: 1. Downsample your data if there are too many points. i.e., 16 = 64 / 4, 16 = 16 / 1

  imuAccNoise: 1.1197412605492375e-02
  imuGyrNoise: 1.0270904839480961e-02
  imuAccBiasN: 1.1751767903346351e-04
  imuGyrBiasN: 9.1355383994881894e-05
  imuGravity: 9.78785

  extrinsicTrans: [0.0, 0.0, 0.28]
  extrinsicRot: [ 1,  0,  0,
                  0,  1,  0,
                  0,  0,  1]
  extrinsicRPY: [ 0,  1,  0,
                 -1,  0,  0,
                  0,  0,  1]

  edgeThreshold: 1.0
  surfThreshold: 0.1
  edgeFeatureMinValidNum: 10
  surfFeatureMinValidNum: 100

  odometrySurfLeafSize: 0.4             # default: 0.4
  mappingCornerLeafSize: 0.2            # default: 0.2
  mappingSurfLeafSize: 0.4              # default: 0.4

  z_tollerance: 1000                    # meters
  rotation_tollerance: 1000             # radians

  numberOfCores: 4                      # number of cores for mapping optimization
  mappingProcessInterval: 0.15          # seconds, regulate mapping frequency

  surroundingkeyframeAddingDistThreshold: 1.0   # meters, regulate keyframe adding threshold
  surroundingkeyframeAddingAngleThreshold: 0.2  # radians, regulate keyframe adding threshold
  surroundingKeyframeDensity: 2.0               # meters, downsample surrounding keyframe poses
  surroundingKeyframeSearchRadius: 50.0         # meters, within n meters scan-to-map optimization (when loop closure disabled)

  loopClosureEnableFlag: true
  surroundingKeyframeSize: 25           # submap size (when loop closure enabled)
  historyKeyframeSearchRadius: 20.0     # meters, key frames within n meters of the current pose will be considered for loop closure
  historyKeyframeSearchTimeDiff: 30.0   # seconds, key frames n seconds older will be considered for loop closure
  historyKeyframeSearchNum: 25          # number of history key frames that will be fused into a submap for loop closure
  historyKeyframeFitnessScore: 0.3      # icp threshold, the smaller the better alignment

  globalMapVisualizationSearchRadius: 1000.0    # meters, global map visualization radius
  globalMapVisualizationPoseDensity: 10.0       # meters, global map visualization keyframe density
  globalMapVisualizationLeafSize: 1.0           # meters, global map visualization cloud density
I think you have a typo in imu_topic: "/imu_data" in the camera config; the IMU topic in UrbanNav should be /imu/data.
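If so, a minimal correction on the camera side would be the following, assuming the UrbanNav bag publishes the IMU on /imu/data (the same topic the lidar config above already uses):

```yaml
# Camera-side config: point the visual-inertial frontend at the bag's actual IMU topic.
imu_topic: "/imu/data"
```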
I am confused about whether the above parameters are LIO-SAM's or LVI-SAM's, because they look like LIO-SAM's but the project name says LVI-SAM. I tested the tunnel dataset with LIO-SAM; it drifted and crashed from the very beginning.
Yes, the performance of LiDAR SLAM on the tunnel data is very challenging. For an evaluation of the tunnel, you can refer to Fig. 11 of the paper "A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR": https://www.mdpi.com/2072-4292/14/12/2835
I ran UrbanNav-HK-Tunnel-1 with LIO-SAM, which unsurprisingly crashed because of the lack of structural features. I tried it again with LVI-SAM, which fuses visual odometry, but it also crashed. I wonder if there are any good methods to deal with such scenes?