IPNL-POLYU / UrbanNavDataset

UrbanNav: An Open-sourced Multisensory Dataset for Benchmarking Positioning Algorithms Designed for Urban Areas

Tunnel Dataset crashed #3

Closed tigerschweiger closed 3 years ago

tigerschweiger commented 3 years ago

I ran UrbanNav-HK-Tunnel-1 with LIO-SAM, which unsurprisingly crashed because of the lack of structural features (screenshot: 2021-10-19 17-05-36). I tried again with LVI-SAM, which fuses visual odometry, but it also crashed (screenshot: 2021-10-19 17-06-06). Are there any good methods for dealing with such scenes?

DarrenWong commented 3 years ago

@tigerschweiger Yes, the feature-less tunnel is very hard for LiDAR-inertial methods such as LIO-SAM. The VINS-Fusion we tested also failed in this scenario, but it has the potential to handle the scene by using optical flow in the front end.
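For illustration, here is a minimal sketch of such a KLT optical-flow front end (this is not VINS-Fusion's actual code; the input file name is a placeholder and the feature count/spacing are typical VINS-style defaults):

```cpp
// Minimal KLT optical-flow tracking sketch with OpenCV. A VIO front end of
// this kind tracks corners frame-to-frame using image gradients only, with no
// dependence on LiDAR structure, which is why it can keep producing
// constraints inside a feature-poor tunnel as long as the walls show texture.
#include <opencv2/opencv.hpp>
#include <utility>
#include <vector>

int main() {
    cv::VideoCapture cap("tunnel.mp4");  // placeholder input sequence
    if (!cap.isOpened()) return 1;

    cv::Mat frame, prevGray, currGray;
    std::vector<cv::Point2f> prevPts;

    cap >> frame;
    if (frame.empty()) return 1;
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);
    cv::goodFeaturesToTrack(prevGray, prevPts, 150, 0.01, 20);  // max 150 corners, 20 px apart

    while (cap.read(frame)) {
        cv::cvtColor(frame, currGray, cv::COLOR_BGR2GRAY);

        std::vector<cv::Point2f> currPts;
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err);

        // Keep only successfully tracked points; in a VIO pipeline these
        // matches become the reprojection constraints for the optimizer.
        std::vector<cv::Point2f> kept;
        for (size_t i = 0; i < status.size(); ++i)
            if (status[i]) kept.push_back(currPts[i]);

        // Re-detect when tracks thin out (a tunnel will lose many of them).
        if (kept.size() < 50)
            cv::goodFeaturesToTrack(currGray, kept, 150, 0.01, 20);

        prevGray = currGray.clone();
        prevPts = std::move(kept);
        if (prevPts.empty()) break;  // nothing left to track
    }
    return 0;
}
```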

A tightly coupled LiDAR-visual-inertial method could be a solution, for example LVI-SAM, but it needs several modifications to run on UrbanNav. In particular, the extrinsics and the depth association are different, since our LiDAR's y-axis points forward: https://github.com/TixiaoShan/LVI-SAM/blob/8b789589cc3d392813cc8226f59d656b443c3608/src/visual_odometry/visual_feature/feature_tracker_node.cpp#L268
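As a rough sketch of the kind of change this implies (illustrative names only, not the exact symbols at the linked line): the feature tracker keeps only LiDAR points in front of the camera assuming an x-forward LiDAR, so a y-forward LiDAR needs the forward test changed, or the cloud rotated into an x-forward frame before the depth association runs:

```cpp
// Illustrative only: the cone test below mimics the "keep points in camera
// view" style of filter in LVI-SAM's visual feature tracker; the exact code
// at the linked line may differ.
#include <cmath>
#include <vector>

struct PointXYZ { float x, y, z; };

// LVI-SAM-style check, assuming the LiDAR x-axis points forward.
bool inViewXForward(const PointXYZ& p) {
    return p.x >= 0.0f && std::abs(p.y / p.x) <= 10.0f && std::abs(p.z / p.x) <= 10.0f;
}

// The same cone test rewritten for UrbanNav's y-forward LiDAR.
bool inViewYForward(const PointXYZ& p) {
    return p.y >= 0.0f && std::abs(p.x / p.y) <= 10.0f && std::abs(p.z / p.y) <= 10.0f;
}

int main() {
    std::vector<PointXYZ> cloud = {{0.3f, 12.0f, 0.5f},   // ahead along y
                                   {12.0f, -0.3f, 0.5f}}; // ahead along x
    std::vector<PointXYZ> kept;
    for (const auto& p : cloud)
        if (inViewYForward(p)) kept.push_back(p);
    return kept.size() == 1 ? 0 : 1;  // only the y-forward point survives
}
```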

Do you mind sharing your config and modifications so that we can check whether any parameters can be fine-tuned?

tigerschweiger commented 3 years ago

For LVI-SAM I used the following camera parameters:

```yaml
%YAML:1.0

# Project
project_name: "lvi_sam"

# common parameters
imu_topic: "/imu_data"
image_topic: "/zed2/camera/left/image_raw"
point_cloud_topic: "lvi_sam/lidar/deskew/cloud_deskewed"

# Lidar params
use_lidar: 1                      # whether to use depth info from lidar or not
lidar_skip: 3                     # skip this amount of scans
align_camera_lidar_estimation: 1  # align camera and lidar estimation for visualization

# lidar-to-camera extrinsic
# if it is not right, we should tune the parameters
lidar_to_cam_tx: 0.0856804
lidar_to_cam_ty: -0.200976
lidar_to_cam_tz: -0.1304
lidar_to_cam_rx: 1.58412
lidar_to_cam_ry: 0.0229253
lidar_to_cam_rz: 0.0171762

# camera model
model_type: PINHOLE
camera_name: camera

# Mono camera config
image_width: 672
image_height: 376
mirror_parameters:
   xi: 1.9926618269451453
distortion_parameters:
   k1: -0.0442856
   k2: 0.0133574
   p1: 0
   p2: 0
projection_parameters:
   fx: 264.9425
   fy: 264.79
   cx: 334.3975
   cy: 183.162
fisheye_mask: "/config/fisheye_mask_720x540.jpg"

# imu parameters -- the more accurate parameters you provide, the better the performance
acc_n: 1.1197412605492375e-02   # accelerometer measurement noise standard deviation.  #0.2 0.04
gyr_n: 1.0270904839480961e-02   # gyroscope measurement noise standard deviation.      #0.05 0.004
acc_w: 1.1751767903346351e-04   # accelerometer bias random walk noise standard deviation.  #0.02
gyr_w: 9.1355383994881894e-05   # gyroscope bias random walk noise standard deviation.      #4.0e-5
g_norm: 9.78785                 # gravity magnitude

# Extrinsic parameter between IMU and camera
estimate_extrinsic: 0   # 0  Have an accurate extrinsic parameter. We will trust the following imu^R_cam, imu^T_cam; don't change it.
                        # 1  Have an initial guess about the extrinsic parameter. We will optimize around your initial guess.
                        # 2  Don't know anything about the extrinsic parameter. You don't need to give R, T; we will try to calibrate it. Do some rotational movement at the beginning.

# Rotation from camera frame to imu frame, imu^R_cam
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 0.99958975976322017672,  0.017170804625958683,    -0.022923255549840760448,
           0.0231465893493554992,  -0.01292276982900426,      0.99964855695461152696,
           0.016868538110887141801, -0.99976905607667321087,  -0.013314913952324877203]

# Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [-0.0851835233537896739, 0.1257734831824673213, 0.075888549707280005484]

# feature tracker parameters
max_cnt: 150        # max feature number in feature tracking
min_dist: 20        # min distance between two features
freq: 20            # frequency (Hz) of publishing the tracking result; at least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0    # RANSAC threshold (pixel)
show_track: 1       # publish tracking image as topic
equalize: 1         # if the image is too dark or too light, turn on equalization to find enough features
fisheye: 0          # if using a fisheye lens, turn this on; a circular mask will be loaded to remove noisy edge points

# optimization parameters
max_solver_time: 0.035   # max solver iteration time (ms), to guarantee real time
max_num_iterations: 10   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0  # keyframe selection threshold (pixel)

# unsynchronization parameters
estimate_td: 0  # online estimate of the time offset between camera and IMU
td: 0           # initial value of the time offset, unit: s. read image clock + td = real image clock (IMU clock)

# rolling shutter parameters
rolling_shutter: 0     # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0  # unit: s. rolling shutter readout time per frame (from the data sheet)

# loop closure parameters
loop_closure: 1  # start loop closure
skip_time: 0.0
skip_dist: 0.0
debug_image: 0   # save the raw image in the loop detector for visualization purposes; set to 0 to disable
match_image_scale: 0.5
vocabulary_file: "/config/brief_k10L6.bin"
brief_pattern_file: "/config/brief_pattern.yml"
```

And for the lidar parameters:

```yaml
# project name
PROJECT_NAME: "lvi_sam"

lvi_sam:

  # Topics
  pointCloudTopic: "/velodyne_points"  # Point cloud data
  imuTopic: "/imu/data"                # IMU data

  # Heading
  useImuHeadingInitialization: true    # if using GPS data, set to "true"

  # Export settings
  savePCD: false                        # https://github.com/TixiaoShan/LIO-SAM/issues/3
  savePCDDirectory: "/Downloads/LOAM/"  # in your home folder, starts and ends with "/". Warning: the code deletes the "LOAM" folder and then recreates it. See "mapOptimization" for the implementation

  # Sensor settings
  N_SCAN: 32          # number of lidar channels (i.e., 16, 32, 64, 128)
  Horizon_SCAN: 1800  # lidar horizontal resolution (Velodyne: 1800, Ouster: 512/1024/2048)
  timeField: "time"   # point timestamp field, Velodyne - "time", Ouster - "t"
  downsampleRate: 1   # default: 1. Downsample your data if there are too many points, e.g., 16 = 64 / 4, 16 = 16 / 1

  # IMU settings
  imuAccNoise: 1.1197412605492375e-02
  imuGyrNoise: 1.0270904839480961e-02
  imuAccBiasN: 1.1751767903346351e-04
  imuGyrBiasN: 9.1355383994881894e-05
  imuGravity: 9.78785

  # Extrinsics (lidar -> IMU)
  extrinsicTrans: [0.0, 0.0, 0.28]
  extrinsicRot: [ 1, 0, 0,
                  0, 1, 0,
                  0, 0, 1]
  extrinsicRPY: [ 0, 1, 0,
                 -1, 0, 0,
                  0, 0, 1]

  # LOAM feature thresholds
  edgeThreshold: 1.0
  surfThreshold: 0.1
  edgeFeatureMinValidNum: 10
  surfFeatureMinValidNum: 100

  # voxel filter params
  odometrySurfLeafSize: 0.4    # default: 0.4
  mappingCornerLeafSize: 0.2   # default: 0.2
  mappingSurfLeafSize: 0.4     # default: 0.4

  # robot motion constraint (in case you are using a 2D robot)
  z_tollerance: 1000         # meters
  rotation_tollerance: 1000  # radians

  # CPU params
  numberOfCores: 4              # number of cores for mapping optimization
  mappingProcessInterval: 0.15  # seconds, regulates mapping frequency

  # Surrounding map
  surroundingkeyframeAddingDistThreshold: 1.0   # meters, regulates the keyframe-adding threshold
  surroundingkeyframeAddingAngleThreshold: 0.2  # radians, regulates the keyframe-adding threshold
  surroundingKeyframeDensity: 2.0               # meters, downsamples surrounding keyframe poses
  surroundingKeyframeSearchRadius: 50.0         # meters, scan-to-map optimization within n meters (when loop closure is disabled)

  # Loop closure
  loopClosureEnableFlag: true
  surroundingKeyframeSize: 25          # submap size (when loop closure is enabled)
  historyKeyframeSearchRadius: 20.0    # meters, a keyframe within n meters of the current pose will be considered for loop closure
  historyKeyframeSearchTimeDiff: 30.0  # seconds, a keyframe n seconds older will be considered for loop closure
  historyKeyframeSearchNum: 25         # number of history keyframes fused into a submap for loop closure
  historyKeyframeFitnessScore: 0.3     # ICP threshold; the smaller, the better the alignment

  # Visualization
  globalMapVisualizationSearchRadius: 1000.0  # meters, global map visualization radius
  globalMapVisualizationPoseDensity: 10.0     # meters, global map visualization keyframe density
  globalMapVisualizationLeafSize: 1.0         # meters, global map visualization cloud density
```
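For reference, here is a minimal Eigen sketch of how the extrinsicRot and extrinsicRPY entries above are consumed, assuming LIO-SAM-style IMU conversion (the exact multiplication convention has varied across LIO-SAM versions, so treat this as illustrative): extrinsicRot remaps raw accelerometer/gyroscope vectors into the LiDAR frame, while extrinsicRPY corrects the IMU orientation estimate. The matrix [0, 1, 0, -1, 0, 0, 0, 0, 1] is a 90-degree rotation about z, consistent with the y-forward mounting mentioned earlier in this thread.

```cpp
// Sketch under stated assumptions (LIO-SAM's imuConverter convention):
// extRot remaps accel/gyro vectors, extRPY corrects orientation quaternions.
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Matrix3d extRot, extRPY;
    extRot << 1, 0, 0,
              0, 1, 0,
              0, 0, 1;   // extrinsicRot from the config (identity)
    extRPY << 0, 1, 0,
             -1, 0, 0,
              0, 0, 1;   // extrinsicRPY from the config (90 deg about z)

    // Acceleration measured along the IMU x-axis, expressed in the LiDAR frame.
    Eigen::Vector3d acc = extRot * Eigen::Vector3d(1.0, 0.0, 0.0);

    // IMU orientation estimate with the RPY correction applied.
    Eigen::Quaterniond q = Eigen::Quaterniond::Identity() * Eigen::Quaterniond(extRPY);

    std::cout << "acc in lidar frame: " << acc.transpose() << "\n"
              << "corrected q (x y z w): " << q.coeffs().transpose() << std::endl;
    return 0;
}
```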

DarrenWong commented 3 years ago

I think you have a typo in imu_topic: "/imu_data" in the camera config; the IMU topic in UrbanNav should be /imu/data.

yinhaosen commented 1 year ago

I am confused about whether the above parameters are LIO-SAM's or LVI-SAM's: they look like LIO-SAM's, but the project name says lvi_sam. I tested LIO-SAM on the tunnel dataset, and it really did drift and crash from the very beginning.

DarrenWong commented 1 year ago

Yes, the tunnel data is very challenging for LiDAR SLAM. For an evaluation on the tunnel sequence, you can refer to Fig. 11 of the paper "A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR": https://www.mdpi.com/2072-4292/14/12/2835