rpng / open_vins

An open source platform for visual-inertial navigation research.
https://docs.openvins.com
GNU General Public License v3.0

Does anyone run Hilti SLAM challenge dataset #268

Closed zqlee-ronghui closed 1 year ago

zqlee-ronghui commented 2 years ago

Thanks for this wonderful project! I have run OpenVINS on the Hilti SLAM Challenge 2021 dataset in mono mode, but the pose diverges to infinity almost immediately. Here is the website: Hilti SLAM Challenge Dataset

Following are the configs I used:

estimator_config.yaml
```
%YAML:1.0 # need to specify the file type at the top!

verbosity: "INFO" # ALL, DEBUG, INFO, WARNING, ERROR, SILENT

use_fej: true # if first-estimate Jacobians should be used (enable for good consistency)
use_imuavg: true # if using discrete integration, if we should average sequential IMU measurements to "smooth" it
use_rk4int: true # if rk4 integration should be used (overrides imu averaging)
use_stereo: false # if we have more than 1 camera, if we should try to track stereo constraints between pairs
max_cameras: 1 # how many cameras we have 1 = mono, 2 = stereo, >2 = binocular (all mono tracking)

calib_cam_extrinsics: true # if the transform between camera and IMU should be optimized R_ItoC, p_CinI
calib_cam_intrinsics: true # if camera intrinsics should be optimized (focal, center, distortion)
calib_cam_timeoffset: true # if timeoffset between camera and IMU should be optimized

max_clones: 20 # how many clones in the sliding window
max_slam: 100 # number of features in our state vector
max_slam_in_update: 25 # update can be split into sequential updates of batches, how many in a batch
max_msckf_in_update: 40 # how many MSCKF features to use in the update
dt_slam_delay: 1 # delay before initializing (helps with stability from bad initialization...)

gravity_mag: 9.81 # magnitude of gravity in this location

feat_rep_msckf: "ANCHORED_MSCKF_INVERSE_DEPTH"
feat_rep_slam: "ANCHORED_MSCKF_INVERSE_DEPTH"
feat_rep_aruco: "ANCHORED_MSCKF_INVERSE_DEPTH"

# zero velocity update parameters we can use
# we support either IMU-based or disparity detection.
try_zupt: false
zupt_chi2_multipler: 0 # set to 0 for only disp-based
zupt_max_velocity: 0.1
zupt_noise_multiplier: 50
zupt_max_disparity: 1.5 # set to 0 for only imu-based
zupt_only_at_beginning: true

# ==================================================================
# ==================================================================

init_window_time: 0.5 # how many seconds to collect initialization information
init_imu_thresh: 1.5 # threshold for variance of the accelerometer to detect a "jerk" in motion
init_max_disparity: 10.0 # max disparity to consider the platform stationary (dependent on resolution)
init_max_features: 75 # how many features to track during initialization (saves on computation)

init_dyn_use: false # if dynamic initialization should be used
init_dyn_mle_opt_calib: false # if we should optimize calibration during initialization (not recommended)
init_dyn_mle_max_iter: 50 # how many iterations the MLE refinement should use (zero to skip the MLE)
init_dyn_mle_max_time: 0.05 # how many seconds the MLE should be completed in
init_dyn_mle_max_threads: 6 # how many threads the MLE should use
init_dyn_num_pose: 6 # number of poses to use within our window time (evenly spaced)
init_dyn_min_deg: 10.0 # orientation change needed to try to init

init_dyn_inflation_ori: 10 # what to inflate the recovered q_GtoI covariance by
init_dyn_inflation_vel: 100 # what to inflate the recovered v_IinG covariance by
init_dyn_inflation_bg: 10 # what to inflate the recovered bias_g covariance by
init_dyn_inflation_ba: 100 # what to inflate the recovered bias_a covariance by
init_dyn_min_rec_cond: 1e-12 # reciprocal condition number thresh for info inversion

init_dyn_bias_g: [0.0, 0.0, 0.0] # initial gyroscope bias guess
init_dyn_bias_a: [0.0, 0.0, 0.0] # initial accelerometer bias guess

# ==================================================================
# ==================================================================

record_timing_information: false # if we want to record timing information of the method
record_timing_filepath: "/tmp/traj_timing.txt" # https://docs.openvins.com/eval-timing.html#eval-ov-timing-flame

# if we want to save the simulation state and its diagonal covariance
# use this with rosrun ov_eval error_simulation
save_total_state: false
filepath_est: "/tmp/ov_estimate.txt"
filepath_std: "/tmp/ov_estimate_std.txt"
filepath_gt: "/tmp/ov_groundtruth.txt"

# ==================================================================
# ==================================================================

# our front-end feature tracking parameters
# we have a KLT and descriptor based (KLT is better implemented...)
use_klt: true # if true we will use KLT, otherwise use a ORB descriptor + robust matching
num_pts: 200 # number of points (per camera) we will extract and try to track
fast_threshold: 20 # threshold for fast extraction (warning: lower threshs can be expensive)
grid_x: 5 # extraction sub-grid count for horizontal direction (uniform tracking)
grid_y: 5 # extraction sub-grid count for vertical direction (uniform tracking)
min_px_dist: 10 # distance between features (features near each other provide less information)
knn_ratio: 0.70 # descriptor knn threshold for the top two descriptor matches
track_frequency: 21.0 # frequency we will perform feature tracking at (in frames per second / hertz)
downsample_cameras: false # will downsample image in half if true
num_opencv_threads: 4 # -1: auto, 0-1: serial, >1: number of threads
histogram_method: "HISTOGRAM" # NONE, HISTOGRAM, CLAHE

# aruco tag tracker for the system
# DICT_6X6_1000 from https://chev.me/arucogen/
use_aruco: false
num_aruco: 1024
downsize_aruco: true

# ==================================================================
# ==================================================================

# camera noises and chi-squared threshold multipliers
up_msckf_sigma_px: 1
up_msckf_chi2_multipler: 1
up_slam_sigma_px: 1
up_slam_chi2_multipler: 1
up_aruco_sigma_px: 1
up_aruco_chi2_multipler: 1

# masks for our images
use_mask: false

# imu and camera spacial-temporal
# imu config should also have the correct noise values
relative_config_imu: "kalibr_imu_chain.yaml"
relative_config_imucam: "kalibr_imucam_chain.yaml"
```
kalibr_imucam_chain.yaml
```
%YAML:1.0

cam0:
  T_imu_cam: # rotation from camera to IMU R_CtoI, position of camera in IMU p_CinI
    - [-0.00282148, -0.00307476, 0.999991, 0.0506783]
    - [-0.999995, -0.00103727, -0.00282468, 0.0458784]
    - [0.00104595, -0.999995, -0.00307182, -0.00594365]
    - [0.0, 0.0, 0.0, 1.0]
  cam_overlaps: [0]
  camera_model: equidistant
  distortion_coeffs: [-0.0395909069, -0.0041727433, 0.0030288415, -0.0012784168]
  distortion_model: radtan
  intrinsics: [701.6682111281, 701.55526909, 703.6097253263, 530.4665279367] # fu, fv, cu, cv
  resolution: [1440, 1080]
  rostopic: /alphasense/cam0/image_raw
```
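One detail worth double-checking in the chain above (an editorial observation, not something confirmed in this thread): `camera_model: equidistant` is paired with `distortion_model: radtan`, which looks inconsistent. In the usual Kalibr output for a fisheye lens such as the Alphasense cameras, the camera model is `pinhole` and the fisheye projection goes in the distortion model, along the lines of:

```yaml
# Hedged sketch of the typical Kalibr convention for a fisheye camera;
# verify against your own Kalibr calibration output before using.
cam0:
  camera_model: pinhole
  distortion_model: equidistant # fisheye model, matching the 4 coefficients
```

A wrong distortion model on a wide-angle lens is one plausible source of rapid divergence, so it may be worth verifying which model the original calibration was produced with.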
kalibr_imu_chain.yaml
```
%YAML:1.0

imu0:
  T_i_b:
    - [1.0, 0.0, 0.0, 0.0]
    - [0.0, 1.0, 0.0, 0.0]
    - [0.0, 0.0, 1.0, 0.0]
    - [0.0, 0.0, 0.0, 1.0]
  accelerometer_noise_density: 2.0000e-3 # [ m / s^2 / sqrt(Hz) ] ( accel "white noise" )
  accelerometer_random_walk: 3.0000e-3 # [ m / s^3 / sqrt(Hz) ] ( accel bias diffusion )
  gyroscope_noise_density: 1.6968e-04 # [ rad / s / sqrt(Hz) ] ( gyro "white noise" )
  gyroscope_random_walk: 1.9393e-05 # [ rad / s^2 / sqrt(Hz) ] ( gyro bias diffusion )
  model: calibrated
  rostopic: /alphasense/imu
  time_offset: 0.0
  update_rate: 200.0
```
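For context on the values above: the Kalibr-style `*_noise_density` entries are continuous-time densities, and the discrete-time standard deviation at a given sample rate follows the standard relation sigma_d = sigma_c * sqrt(rate). A small sanity-check sketch (the helper name is mine, not from OpenVINS):

```python
import math

def discretize_white_noise(noise_density, update_rate_hz):
    """Convert a continuous-time noise density (units/sqrt(Hz)) to the
    discrete-time standard deviation at the given sample rate:
    sigma_d = sigma_c * sqrt(rate)."""
    return noise_density * math.sqrt(update_rate_hz)

# accelerometer_noise_density = 2.0e-3 m/s^2/sqrt(Hz) at update_rate = 200 Hz
sigma_a = discretize_white_noise(2.0e-3, 200.0)
print(f"{sigma_a:.5f}")  # 0.02828 m/s^2 per sample
```

This is only useful as a plausibility check that the entered densities are in continuous-time units (a common mix-up when copying datasheet numbers).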
 
  
goldbattle commented 2 years ago

The images in the 2021 dataset are quite large, so I recommend downsizing them with `downsample_cameras: true`. I have not tried this dataset directly myself. You might also want to try the 2022 version of the dataset, which has more cameras to choose from.
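For reference, halving the image also halves the effective pinhole intrinsics (as far as I recall OpenVINS rescales the calibration internally when `downsample_cameras` is enabled). The hypothetical helper below just illustrates the arithmetic, using the intrinsics from the kalibr_imucam_chain.yaml above:

```python
def downsample_intrinsics(intrinsics, resolution, factor=0.5):
    """Scale pinhole intrinsics [fu, fv, cu, cv] and the image
    resolution [w, h] by the same downsampling factor."""
    fu, fv, cu, cv = intrinsics
    w, h = resolution
    scaled = [fu * factor, fv * factor, cu * factor, cv * factor]
    return scaled, [int(w * factor), int(h * factor)]

# Values from the posted cam0 calibration (1440x1080 image).
intr, res = downsample_intrinsics(
    [701.6682111281, 701.55526909, 703.6097253263, 530.4665279367],
    [1440, 1080])
print(res)  # [720, 540]
```

The distortion coefficients are dimensionless in normalized coordinates, so they are unaffected by the resize.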