HKUST-Aerial-Robotics / GVINS

Tightly coupled GNSS-Visual-Inertial system for locally smooth and globally consistent state estimation in complex environment.
GNU General Public License v3.0

Not enough features or parallax; Move device around [VINS-Fusion works but not GVINS] #48

Open marleyshan21 opened 1 year ago

marleyshan21 commented 1 year ago

Hey all! I have been trying out GVINS on my handheld devices (ZED2i stereo - rolling shutter, OAK-D - global shutter).

I keep getting the following error: [ INFO] [1670628907.037916833]: Not enough features or parallax; Move device around

I have tried both devices and get this error when running in pure VI mode. I initially thought it could be an issue with the camera+IMU calibration, but I did the calibration with the Kalibr package and, using those same parameters, was able to run ORB-SLAM3 and VINS-Fusion (stereo + IMU) on both cameras. So I am unsure what the issue could be.
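
For what it's worth, here is a rough standalone check (my own script, not part of GVINS) that I use to see whether the image stream itself has enough frame-to-frame motion. The "frames/*.png" folder and the 10 px threshold are just placeholders for my setup, and this is only a coarse proxy for the parallax test done inside the estimator:

```python
# Rough standalone check (my own script, not part of GVINS): estimate the mean
# pixel displacement between consecutive frames with OpenCV optical flow.
# "frames/*.png" and MIN_PARALLAX_PX are placeholders; this is only a coarse
# proxy for the parallax check performed inside the estimator.
import glob

import cv2
import numpy as np

MIN_PARALLAX_PX = 10.0  # same order of magnitude as keyframe_parallax in the config


def detect(img):
    # Shi-Tomasi corners, roughly matching max_cnt / min_dist in the config
    return cv2.goodFeaturesToTrack(img, maxCorners=150, qualityLevel=0.01, minDistance=30)


frames = sorted(glob.glob("frames/*.png"))
prev = cv2.imread(frames[0], cv2.IMREAD_GRAYSCALE)
prev_pts = detect(prev)

for path in frames[1:]:
    cur = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if prev_pts is not None and len(prev_pts) > 0:
        cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, cur, prev_pts, None)
        good = status.ravel() == 1
        if good.any():
            disp = np.linalg.norm(cur_pts[good] - prev_pts[good], axis=2).mean()
            tag = "OK" if disp >= MIN_PARALLAX_PX else "low"
            print(f"{path}: {int(good.sum())} tracked, mean parallax {disp:.1f} px ({tag})")
    prev, prev_pts = cur, detect(cur)
```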

The config files have been set up properly. I am running GVINS with just one of the cameras on both the ZED and the OAK-D (since GVINS runs in mono mode).

The only difference I see between the config files is how the extrinsic parameters are declared. In VINS-Fusion, for my ZED camera:

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0   # 0  Have an accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
                        # 1  Have an initial guess about extrinsic parameters. We will optimize around your initial guess.

body_T_cam0: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data:  [ 0.0051989,  -0.00475195,  0.99997519,  0.01782353,
           -0.99998009, -0.00360067,  0.00518181,  0.02405353,
            0.00357595, -0.99998223, -0.00477058,  0.00652621,
            0.,          0.,          0.,          1.        ]

body_T_cam1: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 0.01907386, -0.00099337,  0.99981758,  0.01775441,
          -0.9998151,  -0.00245767,  0.01907137, -0.097972,
           0.00243827, -0.99999649, -0.00104007,  0.00694309,
           0.,          0.,          0.,          1.        ]

#Multiple thread support
multiple_thread: 1

#feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 30            # min distance between two features
freq: 0                 # frequency (Hz) at which the tracking result is published. At least 10 Hz for good estimation. If set to 0, the frequency is the same as the raw image
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
flow_back: 1            # perform forward and backward optical flow to improve feature tracking accuracy

#optimization parameters
max_solver_time: 0.04   # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters       The more accurate parameters you provide, the better performance
acc_n: 0.0015999999595806003       # accelerometer measurement noise standard deviation
gyr_n: 0.007000000216066837        # gyroscope measurement noise standard deviation
acc_w: 0.0002508999896235764       # accelerometer bias random walk noise standard deviation
gyr_w: 0.0019474000437185168       # gyroscope bias random walk noise standard deviation
g_norm: 9.81007     # gravity magnitude

#unsynchronization parameters
estimate_td: 0                      # online estimate time offset between camera and imu
td: 0.0         

while in GVINS it is:

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 1   # 0  Have an accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
                        # 1  Have an initial guess about extrinsic parameters. We will optimize around your initial guess.
                        # 2  Don't know anything about extrinsic parameters. You don't need to give R,T. We will try to calibrate it. Do some rotation movement at beginning.                        
#If you choose 0 or 1, you should write down the following matrix.
#Rotation from camera frame to imu frame, imu^R_cam
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data:   [0.0051989,  -0.00475195,  0.99997519,
         -0.99998009, -0.00360067,  0.00518181, 
         0.00357595, -0.99998223, -0.00477058]

#Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [0.01782353, 0.02405353, 0.00652621]

#feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 30            # min distance between two features
freq: 10                # frequency (Hz) at which the tracking result is published. At least 10 Hz for good estimation. If set to 0, the frequency is the same as the raw image
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
equalize: 1             # if the image is too dark or too bright, turn on equalize to find enough features
fisheye: 0              # if using a fisheye lens, turn this on. A circle mask will be loaded to remove noisy edge points

#optimization parameters
max_solver_time: 0.04   # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters       The more accurate parameters you provide, the better performance
acc_n: 0.0015999999595806003       # accelerometer measurement noise standard deviation
gyr_n: 0.007000000216066837        # gyroscope measurement noise standard deviation
acc_w: 0.0002508999896235764       # accelerometer bias random walk noise standard deviation
gyr_w: 0.0019474000437185168       # gyroscope bias random walk noise standard deviation
g_norm: 9.81007     # gravity magnitude

#unsynchronization parameters
estimate_td: 0                      # online estimate time offset between camera and imu
td: 0.0                             # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)
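
The extrinsicRotation / extrinsicTranslation values above are just the top-left 3x3 block and the last column of body_T_cam0 from the VINS-Fusion config, which is how I filled them in, so the numbers should be consistent between the two. A quick sanity check along these lines (a minimal numpy sketch of my own, values copied from the configs above):

```python
# Sanity check (my own script, not part of either project): split the 4x4
# body_T_cam0 from the VINS-Fusion config into the 3x3 rotation and 3x1
# translation that GVINS expects, and confirm the rotation block is valid.
import numpy as np

body_T_cam0 = np.array([
    [ 0.0051989,  -0.00475195,  0.99997519,  0.01782353],
    [-0.99998009, -0.00360067,  0.00518181,  0.02405353],
    [ 0.00357595, -0.99998223, -0.00477058,  0.00652621],
    [ 0.0,         0.0,         0.0,         1.0       ],
])

R = body_T_cam0[:3, :3]   # imu^R_cam  -> extrinsicRotation in GVINS
t = body_T_cam0[:3, 3]    # imu^T_cam  -> extrinsicTranslation in GVINS

# A proper rotation matrix should satisfy R^T R = I and det(R) = +1.
print("orthonormal:", np.allclose(R.T @ R, np.eye(3), atol=1e-4))
print("det(R):", np.linalg.det(R))
print("extrinsicRotation:\n", R)
print("extrinsicTranslation:", t)
```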

Could you help me figure out what could be going wrong? In GVINS I am using the distortion parameters of camera 0.

Any ideas? @shaojie @dvorak0 @WayneTimer @groundmelon @hlx1996

Thanks!!