rpng / open_vins

An open source platform for visual-inertial navigation research.
https://docs.openvins.com
GNU General Public License v3.0
2.18k stars · 643 forks

ov_msckf run off at start. Possible IMU issue #160

Closed · rakshith95 closed this issue 3 years ago

rakshith95 commented 3 years ago

Hello, I'm trying to run OpenVINS with topics input from a Gazebo simulation environment, using the ros_subscribe_msckf node. My issue is similar to #157, in that the odometry seems to completely go bonkers soon after the beginning. In that issue, it's mentioned that "You will likely need to tune the init_window_time and init_imu_thresh". I have tuned my init_imu_thresh parameter such that the filter initializes right as the drone takes off. However, I am not sure what the init_window_time parameter actually does and how it must be tuned.

```
gravity (m/s^2):              [0.0, 0, 9.81]
resolution (w, h):            [752, 480]
intrinsics (mu, mv, u0, v0):  [227.4010064226358, 227.35879407313246, 375.5302935901654, 239.4881944649193]
distortion (k2..k5):          [0.019265981371039506, 0.0011428473998276235, -0.0003811659324868097, 6.340084698783884e-05]
camera-to-IMU transform:      [ 1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1 ]
```

I have posted the launch file I am using. Could you please let me know if I should make any other changes?

Thank you

goldbattle commented 3 years ago

Are you able to post a bag file (via Google Drive or another service)? The window is used to calculate the direction of gravity, and 0.75 to 1 second is what I normally use. If you change this you will need to re-tune, but in general you shouldn't need to tune per-dataset.

I also suggest you try using the default ETH IMU noise values before trying your own. Additionally, calibrate with Kalibr to ensure success (I see that your camera-IMU extrinsic is just the identity, which can never be the case outside of simulation).
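For intuition, the gravity direction over the initialization window is essentially the normalized mean of the accelerometer readings collected while the platform is still. A simplified sketch of the idea (not the actual OpenVINS initializer; the function name is illustrative):

```python
import numpy as np

def estimate_gravity_direction(accel_samples):
    """Average stationary accelerometer readings and normalize.

    accel_samples: (N, 3) array of IMU accelerometer measurements
    collected over the init window while the platform is at rest.
    Returns the unit vector opposing gravity in the IMU frame.
    """
    a_mean = np.mean(accel_samples, axis=0)
    return a_mean / np.linalg.norm(a_mean)

# Simulated stationary samples: reaction to gravity along +z, small noise.
rng = np.random.default_rng(0)
samples = np.array([0.0, 0.0, 9.81]) + 0.05 * rng.standard_normal((200, 3))
print(estimate_gravity_direction(samples))  # close to [0, 0, 1]
```

A longer init_window_time averages away more accelerometer noise, but it requires the platform to sit still for longer before takeoff.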


rakshith95 commented 3 years ago

Hello, thank you for the suggestions @goldbattle. Here is the link for a bag file I recorded just now, where the drone takes off from a starting position and moves a bit.

I tried values of init_window_time from 0.75 to 1, but none worked well, which is why I thought I'd try different values. I did try the default ETH IMU noise values, but the same thing happened. I am basing these values on a config file I was using for VINS-Mono, which worked well. I will post that as well for your reference:

```yaml
%YAML:1.0

#common parameters
imu_topic: "/uav1/vio/imu"
image_topic: "/uav1/vio/camera/image_raw"
output_path: "/home/honzabednar/git/vins-mono/output/"

model_type: KANNALA_BRANDT
camera_name: camera
image_width: 752
image_height: 480
projection_parameters:
   k2: 0.019265981371039506
   k3: 0.0011428473998276235
   k4: -0.0003811659324868097
   k5: 6.340084698783884e-05
   mu: 227.4010064226358
   mv: 227.35879407313246
   u0: 375.5302935901654
   v0: 239.4881944649193

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0   # 0  Have accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam; don't change them.
                        # 1  Have an initial guess about the extrinsic parameters. We will optimize around your initial guess.
                        # 2  Don't know anything about the extrinsic parameters. You don't need to give R,T. We will try to calibrate them. Do some rotation movement at the beginning.
#If you choose 0 or 1, you should write down the following matrix.
#Rotation from camera frame to imu frame, imu^R_cam

# REALSENSE FRONT
# rosrun tf tf_echo uav1/fcu uav1/rs_d435/infra1_optical
#extrinsicRotation: !!opencv-matrix
#   rows: 3
#   cols: 3
#   dt: d
#   data: [0.0, 0.0, 1.0,
#         -1.0, 0.0, 0.0,
#          0.0,-1.0, 0.0]
##Translation from camera frame to imu frame, imu^T_cam
#extrinsicTranslation: !!opencv-matrix
#   rows: 3
#   cols: 1
#   dt: d
#   data: [0.155, 0.018,-0.089]

# REALSENSE DOWN
#extrinsicRotation: !!opencv-matrix
#   rows: 3
#   cols: 3
#   dt: d
#   data: [0.0,-1.0, 0.0,
#         -1.0, 0.0, 0.0,
#          0.0, 0.0,-1.0]
##Translation from camera frame to imu frame, imu^T_cam
#extrinsicTranslation: !!opencv-matrix
#   rows: 3
#   cols: 1
#   dt: d
#   data: [0.05, 0.0, -0.093]

# BLUEFOX FISHEYE
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [1.0, 0.0, 0.0,
          0.0, 1.0, 0.0,
          0.0, 0.0, 1.0]
#Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [0.0, 0.0, 0.0]

#Multiple thread support
multiple_thread: 1

#feature tracker parameters
max_cnt: 100            # max feature number in feature tracking
min_dist: 30            # min distance between two features
freq: 10                # frequency (Hz) at which to publish tracking results. At least 10 Hz for good estimation. If set to 0, the frequency will match the raw image
F_threshold: 1.0        # RANSAC threshold (pixel)
show_track: 1           # publish tracking image as a topic
flow_back: 0            # perform forward and backward optical flow to improve feature tracking accuracy
equalize: 0             # if the image is too dark or too bright, turn on equalization to find enough features
fisheye: 1              # if using a fisheye lens, turn this on. A fisheye_mask_name has to be specified and placed into the /config folder
fisheye_mask_name: "fisheye_mask_752x480.jpg"              # name of the mask file

#optimization parameters
max_solver_time: 0.04    # max solver iteration time (s), to guarantee real time
max_num_iterations: 10   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0  # keyframe selection threshold (pixel)

#imu parameters       The more accurate the parameters you provide, the better the performance
acc_n: 0.1           # accelerometer measurement noise standard deviation.  #0.2   0.04
gyr_n: 0.3           # gyroscope measurement noise standard deviation.      #0.05  0.004
acc_w: 0.001         # accelerometer bias random walk noise standard deviation.  #0.02
gyr_w: 0.001         # gyroscope bias random walk noise standard deviation.      #4.0e-5
g_norm: 9.81007      # gravity magnitude

#loop closure parameters
loop_closure: 0                    # start loop closure
load_previous_pose_graph: 0        # load and reuse a previous pose graph; loaded from 'pose_graph_save_path'
fast_relocalization: 1             # useful in real-time and large projects
pose_graph_save_path: "/home/honzabednar/git/vins-mono/output/pose_graph/" # save and load path

#unsynchronization parameters
estimate_td: 1                      # online estimation of the time offset between camera and IMU
td: 0.0                             # initial value of the time offset. unit: s. read image clock + td = real image clock (IMU clock)

#rolling shutter parameters
rolling_shutter: 0                  # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0               # unit: s. rolling shutter readout time per frame (from data sheet)

#visualization parameters
save_image: 0                   # save images in the pose graph for visualization purposes; disable by setting 0
visualize_imu_forward: 1        # output IMU forward propagation to achieve low-latency, high-frequency results
visualize_camera_size: 0.4      # size of the camera marker in RViz
```

Since I am testing it completely in the simulation environment for now, I thought the identity camera-IMU extrinsic should be fine.

P.S. You can see there is a fisheye mask image being applied in VINS-Mono. There was a recent issue raised where you mention that this is not supported in OpenVINS yet, so I changed the ros_subscribe_msckf.cpp code to apply the mask image, and that improves things a little, but it still ends up "running away".
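For reference, applying a binary fisheye mask before feature detection just zeroes out pixels outside the circular image region so the tracker ignores the black border. A minimal NumPy sketch of the idea (the real mask file and the integration point in ros_subscribe_msckf.cpp are specific to this setup):

```python
import numpy as np

def apply_fisheye_mask(image, mask):
    """Zero out pixels where the mask is zero, so the feature
    tracker never detects features on the border outside the
    fisheye circle."""
    return np.where(mask > 0, image, 0).astype(image.dtype)

# Toy 8x8 grayscale image with a centered circular mask.
h, w = 8, 8
yy, xx = np.mgrid[0:h, 0:w]
mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= 9).astype(np.uint8) * 255
image = np.full((h, w), 128, dtype=np.uint8)
masked = apply_fisheye_mask(image, mask)
print(masked[0, 0], masked[4, 4])  # corner is zeroed, center is kept
```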

rakshith95 commented 3 years ago

Hi @goldbattle. Do you have any suggestions after seeing the bag file?

goldbattle commented 3 years ago

Sorry I have not had time, hopefully I can take a look this weekend / next week when I find some time. Thanks again for posting it.


rakshith95 commented 3 years ago

Sure, thank you!

rakshith95 commented 3 years ago

Hello @goldbattle, sorry to bother you, but did you have a chance to check this?

goldbattle commented 3 years ago

It looks like the main issue is that the platform stops partway through the dataset, which is a common problem for all VIO systems. You can experiment with this launch file: https://gist.github.com/goldbattle/165055dab7e2c79c87a9e4392f0c3744

rakshith95 commented 3 years ago

Hello @goldbattle, thanks for getting back on this. I will try it with the launch file provided. However, I'm not sure what you mean by "the main issue is that the platform stops partway through the dataset". Could you expand on that?

goldbattle commented 3 years ago

You should experiment with enabling the zero-velocity update to handle standstill. When the platform is stationary, the motion is degenerate for any VO/VIO system. For example, if you are stationary, are you able to triangulate 3D features in the environment?
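The degeneracy can be made concrete with a toy linear (DLT) triangulation system: with a baseline between two views the system has enough rank to pin down the 3D point, while with identical poses it drops rank and depth is unobservable. A minimal sketch, illustrative only and not OpenVINS code:

```python
import numpy as np

def dlt_rank(R1, t1, R2, t2, u1, u2):
    """Build the linear (DLT) triangulation system A X = 0 from two
    normalized-image observations u1, u2 and return its rank.
    Recovering a unique 3D point (up to scale) needs rank 3;
    rank 2 means depth along the shared ray is unobservable."""
    P1 = np.hstack([R1, np.reshape(t1, (3, 1))])
    P2 = np.hstack([R2, np.reshape(t2, (3, 1))])
    rows = []
    for P, (x, y) in [(P1, u1), (P2, u2)]:
        rows.append(x * P[2] - P[0])  # standard DLT row pair
        rows.append(y * P[2] - P[1])
    return np.linalg.matrix_rank(np.array(rows))

I3 = np.eye(3)
# Camera translated by 1 m: the point can be triangulated (rank 3).
moving = dlt_rank(I3, [0, 0, 0], I3, [-1.0, 0, 0], (0.0, 0.0), (-0.2, 0.0))
# Camera stationary: both observations lie on the same ray (rank 2).
still = dlt_rank(I3, [0, 0, 0], I3, [0, 0, 0], (0.0, 0.0), (0.0, 0.0))
print(moving, still)  # → 3 2
```

This is why a zero-velocity update helps: instead of trying to triangulate from a zero baseline, the filter explicitly constrains the state to be stationary.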

rakshith95 commented 3 years ago

Okay, that makes sense, thank you. I'm closing the issue for now since, with the launch file you've provided, it works without the run-off if I disable the zero-velocity update.