rpng / open_vins

An open source platform for visual-inertial navigation research.
https://docs.openvins.com
GNU General Public License v3.0
2.2k stars 652 forks

Color coding of features and application in space environment #165

Closed johanndiep closed 3 years ago

johanndiep commented 3 years ago

[image: stereo pair with the estimated path in rviz]

Dear OpenVINS community,

I am testing OpenVINS on a planetary rover dataset and got some promising results after setting the correct visual intrinsic/extrinsic parameters, IMU noise, and thresholds, as well as the pixel noise. The image shows the stereo pair together with the estimated path in rviz. Could you clarify the color coding of the features, in particular the difference between the red and blue features in the optical-flow images?

Furthermore, in my project I am trying to fuse OpenVINS with dedicated lunar satellite signals (similar to GNSS) in order to estimate the performance of a loosely-coupled design on the moon. For this, I want to use the OpenVINS pose and covariance output as a measurement in the loosely-coupled design. Is this approach feasible?

Thank you again for your contribution, I am having a lot of fun with your software.

goldbattle commented 3 years ago

Thanks for taking an interest in the project! The colors in the image denote whether a feature track has stereo or monocular information. Blue means the feature has a matching stereo track in the other image, while red is a monocular-only track. There should also be green squares, which denote the SLAM features that are continuously tracked for as long as possible.
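That convention can be sketched as a tiny helper (hypothetical names, not the actual OpenVINS drawing code), mapping each track type to a BGR color tuple as OpenCV would use:

```python
# Hypothetical sketch of the color convention described above
# (not the actual OpenVINS visualization code).

# BGR colors, as OpenCV expects them
COLOR_STEREO = (255, 0, 0)   # blue: feature has a matching stereo track
COLOR_MONO   = (0, 0, 255)   # red: monocular-only track
COLOR_SLAM   = (0, 255, 0)   # green: long-lived SLAM feature

def feature_color(has_stereo_match: bool, is_slam_feature: bool):
    """Return the display color for a tracked feature."""
    if is_slam_feature:
        return COLOR_SLAM
    return COLOR_STEREO if has_stereo_match else COLOR_MONO
```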

~~What dataset are you using? The one I have seen is the MADMAX: https://onlinelibrary.wiley.com/doi/10.1002/rob.22016 https://datasets.arches-projekt.de/morocco2018/~~

Edit: Just realized you also opened #155, glad you were able to get it converted! Are you able to post the scripts you used?

You can do a loosely-coupled approach; in the end it comes down to what you want. If you do go that route, make sure you check out the ov_secondary project for an example: https://github.com/rpng/ov_secondary
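A minimal sketch of the loosely-coupled idea, assuming the simplest possible setup: the OpenVINS pose output (with its reported covariance) is treated as a direct measurement in an outer EKF. The function name and state layout are hypothetical, purely for illustration:

```python
import numpy as np

def ekf_pose_update(x, P, z, R, H=None):
    """One loosely-coupled EKF update: fuse an external pose measurement z
    (e.g. the OpenVINS pose output) with measurement covariance R
    (e.g. the OpenVINS covariance output) into the outer state x, P."""
    n = x.shape[0]
    if H is None:
        # assume the pose directly observes the leading state entries
        H = np.eye(len(z), n)
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(n) - K @ H) @ P
    return x_new, P_new
```

In a real design you would also have to handle the orientation on the manifold and the correlation between consecutive OpenVINS outputs, which this 1-line-per-step sketch ignores.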

johanndiep commented 3 years ago

Thanks for the response. I will check out this secondary repository for my work (to my understanding, it's the same algorithm plus loop closure, so it should only perform better).

Another question I have is about the initialization of the orientation at the beginning. Is there a possibility to set the orientation of the vision frame in accordance with north-east-up for better comparison with RTK reference data?

As for potential datasets, I tested it on the following two:

Here is the launch file for the Katwijk dataset:

```xml

<node name="run_serial_msckf" pkg="ov_msckf" type="run_serial_msckf" output="screen">

    <!-- bag topics -->
    <param name="topic_imu"      type="string" value="/imu0" />
    <param name="topic_camera0"  type="string" value="/cam0/image_raw" />
    <param name="topic_camera1"  type="string" value="/cam1/image_raw" />
    <rosparam param="stereo_pairs">[0,1]</rosparam>

    <!-- bag parameters -->
    <param name="path_bag"    type="string" value="/home/jdiep/datasets/Katwijk/1_1/part1.bag" />
    <param name="bag_start"   type="int"    value="0" />
    <param name="bag_durr"    type="int"    value="-1" />

    <!-- world/filter parameters -->
    <param name="max_clones"             type="int"    value="20" />
    <param name="max_slam"               type="int"    value="0" />
    <param name="max_cameras"            type="int"    value="2" />  <!-- 2 for stereo, 1 for monocular -->
    <param name="init_window_time"       type="double" value="1" />
    <param name="init_imu_thresh"        type="double" value="0.35" />
    <rosparam param="gravity">[0.0,0.0,9.81]</rosparam>
    <param name="feat_rep_msckf"         type="string" value="GLOBAL_3D" />
    <param name="feat_rep_slam"          type="string" value="GLOBAL_3D" />

    <!-- tracker/extractor parameters -->
    <param name="use_klt"          type="bool"   value="true" />
    <param name="fast_threshold"   type="int"    value="10" />
    <param name="grid_x"           type="int"    value="5" />
    <param name="grid_y"           type="int"    value="3" />
    <param name="min_px_dist"      type="int"    value="10" />
    <param name="num_pts"          type="int"    value="400" />

    <!-- sensor noise values / update -->
    <param name="up_msckf_sigma_px"            type="double"   value="5" />
    <param name="up_msckf_chi2_multipler"      type="double"   value="1" />
    <param name="gyroscope_noise_density"      type="double"   value="0.007769550831721" />
    <param name="gyroscope_random_walk"        type="double"   value="1.000e-5" />
    <param name="accelerometer_noise_density"  type="double"   value="0.001845185465140" />
    <param name="accelerometer_random_walk"    type="double"   value="2.000e-3" />

    <!-- camera intrinsics -->
    <rosparam param="cam0_wh">[1024, 768]</rosparam>
    <rosparam param="cam1_wh">[1024, 768]</rosparam>
    <param name="cam0_is_fisheye" type="bool" value="false" />
    <param name="cam1_is_fisheye" type="bool" value="false" />
    <rosparam param="cam0_k">[8.342560029536488e+02,8.389607387334217e+02,4.977146425239835e+02,3.987726521928708e+02]</rosparam>
    <rosparam param="cam0_d">[-0.359945273497872,0.169265715456672,0,0]</rosparam>
    <rosparam param="cam1_k">[8.371285943173126e+02,8.408162958028008e+02,4.819380406691490e+02,3.914599724721840e+02]</rosparam>
    <rosparam param="cam1_d">[-0.353245255445656,0.156612963967655,0,0]</rosparam>

    <!-- camera extrinsics -->
    <rosparam param="T_C0toI">
        [
        -0.0004024, 0.3081081, -0.9513513, -0.382,
        0.9999999, 0.0000635, -0.0004024, -0.078,
        -0.0000635, -0.9513513, -0.3081081, 0.557,
        0.0, 0.0, 0.0, 1.0
        ]
    </rosparam>
    <rosparam param="T_C1toI">
        [
        -0.0004024, 0.3081081, -0.9513513, -0.382048288,
        0.9999999, 0.0000635, -0.0004024, 0.041999988,
        -0.0000635, -0.9513513, -0.3081081, 0.55699238,
        0.0, 0.0, 0.0, 1.0
        ]
    </rosparam>

</node>

<node type="rviz" name="rviz" pkg="rviz" args="-d $(find ov_msckf)/launch/display.rviz" />

```

Note that the hardest part was finding the IMU noise values. While some parameters were hidden in the IMU datasheets, others are just educated guesses. Furthermore, setting up_msckf_sigma_px to 5 eliminates the drift completely. Do you know why that is? I will definitely dive deeper into the math of OpenVINS in order to understand the details for later research. Our team's focus is primarily the performance analysis of satellite navigation on the moon, so our CV knowledge is a little rusty :)
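Part of the difficulty with datasheet noises is that they are rarely quoted in the continuous-time units the config expects (rad/s/√Hz and m/s²/√Hz). The unit conversions themselves are standard; the example datasheet numbers below are hypothetical:

```python
import math

# Hypothetical datasheet values; only the conversions are the point here.
gyro_nd_deg = 0.01    # gyroscope noise density in deg/s/sqrt(Hz)
accel_nd_ug = 60.0    # accelerometer noise density in micro-g/sqrt(Hz)
g = 9.81              # m/s^2

# Convert into rad/s/sqrt(Hz) and m/s^2/sqrt(Hz)
gyroscope_noise_density = gyro_nd_deg * math.pi / 180.0
accelerometer_noise_density = accel_nd_ug * 1e-6 * g

# If the datasheet instead gives angular random walk in deg/sqrt(hr),
# divide by 60 to get deg/s/sqrt(Hz), then convert to radians:
arw_deg_sqrt_hr = 0.2
gyro_nd_from_arw = (arw_deg_sqrt_hr / 60.0) * math.pi / 180.0
```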

goldbattle commented 3 years ago

For your question of comparing to an RTK trajectory, I recommend you check out the evaluation toolbox. You would align the two trajectories (unless you want to do that in real time, in which case I recommend you take a look at this paper, as you would need to move enough to recover the orientation), and from there you could evaluate the quality, etc.
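Since a gravity-aligned VIO estimate has only a yaw ambiguity with respect to an RTK trajectory, the alignment can be done with a yaw rotation plus translation (4 DOF). A self-contained closed-form sketch (not the evaluation toolbox's actual code) over corresponding position samples:

```python
import numpy as np

def align_yaw_translation(est, gt):
    """Align an estimated xyz trajectory to ground truth with a yaw rotation
    and a translation, suitable for gravity-aligned VIO vs. RTK.
    est, gt: (N, 3) arrays of corresponding positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    e, gth = est - mu_e, gt - mu_g
    # closed-form optimal yaw from the xy correlation terms
    num = np.sum(e[:, 0] * gth[:, 1] - e[:, 1] * gth[:, 0])
    den = np.sum(e[:, 0] * gth[:, 0] + e[:, 1] * gth[:, 1])
    yaw = np.arctan2(num, den)
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    t = mu_g - R @ mu_e
    return R, t
```

After alignment, the residuals `gt - (est @ R.T + t)` give the position errors to summarize (e.g. RMSE).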

As for the noises, they are very difficult, but tuning / selecting them is just part of the process. At their core, they define how much the estimator should trust the IMU versus the camera.
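That trust trade-off can be seen in a toy 1-D Kalman fusion (a deliberately stripped-down illustration, not how the MSCKF update is actually structured): inflating the IMU-driven prediction variance pulls the fused estimate toward the camera measurement, and vice versa.

```python
def fuse(pred, var_pred, meas, var_meas):
    """1-D Kalman fusion: the gain splits trust between the IMU-driven
    prediction and the camera measurement according to their variances."""
    k = var_pred / (var_pred + var_meas)
    return pred + k * (meas - pred), (1.0 - k) * var_pred

# Confident IMU (small prediction variance): estimate stays near the prediction.
x_trust_imu, _ = fuse(0.0, 0.01, 1.0, 1.0)
# Noisy IMU (large prediction variance): estimate moves toward the camera.
x_trust_cam, _ = fuse(0.0, 100.0, 1.0, 1.0)
```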

johanndiep commented 3 years ago

Thanks, I will check those out and keep you updated on this issue.