MAVIS-SLAM / OpenMAVIS

An open-source implementation of MAVIS-SLAM.
GNU General Public License v3.0

running openMAVIS with own multi-camera+IMU setup #7

Closed aimiliosnrobotics closed 1 month ago

aimiliosnrobotics commented 3 months ago

Hello and thank you very much for the nice work. I successfully managed to run openMAVIS, including in the ROS environment, and got correct results. Now I wanted to try running it with my own multi-camera+IMU setup. The node subscribes to the correct topics, and I changed the HiltiChallenge2022.yaml file to describe my setup: 4 pinhole monocular cameras and one IMU. I changed the camera parameters, the transformations between the IMU and three of the cameras, and the camera1-to-camera2 transform. I additionally included the IMU-to-camera2 and camera3-to-camera4 transformations, as those cameras do have some overlapping region.

yaml file:

%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
File.version: "1.0"

Camera.type: "PinHole"

# Front Left Camera calibration and distortion parameters (OpenCV)
Camera1.fx: 262.921256278421
Camera1.fy: 325.9026632888596
Camera1.cx: 261.87488069523823
Camera1.cy: 244.82186155156745

# Pinhole distortion parameters
Camera1.k1: -0.08879808509797722
Camera1.k2: -0.0001562783628174476
Camera1.p1: -0.001070933148056005
Camera1.p2: 0.0004986101296769729

# Front right Camera calibration and distortion parameters (OpenCV)
Camera2.fx: 259.6642896897653
Camera2.fy: 315.37976012833934
Camera2.cx: 258.32077667030046
Camera2.cy: 244.75449418140533

# Pinhole distortion parameters
Camera2.k1: -0.09205712185830049
Camera2.k2: 0.0015326828732811738
Camera2.p1: -0.0020921346304043913
Camera2.p2: -0.00030187990097926246

# Back left Camera calibration and distortion parameters (OpenCV)
Camera3.fx: 318.03866698983046
Camera3.fy: 292.73510709379724
Camera3.cx: 312.0866650628748
Camera3.cy: 240.87987322669773

# Pinhole distortion parameters
Camera3.k1: 0.017537471878744458
Camera3.k2: -0.08593399742893987
Camera3.p1: -0.004625161021380046
Camera3.p2: -0.014879043584530892

# Back right Camera calibration and distortion parameters (OpenCV)
Camera4.fx: 272.99074228350906
Camera4.fy: 306.5962140365146
Camera4.cx: 273.5012980049076
Camera4.cy: 243.91435907918378

# Pinhole distortion parameters
Camera4.k1: -0.06371656575645053
Camera4.k2: -0.007971824837102437
Camera4.p1: -0.0027009818891767902
Camera4.p2: -0.001560804454990342

# Transformation matrix from left camera to right camera
Stereo.T_c1_c2: !!opencv-matrix
  rows: 4
  cols: 4
  dt: f
  data: [0.0000000,  0.0000000,  1.0000000, 0.0883884767, 
          0.0000000,  1.0000000,  0.0000000, 0.0, 
          -1.0000000,  0.0000000,  0.0000000, -0.0883884767 ,
          0.0 , 0.0 , 0.0 , 1.0]

# Overlapping area between images (to be updated)
Camera1.overlappingBegin: 0
Camera1.overlappingEnd: 720

Camera2.overlappingBegin: 0
Camera2.overlappingEnd: 720

Camera3.overlappingBegin: 0   #Left and Side Left
Camera3.overlappingEnd: 720

Camera4.overlappingBegin: 0   #Right and Side Right
Camera4.overlappingEnd: 720

# Camera resolution
Camera.width: 480
Camera.height: 640

# Camera frames per second
Camera.fps: 30

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
Stereo.ThDepth: 40.0

# Transformation from body-frame (imu) to front left camera
IMU.T_b_c1: !!opencv-matrix
   rows: 4
   cols: 4
   dt: f
   data: [-0.7071067812,  0.0000000,  0.7071067812, 0.0625, 
          0.7071067812,  0.0000000,  0.7071067812, 0.3,
          0.0000000,  1.0000000,  0.0000000, 0.04,
          0.0, 0.0, 0.0, 1.0 ]
#from imu to front right camera
IMU.T_b_c2: !!opencv-matrix
   rows: 4
   cols: 4
   dt: f
   data: [-0.7071067812,  0.0000000,  -0.7071067812, -0.0625, 
          -0.7071067812,  0.0000000,  0.7071067812, 0.3,
          0.0000000,  1.0000000,  0.0000000, 0.04,
          0.0, 0.0, 0.0, 1.0 ]

#from imu to back left camera
IMU.T_b_c3: !!opencv-matrix
  rows: 4
  cols: 4
  dt: f
  data: [0.7071067812,  0.0000000,  0.7071067812, 0.0625,
         0.7071067812,  0.0000000, 0.7071067812, -0.3,
         0.0000000,  1.0000000,  0.0000000, 0.04,
         0.0, 0.0, 0.0, 1.0 ]

# imu to back right
IMU.T_b_c4: !!opencv-matrix
  rows: 4
  cols: 4
  dt: f
  #data: [  0.7061478,  0.0000000, -0.7080644;
  #-0.7080644,  0.0000000, -0.7061478;
  # 0.0000000,  1.0000000,  0.0000000 ]
  data: [0.7071067812,  0.0000000,  -0.7071067812, -0.0625,
         -0.7071067812,  0.0000000, -0.7071067812, -0.3,
         0.0000000,  1.0000000,  0.0000000, 0.04,
         0.0, 0.0, 0.0, 1.0 ]
# IMU noise
IMU.NoiseGyro: 0.00047514840397045084 # 0.004 (VINS) # 0.00016 (TUM) # 0.00016    # rad/s^0.5 #0.003491 rad/s (phidget imu) 0.00047514840397045084 
IMU.NoiseAcc: 0.0032171141081418185 # 0.04 (VINS) # 0.0028 (TUM) # 0.0028     # m/s^1.5  #0.0294 m/s2 (phidget imu) 0.0032171141081418185 
IMU.GyroWalk: 1.4322615524203976e-05 # 0.000022 (VINS and TUM) rad/s^1.5 #1.4322615524203976e-05 rad/s (phidget imu)
IMU.AccWalk: 0.002008277168849152 # 0.0004 (VINS) # 0.00086 # 0.00086    # m/s^2.5 #0.002008277168849152  (phidget imu)
IMU.Frequency: 250.0

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 500 # Tested with 1250

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 15
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1.0
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2.0
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3.0
Viewer.ViewpointX: 0.0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -3.5
Viewer.ViewpointF: 500.0
Viewer.imageViewScale: 0.45

#--------------------------------------------------------------------------------------------
# Map Parameters
#--------------------------------------------------------------------------------------------
# If the LoadFile doesn't exist, the system gives a message and creates a new Atlas from scratch
#System.LoadAtlasFromFile: "saved_map"
#
# The store file is created from the current session; if a file with the same name exists, it is deleted
#System.SaveAtlasToFile: "saved_map"

However, the node isn't running. More specifically, I can see that only 3 cameras are loaded into the Atlas. Do you have any idea why the 4th camera isn't loaded, and can you spot anything wrong in the configuration file? Moreover, do you think such a setup could work with openMAVIS without further changes? I can successfully run all the other examples with the Hilti datasets, also as ROS nodes. Thank you very much in advance!

terminal message:

Input sensor was set to: Multi-Inertial
Loading settings from /home/aimpet/mobrob_ws/src/orb_slam3_ros_wrapper/config/multi.yaml
Camera1.k3 optional parameter does not exist...
    -Loaded camera 1
Camera2.k3 optional parameter does not exist...
    -Loaded camera 2
Camera3.k3 optional parameter does not exist...
Camera4.k3 optional parameter does not exist...
    -Loaded camera 3 & 4
Camera.newHeight optional parameter does not exist...
Camera.newWidth optional parameter does not exist...
    -Loaded image info
IMU.InsertKFsWhenLost optional parameter does not exist...
    -Loaded IMU calibration
    -Loaded ORB settings
    -Loaded viewer settings
System.LoadAtlasFromFile optional parameter does not exist...
System.SaveAtlasToFile optional parameter does not exist...
    -Loaded Atlas settings
System.thFarPoints optional parameter does not exist...
    -Loaded misc parameters
    -Computed rectification maps
----------------------------------
SLAM settings: 
    -Camera 1 parameters (Pinhole): [ 262.921 325.903 261.875 244.822 ]
    -Camera 1 distortion parameters: [  -0.0887981 -0.000156278 -0.00107093 0.00049861 ]
    -Camera 2 parameters (Pinhole): [ 259.664 315.38 258.321 244.755 ]
    -Camera 2 distortion parameters: [  -0.0920571 0.00153268 -0.00209213 -0.00030188 ]
    -Camera 3 parameters (Pinhole): [ 318.039 292.735 312.087 240.88 ]
    -Camera 4 parameters (Pinhole): [ 272.991 306.596 273.501 243.914 ]
    -Camera 3 distortion parameters: [  -0.0920571 0.00153268 -0.00209213 -0.00030188 ]
    -Camera 4 distortion parameters: [  -0.0637166 -0.00797183 -0.00270098 -0.0015608 ]
    -Original image size: [ 480 , 640 ]
    -Current image size: [ 480 , 640 ]
    -Camera 1 parameters after rectification: [  320.641 320.641 24433.5 -19897.5 ]
    -Sequence FPS: 30
    -Stereo baseline: 0.125
    -Stereo depth threshold : 40
    -Features per image: 500
    -ORB scale factor: 1.2
    -ORB number of scales: 8
    -Initial FAST threshold: 15
    -Min FAST threshold: 7

Loading ORB Vocabulary. This could take a while...
[ INFO] [1724853033.216020745]: rviz version 1.14.25
[ INFO] [1724853033.216048475]: compiled against Qt version 5.12.8
[ INFO] [1724853033.216054643]: compiled against OGRE version 1.9.0 (Ghadamon)
[ INFO] [1724853033.221807701]: Forcing OpenGl version 0.
[ INFO] [1724853033.357413244]: Stereo is NOT SUPPORTED
[ INFO] [1724853033.357452318]: OpenGL device: Mesa Intel(R) Graphics (ADL GT2)
[ INFO] [1724853033.357463178]: OpenGl version: 4,6 (GLSL 4,6) limited to GLSL 1.4 on Mesa system.
Vocabulary loaded!

Initialization of Atlas from scratch 
Creation of new map with id: 0
Creation of new map with last KF id: 0
Seq. Name: 
There are 3 cameras in the atlas
Camera 0 is pinhole
Camera 4 is pinhole
Camera 6 is pinhole
Starting the Viewer
[orb_slam3_multi-1] process has died [pid 121530, exit code -11, cmd /home/aimpet/mobrob_ws/devel/lib/orb_slam3_ros_wrapper/orb_slam3_ros_wrapper_multi /camera0/image_raw:=/camera0/image_raw /camera1/image_raw:=/camera1/image_raw /camera2/image_raw:=/camera2/image_raw /camera3/image_raw:=/camera3/image_raw /imu:=/imu/data __name:=orb_slam3_multi __log:=/home/aimpet/.ros/log/119acc2e-653a-11ef-abe3-5dba2527f57e/orb_slam3_multi-1.log].
log file: /home/aimpet/.ros/log/119acc2e-653a-11ef-abe3-5dba2527f57e/orb_slam3_multi-1*.log
^C[orb_slam3_ros/trajectory_server_orb_slam3-3] killing on exit
[rviz-2] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done
1fwang commented 3 months ago

There are 3 cameras in the atlas
Camera 0 is pinhole
Camera 4 is pinhole
Camera 6 is pinhole

Hi, I guess the issue is caused by the "missing Camera 2": in line 566 of Tracking.cpp, an additional camera is only introduced if the camera model is KannalaBrandt. Maybe you can modify this part to see if it works.
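For context, the check in question boils down to something like the following (a minimal, self-contained sketch with illustrative names, not the literal Tracking.cpp code); extending the branch so it also accepts the pinhole model is the modification suggested above:

#include <iostream>
#include <string>

struct Camera { std::string model; };

// Mirrors the shape of the guard: extra cameras are only registered
// for the fisheye model, so a "PinHole" camera is silently skipped.
void addAdditionalCamera(const Camera& cam) {
    if (cam.model == "KannalaBrandt8") {
        std::cout << "additional camera registered\n";
    }
    // a "PinHole" camera falls through here without being registered,
    // matching the "There are 3 cameras in the atlas" symptom
}

int main() {
    addAdditionalCamera({"KannalaBrandt8"}); // registered
    addAdditionalCamera({"PinHole"});        // skipped
}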

BTW, I am also curious about the performance of openMAVIS on your own setup. Would you mind sharing a video once you solve the issue?

BR, Yifu

aimiliosnrobotics commented 3 months ago

Hello Yifu, thank you very much for the quick response. I will take a look into it. Of course, if I can make it run, I will tell you about its performance and share results. BR, Aimilios

jieli1990 commented 3 months ago

@aimiliosnrobotics Did you get it to run?

aimiliosnrobotics commented 3 months ago

Hello @jieli1990, I solved the issue with the 2nd camera and everything seems to be loaded correctly, but the node still crashes. I noticed that it crashes the moment it receives the IMU data.

1fwang commented 3 months ago

Maybe have a look at the timestamps. Or you can post the logs here and we can have a look.
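For example, a quick offline check (a sketch; stamps are seconds as doubles, and the sample values below are made up) that reports the offset from an image stamp to the nearest IMU measurement:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// imuStamps must be sorted ascending
double nearestImuOffset(double imageStamp, const std::vector<double>& imuStamps) {
    auto it = std::lower_bound(imuStamps.begin(), imuStamps.end(), imageStamp);
    double best = 1e9;
    if (it != imuStamps.end())
        best = std::min(best, std::fabs(*it - imageStamp));
    if (it != imuStamps.begin())
        best = std::min(best, std::fabs(*(it - 1) - imageStamp));
    return best;
}

int main() {
    std::vector<double> imu = {0.000, 0.004, 0.008, 0.012}; // 250 Hz IMU
    double img = 0.0051;                                    // one 30 fps frame stamp
    std::printf("offset to nearest IMU sample: %.4f s\n",
                nearestImuOffset(img, imu));
    // a consistently large or drifting offset points at a sync problem
}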

jieli1990 commented 3 months ago

I have the same issue, and I'm planning to debug it

aimiliosnrobotics commented 3 months ago

rosout.log roslaunch-aimpet-Victus-by-HP-Laptop-16-d1xxx-20495.log master.log @1fwang thanks, here are the relevant log files from when I run the openMAVIS node. There could be a synchronization issue between the IMU and the cameras, and VI synchronization may be needed.

aimiliosnrobotics commented 3 months ago

So @1fwang @jieli1990, I was able to run the node with the old configuration file HiltiChallenge2022.yaml and have it subscribe to my camera and IMU topics. The node then doesn't crash, but of course nothing is extracted, since all camera settings, types, and transformations are wrong. I tried to adapt the configuration file step by step for my case, but the moment I change the cameras to the "PinHole" model and add the mandatory Camera1.p1, Camera1.p2, etc., the node crashes immediately. (It doesn't matter what it receives first; it crashes as soon as all topics are being published and the node is running.) So I guess the issue has nothing to do with the IMU, and the openMAVIS package simply needs more changes to work with PinHole cameras. Do you have anything in mind that should be changed? Thanks in advance!

jieli1990 commented 3 months ago

@aimiliosnrobotics I'm currently debugging, and I've identified that the issue occurs in the Frame constructor, before the Track method. The specific problem still needs further investigation.

aimiliosnrobotics commented 3 months ago

@jieli1990 Thank you very much for the info. I will also look into this and share if I find anything relevant.

jieli1990 commented 3 months ago

@aimiliosnrobotics The issue is in Frame.cc, lines 1546 and 1547. Since the camera type has been changed to pinhole, the corresponding types on these two lines also need to be modified.

aimiliosnrobotics commented 3 months ago

@jieli1990 Have you also fixed the issue that class ORB_SLAM3::Pinhole has no member mvLappingArea? Or did you just copy those functions into the Pinhole class?

jieli1990 commented 3 months ago

@aimiliosnrobotics You don't need to consider this parameter. You can refer to the default stereo frame construction in ORB-SLAM3 and modify it accordingly. Since you have already switched to the default pinhole camera model, there's no need to compute the fisheye overlapping area anymore.

jieli1990 commented 3 months ago

@aimiliosnrobotics Change the original line

thread threadLeft(&Frame::ExtractORB, this, 0, bleft, imLeft, static_cast<KannalaBrandt8*>(mpCamera)->mvLappingArea[0], static_cast<KannalaBrandt8*>(mpCamera)->mvLappingArea[1]);

to

thread threadLeft(&Frame::ExtractORB, this, 0, bleft, imLeft, 0, 0);

aimiliosnrobotics commented 3 months ago

I changed these 2 lines like this:

thread threadLeft(&Frame::ExtractORB, this, 0, bleft, imLeft, 0, 0);
thread threadRight(&Frame::ExtractORB, this, 1, bright, imRight, 0, 0);

but it threw a thread error:

terminate called after throwing an instance of 'std::system_error'
  what():  Operation not permitted

Am I missing something, or is the thread problem coming from elsewhere? Did you manage to run your implementation?

jieli1990 commented 3 months ago

@aimiliosnrobotics I'm currently debugging

aimiliosnrobotics commented 3 months ago

@jieli1990 For me, I think the problem is that no scale level info, keypoints, or descriptors are found, and the code crashes at the return; in line 1567. So somehow the ORB extractor isn't handled correctly for the 4 mono cameras. I printed the scale level info and then the extracted keypoints:

ORB INFO: mnScaleLevels: 0 mfScaleFactor: 0 mfLogScaleFactor: 0 mvScaleFactors: 0 mvInvScaleFactors: 0 mvLevelSigma2: 0 mvInvLevelSigma2: 0
Number of keypoints: Left Camera: 0 Right Camera: 0 Side Left Camera: 0 Side Right Camera: 0 Total Keypoints: 0

I will try to look into that.

jieli1990 commented 3 months ago

@aimiliosnrobotics I encountered the same problem

jieli1990 commented 3 months ago

@aimiliosnrobotics System.cc line 342:

// if(settings && settings->needToRectify()){
//     cv::Mat M1l = settings->M1l();
//     cv::Mat M2l = settings->M2l();
//     cv::Mat M1r = settings->M1r();
//     cv::Mat M2r = settings->M2r();
//     cv::remap(imLeft, imLeftToFeed, M1l, M2l, cv::INTER_LINEAR);
//     cv::remap(imRight, imRightToFeed, M1r, M2r, cv::INTER_LINEAR);
// }
// else if(settings && settings->needToResize()){
//     cv::resize(imLeft, imLeftToFeed, settings->newImSize());
//     cv::resize(imRight, imRightToFeed, settings->newImSize());
//     cv::resize(imSideLeft, imSideLeftToFeed, settings->newImSize());
//     cv::resize(imSideRight, imSideRightToFeed, settings->newImSize());
// }
// else{
imLeftToFeed = imLeft.clone();
imRightToFeed = imRight.clone();
imSideLeftToFeed = imSideLeft.clone();
imSideRightToFeed = imSideRight.clone();
// }

1fwang commented 3 months ago

@jieli1990 @aimiliosnrobotics I have gone through your whole discussion; it might be a bit annoying to add support for the 2 pinhole cameras. Just curious: have you considered re-calibrating the cameras with the fisheye model for testing? I think that may be the easiest way to run your own cameras...

jieli1990 commented 3 months ago

@1fwang That's right, but after some modifications the pinhole camera now runs very well too. Thank you for your suggestion.

aimiliosnrobotics commented 3 months ago

@jieli1990 Thanks for the update. The program now indeed runs without crashing. Did you have to change anything else to make it run properly? In my case everything is running, but I can't see anything detected in the viewer and the SLAM isn't working properly (I can't see the point cloud, and the trajectory isn't updated when I move the camera). The map points are always 0, so I think no features/points are being detected right now. Btw, are you passing rectified images or camera_raw from your mono cameras? Did you alter the ORB parameters in the yaml file at all? I think it's most probably a camera parameters or calibration issue, however.

jieli1990 commented 3 months ago

@aimiliosnrobotics I made a lot of changes and recalibrated the extrinsic parameters (https://github.com/UZ-SLAMLab/ORB_SLAM3/issues/736). I recommend sticking with the default settings and using a fisheye camera.

aimiliosnrobotics commented 3 months ago

@jieli1990 Thanks for the reference. In the end, though, were you able to make it run successfully with a multi-monocular pinhole + IMU setup?

jieli1990 commented 3 months ago

@aimiliosnrobotics It can be run successfully

aimiliosnrobotics commented 3 months ago

@jieli1990 The weird thing is that I can successfully run the single camera+IMU implementation (mono_inertial) with my setup for all 4 cameras separately plus the IMU, once I tested again with more initial motion. But when I run the multi-camera implementation, I always get initialization errors like the ones shown in this issue (https://github.com/UZ-SLAMLab/ORB_SLAM3/issues/736), although I'm using the same ORB settings in both yaml files. Does the 4-camera implementation maybe need more precision in the extrinsics and intrinsics of the cameras+IMU? My calibration is not good, but it was enough for the single mono+IMU case to run successfully.

mono_imu [attached image: running the 1 mono camera + IMU setup]

The terminal output for the multi-camera setup is always the same, independent of the dataset used.

There are 4 cameras in the atlas
Camera 0 is pinhole
Camera 2 is pinhole
Camera 4 is pinhole
Camera 6 is pinhole
Starting the Viewer
Empty IMU measurements vector!!!
not IMU meas
not IMU meas
First KF:0; Map init KF:0
New Map created with 0 points
TRACK_REF_KF: Less than 15 matches!!
Fail to track local map!
LM: Active map reset recieved
LM: Active map reset, waiting...
LM: Reseting current map in Local Mapping...
LM: End reseting Local Mapping...
LM: Reset free the mutex
LM: Active map reset, Done!!!
mnFirstFrameId = 2
mnInitialFrameId = 0
1 Frames set to lost
not IMU meas
First KF:1; Map init KF:0
New Map created with 0 points
TRACK_REF_KF: Less than 15 matches!!
Fail to track local map!
LM: Active map reset recieved
LM: Active map reset, waiting...
LM: Reseting current map in Local Mapping...
LM: End reseting Local Mapping...
LM: Reset free the mutex
LM: Active map reset, Done!!!
mnFirstFrameId = 5
mnInitialFrameId = 3
2 Frames set to lost
not IMU meas
First KF:2; Map init KF:1
New Map created with 0 points
TRACK_REF_KF: Less than 15 matches!!
Fail to track local map!
LM: Active map reset recieved
LM: Active map reset, waiting...
LM: Reseting current map in Local Mapping...
LM: End reseting Local Mapping...
LM: Reset free the mutex
LM: Active map reset, Done!!!
mnFirstFrameId = 8
mnInitialFrameId = 6
3 Frames set to lost
not IMU meas
First KF:3; Map init KF:2
New Map created with 0 points
TRACK_REF_KF: Less than 15 matches!!
Fail to track local map!
LM: Active map reset recieved
LM: Active map reset, waiting...
LM: Reseting current map in Local Mapping...
LM: End reseting Local Mapping...
LM: Reset free the mutex
LM: Active map reset, Done!!!
mnFirstFrameId = 11
mnInitialFrameId = 9
4 Frames set to lost
not IMU meas
.
.
.
61 Frames set to lost
not IMU meas
First KF:61; Map init KF:60
New Map created with 0 points
TRACK_REF_KF: Less than 15 matches!!
Fail to track local map!
LM: Active map reset recieved
LM: Active map reset, waiting...
LM: Reseting current map in Local Mapping...
LM: End reseting Local Mapping...
LM: Reset free the mutex
LM: Active map reset, Done!!!
mnFirstFrameId = 185
mnInitialFrameId = 183
62 Frames set to lost
not IMU meas
First KF:62; Map init KF:61
New Map created with 0 points
1fwang commented 2 months ago

I recommend trying a stereo camera pair first, to see if it works for you. From the printed output, I suspect it might be caused by imperfect calibration or synchronization.

aimiliosnrobotics commented 2 months ago

@1fwang Thank you very much for the tip! I think you are correct; my multi-camera and camera-IMU calibration is still not good. I will try to improve it. How could I take care of the synchronization issue? Calibration with Kalibr also returns a time shift value in the results. How could I take it into account when using openMAVIS? Thanks in advance!

1fwang commented 2 months ago

I didn't implement this in openMAVIS, but you could try adding a timeshift to the IMU timestamps during the IMU data pre-loading. This approach is only valid if all cameras are well synchronized and the IMU is not. Since you mentioned that each monocular camera-IMU setup works well, there likely isn't a severe synchronization issue causing the failure. I recommend looking more closely at the extrinsics. Your output "New Map created with 0 points" suggests that no 3D points were triangulated during the stereo initialization stage. However, the rotation part of the Stereo.T_c1_c2 matrix should be close to identity, and that doesn't seem to be the case in your calibration file.
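A sketch of that timeshift idea (the ImuGrabber class and its members here are illustrative, not openMAVIS code), using the Kalibr convention t_imu = t_cam + shift, so subtracting the shift moves the IMU stamps onto the camera clock:

#include <mutex>
#include <vector>

struct ImuPoint { double t, ax, ay, az, gx, gy, gz; };

class ImuGrabber {
public:
    explicit ImuGrabber(double timeshift) : mTimeshift(timeshift) {}

    // called once per incoming IMU message, before the measurement
    // is handed to the tracker
    void GrabImu(ImuPoint m) {
        std::lock_guard<std::mutex> lock(mBufMutex);
        m.t -= mTimeshift;  // Kalibr: t_imu = t_cam + shift  =>  t_cam = t_imu - shift
        mBuf.push_back(m);
    }

private:
    double mTimeshift;      // the time shift reported by Kalibr
    std::vector<ImuPoint> mBuf;
    std::mutex mBufMutex;
};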

aimiliosnrobotics commented 2 months ago

@1fwang Why should it be close to identity? My cameras have a 90-degree rotation between them, one looking 45 degrees to the right and one to the left. As a result they have a smaller overlapping region. Do you think this setup could be a problem for multi-camera SLAM?

1fwang commented 2 months ago

I got your point. In such a case the limited overlapping area may impact the initialization; if there are not sufficient landmarks in the overlap, the initialization will fail. Maybe you can try increasing the number of features from 500 to 1250.

aimiliosnrobotics commented 2 months ago

@1fwang Yes, I already tried with 1250, 1500, and even more. But my Stereo.T_c1_c2, as well as T_b_c1, T_b_c2, T_b_c3, and T_b_c4, are still not accurate enough, as I had problems with calibration. I think I should fix this before proceeding with any further alterations. In the Pangolin viewer I see MPs: 0 and Matches: 0 for all 4 cameras.

aimiliosnrobotics commented 2 months ago

@1fwang @jieli1990 For testing, I changed the configuration of my front cameras to be parallel, so that there is complete overlap, in order to address the lack of overlapping region. I managed to calibrate them, properly calibrated all the camera-IMU pairs, and changed the configuration file accordingly:

%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
File.version: "1.0"

Camera.type: "PinHole"

# Front Left Camera calibration and distortion parameters (OpenCV)
Camera1.fx: 261.7774568443223
Camera1.fy: 261.4213090945677
Camera1.cx: 314.78845743404094
Camera1.cy: 244.98018387560458

# Pinhole distortion parameters
Camera1.k1: -0.07625197724400767
Camera1.k2: -0.0060102183246690385
Camera1.p1: -0.0006912969140677971
Camera1.p2: 0.0008002288751723359

# Front right Camera calibration and distortion parameters (OpenCV)
Camera2.fx: 262.02936726041224
Camera2.fy: 262.0001309029258
Camera2.cx: 322.86655652193826
Camera2.cy: 242.38410339327186

# Pinhole distortion parameters
Camera2.k1: -0.0634002514680055
Camera2.k2: -0.01306052665455828
Camera2.p1: 0.0003265563147982843
Camera2.p2: -0.0010293734670212237

# Back left Camera calibration and distortion parameters (OpenCV)
Camera3.fx: 296.5667150171772
Camera3.fy: 295.93553039502285
Camera3.cx: 318.7600246248298
Camera3.cy: 247.54971946838546

# Pinhole distortion parameters
Camera3.k1: 0.009415652704260452
Camera3.k2: -0.04739159421795972
Camera3.p1: -0.0005812573729222417
Camera3.p2: -0.0008056715951873222

# Back right Camera calibration and distortion parameters (OpenCV)
Camera4.fx: 257.3845212927142
Camera4.fy: 256.20951679088733
Camera4.cx: 317.52689278419507
Camera4.cy: 241.77539627522034

# Pinhole distortion parameters
Camera4.k1: -0.08378724694195346
Camera4.k2: -0.0011871330794131138
Camera4.p1: 0.0004342467537907225
Camera4.p2: -0.0011963561313667228

# Transformation matrix from left camera to right camera
Stereo.T_c1_c2: !!opencv-matrix
  rows: 4
  cols: 4
  dt: f
  data: [0.99893805, -0.02816905, -0.03645935, 0.11399069,
         0.02797767, 0.99959202, -0.00574886, 0.00318766,
         0.03660642, 0.0047227, 0.9993186, 0.00224665,
         0.0,            0.0,          0.0,           1.0]

# Transformation matrix from right camera to left camera
#Stereo.T_c1_c2: !!opencv-matrix
#  rows: 4
#  cols: 4
#  dt: f
#  data: [0.9989855, -0.0288062, -0.0346154, -0.11403414,
#         0.0286680,  0.9995790, -0.0044802, 0.00009836,
#         0.0347299,  0.0034833,  0.9993907 , 0.00176484,
#         0.0,            0.0,           0.0, 1.0]
# Overlapping area between images (to be updated)
Camera1.overlappingBegin: 0
Camera1.overlappingEnd: 640        

Camera2.overlappingBegin: 0
Camera2.overlappingEnd: 640 

Camera3.overlappingBegin: 0   #Left and Side Left
Camera3.overlappingEnd: 640

Camera4.overlappingBegin: 0   #Right and Side Right
Camera4.overlappingEnd: 640

# Camera resolution

Camera.width: 640
Camera.height: 480

#Camera.newWidth: 640
#Camera.newHeight: 480

# Camera frames per second
Camera.fps: 30

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
Stereo.ThDepth: 40.0

# Transformation from body-frame (imu) to left camera
IMU.T_b_c1: !!opencv-matrix
   rows: 4
   cols: 4
   dt: f
   data: [-0.99992393, 0.01138955, 0.00473435, -0.07421351,
          0.00420864, -0.04574619, 0.99894423, -0.0269732,
          0.0115941, 0.99888816, 0.04569478, -0.32632594,
          0.0, 0.0, 0.0, 1.0 ]

#camera0 to imu transform
#IMU.T_b_c1: !!opencv-matrix
#   rows: 4
#   cols: 4
#   dt: f
#   data: [-0.99992393 0.00420864 0.0115941 -0.07031088]
#[ 0.01138955 -0.04574619 0.99888816 0.32557446]
#[ 0.00473435 0.99894423 0.04569478 0.04220747]
#          0.0, 0.0, 0.0, 1.0]

#c2
#IMU.T_b_c2: !!opencv-matrix
#   rows: 4
#   cols: 4
#   dt: f
#   data: [-0.99968562, 0.02505612, 0.00091833, -0.07877293,
#          -0.00048431, -0.05591632, 0.99843534, -0.02700718,
#          0.02506827, 0.99812101, 0.05591087, -0.32708136,
#          0.0 , 0.0 , 0.0 , 1.0]

#camera1 to imu transform
#IMU.T_b_c2: !!opencv-matrix
#   rows: 4
#   cols: 4
#   dt: f
#   data: [-0.99968562, -0.00048431, 0.02506827, -0.07056189,
#           0.02505612, -0.05591632, 0.99812101, 0.32693038,
#           0.00091833, 0.99843534, 0.05591087, 0.04532467,
#           0.0 , 0.0 , 0.0 , 1.0]
#timeshift cam0 to imu0 [s] (t_imu = t_cam + shift)

#IMU.T_b_c3: !!opencv-matrix
#  rows: 4
#  cols: 4
#  dt: f
#  data: [0.70948581, 0.70471558, 0.00241793, 0.17170214,
#         0.03052121, -0.03415528, 0.99895039, -0.05600006,
#         0.70405848, -0.70866732, -0.04574146, -0.27256051,
#         0.0 , 0.0, 0.0, 1.0 ]
# Transformation from side-left camera to body-frame (imu) #from imu to back left camera
IMU.T_b_c3: !!opencv-matrix
  rows: 4
  cols: 4
  dt: f
  data: [0.70948581, 0.03052121, 0.70405848, 0.0717875,
         0.70471558, -0.03415528, -0.70866732, -0.3160686,
         0.00241793, 0.99895039, -0.04574146, 0.0430588,
         0.0, 0.0, 0.0, 1.0]

#IMU.T_b_c4: !!opencv-matrix
#  rows: 4
#  cols: 4
#  dt: f
#  data: [0.70944492, -0.70474204, 0.00515361, -0.17963957,
#         -0.01839184, -0.0112035, 0.99976808, -0.05764265,
#         -0.70452087, -0.70937517, -0.02090977, -0.2650861,
#         0.0 , 0.0, 0.0, 1.0 ]
# Transformation from side-right camera to body-frame (imu) #imu to left right

IMU.T_b_c4: !!opencv-matrix
  rows: 4
  cols: 4
  dt: f
  data: [0.70944492, -0.01839184, -0.70452087, -0.06037446,
         -0.70474204, -0.0112035, -0.70937517, -0.31529086,
         0.00515361, 0.99976808, -0.02090977, 0.05301219,
         0.0, 0.0, 0.0, 1.0]
# IMU noise
IMU.NoiseGyro: 0.00047514840397045084 # 0.004 (VINS) # 0.00016 (TUM) # 0.00016    # rad/s^0.5 #0.003491 rad/s (phidget imu) 0.00047514840397045084 
IMU.NoiseAcc: 0.0032171141081418185 # 0.04 (VINS) # 0.0028 (TUM) # 0.0028     # m/s^1.5  #0.0294 m/s2 (phidget imu) 0.0032171141081418185 
IMU.GyroWalk: 1.4322615524203976e-05 # 0.000022 (VINS and TUM) rad/s^1.5 #1.4322615524203976e-05 rad/s (phidget imu)
IMU.AccWalk: 0.002008277168849152 # 0.0004 (VINS) # 0.00086 # 0.00086    # m/s^2.5 #0.002008277168849152  (phidget imu)
IMU.Frequency: 250.0

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 2000 # Tested with 1250

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 15
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1.0
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2.0
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3.0
Viewer.ViewpointX: 0.0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -3.5
Viewer.ViewpointF: 500.0
Viewer.imageViewScale: 0.45

#--------------------------------------------------------------------------------------------
# Map Parameters
#--------------------------------------------------------------------------------------------
# If the LoadFile doesn't exist, the system gives a message and creates a new Atlas from scratch
#System.LoadAtlasFromFile: "saved_map"
#
# The store file is created from the current session; if a file with the same name exists, it is deleted
#System.SaveAtlasToFile: "saved_map"

Moreover, I also applied the timeshift between the IMU and the cameras that I found while calibrating. Unfortunately, I still get the same failed-initialization error as before. I had assumed it was caused by the lack of overlapping region between the 2 front cameras, which made stereo triangulation impossible; that's why I changed the hardware and tried a parallel view, which provides a large overlapping region. But the error persists.

I printed the total number of features mCurrentFrame.N, and it is always large enough, but when I check mCurrentFrame.mvLeftToRightMatch[i] it is always -1 for all features. I also checked that matcher.SearchByBoW(mpReferenceKF, mCurrentFrame, vpMapPointMatches) always returns 0, even after making the threshold much bigger. So the stereo initialization can't match any feature points between the 2 cameras, although there are features (I printed the features at each frame for the current and reference frame), a big overlap region, and a good enough calibration. Is there any other way this issue could arise?

Moreover, I'm a little confused by the transformations given in the HiltiChallenge2022.yaml file. Camera1 is assumed to be the left camera, is that correct? But then I find some inconsistencies: the transformation from the IMU to the left camera is given, but then the transformation from the side-left camera to the IMU (see the file below). Is that just a mistake in the comments, or is this behavior expected? It also says that T_c1_c2 is the transformation from the right camera to the left camera, but that doesn't make sense with a translation in x of 0.1. Have I possibly mixed something up concerning the position of the cameras and the transforms expected by openMAVIS? Interestingly, the error I get with my multi-camera setup looks exactly the same when I run the out-of-the-box multi-inertial Hilti Challenge configuration but swap the streams of the front left and right cameras. That's why I'm thinking the error may lie in how the openMAVIS package expects the data and how I should provide the transforms in the configuration file.


%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
File.version: "1.0"

Camera.type: "KannalaBrandt8"

# Left Camera calibration and distortion parameters (OpenCV)
Camera1.fx: 352.8913557983792
Camera1.fy: 352.7847153190321
Camera1.cx: 346.97323452281586
Camera1.cy: 271.81100577288737

# Kannala-Brandt distortion parameters
Camera1.k1: -0.043389324930215536
Camera1.k2: 0.003107640448830285
Camera1.k3: -0.0037427054744242987
Camera1.k4: 0.0007777186933455688

# Right Camera calibration and distortion parameters (OpenCV)
Camera2.fx: 351.8009892830421
Camera2.fy: 351.6280684414713
Camera2.cx: 367.8012904752455
Camera2.cy: 254.8007945841205

# Kannala-Brandt distortion parameters
Camera2.k1: -0.04338445691750529
Camera2.k2: 0.0041968714754669675
Camera2.k3: -0.003800042168422922
Camera2.k4: 0.0006032811514173166

# Side-Left Camera calibration and distortion parameters (OpenCV)
Camera3.fx: 349.15902033416296
Camera3.fy: 349.19507664547535
Camera3.cx: 342.4496484912912
Camera3.cy: 259.7918462559407

# KannalaBrandt8 distortion parameters
Camera3.k1: -0.038573120123828596
Camera3.k2: -0.001592762006804195
Camera3.k3: 0.00026012483984812467
Camera3.k4: -0.0004093066719415896

# Side-Right Camera calibration and distortion parameters (OpenCV)
Camera4.fx: 350.4224918214405
Camera4.fy: 350.42698873973904
Camera4.cx: 364.45586188748047
Camera4.cy: 266.1758342332095

# KannalaBrandt8 distortion parameters
Camera4.k1: -0.037908345844681106
Camera4.k2: -0.00012037903855186935
Camera4.k3: -0.0011713947305290857
Camera4.k4: 4.0049266665953e-05

# Transformation matrix from right camera to left camera
Stereo.T_c1_c2: !!opencv-matrix
  rows: 4
  cols: 4
  dt: f
  data: [ 0.9999810732443721, 0.004682955447953159, -0.003990373579562297, 0.10914435294524377,
          -0.004690025079057252, 0.9999874456240819, -0.001764159567587884, 0.00035092962959468014,
          0.003982062002254234, 0.001782841129933816, 0.9999904822845636, 0.00042162596612784647,
          0, 0, 0, 1]

# Overlapping area between images (to be updated)
Camera1.overlappingBegin: 0
Camera1.overlappingEnd: 720

Camera2.overlappingBegin: 0
Camera2.overlappingEnd: 720

Camera3.overlappingBegin: 0   #Left and Side Left
Camera3.overlappingEnd: 720

Camera4.overlappingBegin: 0   #Right and Side Right
Camera4.overlappingEnd: 720

# Camera resolution
Camera.width: 720
Camera.height: 540

# Camera frames per second
Camera.fps: 30

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
Stereo.ThDepth: 40.0

# Transformation from body-frame (imu) to left camera
IMU.T_b_c1: !!opencv-matrix
   rows: 4
   cols: 4
   dt: f
   data: [0.00348369, -0.00197191, 0.99999199, 0.04741809,
          0.99999365, 0.00076353, -0.00348219, -0.06272441,
          -0.00075666, 0.99999776, 0.00197456, -0.00873926,
          0, 0, 0, 1]

# Transformation from side-left camera to body-frame (imu)
IMU.T_b_c3: !!opencv-matrix
  rows: 4
  cols: 4
  dt: f
  data: [0.99999204, 0.00394231, -0.00062215, -0.00814651,
         -0.00059289, -0.0074175, -0.99997231, -0.08003969,
         -0.00394681, 0.99996472, -0.0074151, -0.00975499,
         0, 0, 0, 1]

# Transformation from side-right camera to body-frame (imu)
IMU.T_b_c4: !!opencv-matrix
  rows: 4
  cols: 4
  dt: f
  data: [-0.99988962, 0.01238942, 0.00820085, -0.00606784,
         0.0081229, -0.00632904, 0.99994698, 0.06457457,
         0.01244067, 0.99990322, 0.00622771, -0.00765811,
         0, 0, 0, 1]

# IMU noise
IMU.NoiseGyro: 0.000171 # 0.004 (VINS) # 0.00016 (TUM) # 0.00016    # rad/s^0.5
IMU.NoiseAcc: 0.0086 # 0.04 (VINS) # 0.0028 (TUM) # 0.0028     # m/s^1.5
IMU.GyroWalk: 3.1e-06 # 0.000022 (VINS and TUM) rad/s^1.5
IMU.AccWalk: 0.00022 # 0.0004 (VINS) # 0.00086 # 0.00086    # m/s^2.5
IMU.Frequency: 400.0

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 500 # Tested with 1250

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 15
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1.0
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2.0
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3.0
Viewer.ViewpointX: 0.0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -3.5
Viewer.ViewpointF: 500.0
Viewer.imageViewScale: 0.45

#--------------------------------------------------------------------------------------------
# Map Parameters
#--------------------------------------------------------------------------------------------
# If the LoadFile doesn't exist, the system gives a message and creates a new Atlas from scratch
#System.LoadAtlasFromFile: "saved_map"
#
# The store file is created from the current session; if a file with the same name exists, it is deleted
#System.SaveAtlasToFile: "saved_map"

transforms [attached image]
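A sanity check for the convention question above (a sketch assuming Eigen; the identity placeholders stand in for the values from the yaml): if both IMU.T_b_c1 and IMU.T_b_c2 map camera points into the body/IMU frame, then the derived T_c1_c2 = T_b_c1^-1 * T_b_c2 should match the Stereo.T_c1_c2 entry, and a large mismatch means the transform directions are mixed up.

#include <Eigen/Dense>
#include <iostream>

int main() {
    // fill these from IMU.T_b_c1 / IMU.T_b_c2 in the yaml
    Eigen::Matrix4d T_b_c1 = Eigen::Matrix4d::Identity();
    Eigen::Matrix4d T_b_c2 = Eigen::Matrix4d::Identity();

    // if X_b = T_b_ci * X_ci, then X_c1 = T_b_c1^-1 * T_b_c2 * X_c2
    Eigen::Matrix4d T_c1_c2 = T_b_c1.inverse() * T_b_c2;
    std::cout << "derived T_c1_c2 =\n" << T_c1_c2 << "\n";
    // compare against the Stereo.T_c1_c2 entry in the yaml
}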

Moreover, something I think could be relevant: when I run the mono_inertial node for single camera+IMU SLAM, I get the black screen with the "waiting for images" text in Pangolin, and when I move the VI sensor (camera+IMU) the initialization starts and the SLAM algorithm runs; that's the case for all 4 camera+IMU pairs.

Starting the Viewer
[ INFO] [1725621979.344532210]: Finished waiting for tf, waited 4.000891 seconds
First KF:0; Map init KF:0
New Map created with 295 points
start VIBA 1
end VIBA 1
start VIBA 2
end VIBA 2

On the other hand, when I run the multi_camera node, I directly get the stream of all 4 images in Pangolin, without waiting for initialization, and the terminal loops:

New Map created with 0 points
TRACK_REF_KF: Less than 15 matches!!
Fail to track local map!
LM: Active map reset recieved
LM: Active map reset, waiting...
LM: Reseting current map in Local Mapping...
LM: End reseting Local Mapping...
LM: Reset free the mutex
LM: Active map reset, Done!!!
mnFirstFrameId = 4
mnInitialFrameId = 0
1 Frames set to lost
not IMU meas
First KF:1; Map init KF:0
mCurrentFrame.mvLeftToRightMatch = -1

Thank you very much in advance!

aimiliosnrobotics commented 2 months ago

UPDATE: I realized that the error was happening in the stereo matching in MultiInitialization() in the Frame constructor. After some debugging I found that the matching done in void Frame::ComputeMultiFishEyeMatches() was using the overlapping regions, so after removing the monoLeft and monoRight offsets the matching succeeded and I could finally run the project with the 4 pinhole cameras. I wonder if there is any issue with the line

float depth = static_cast<KannalaBrandt8*>(mpCamera)->TriangulateMatches(mpCamera2,mvKeys[(*it)[0].queryIdx],mvKeysRight[(*it)[0].trainIdx],mRlr,mtlr,sigma1,sigma2,p3D);

as there is of course no TriangulateMatches method in the Pinhole camera, and whether this will cause further issues. But after changing the ComputeMultiFishEyeMatches() function to this:

void Frame::ComputeMultiFishEyeMatches() {

    //std::cout << "computing fish eye matches" << std::endl;
    //Speed it up by matching keypoints in the lapping area
    //vector<cv::KeyPoint> stereoLeft(mvKeys.begin() + monoLeft, mvKeys.end());
    //vector<cv::KeyPoint> stereoRight(mvKeysRight.begin() + monoRight, mvKeysRight.end());
    vector<cv::KeyPoint> stereoLeft(mvKeys.begin(), mvKeys.end());
    vector<cv::KeyPoint> stereoRight(mvKeysRight.begin(), mvKeysRight.end());

    if (mDescriptors.rows != 0 && mDescriptorsRight.rows != 0){
        //cv::Mat stereoDescLeft = mDescriptors.rowRange(monoLeft, mDescriptors.rows);
        //cv::Mat stereoDescRight = mDescriptorsRight.rowRange(monoRight, mDescriptorsRight.rows);
        cv::Mat stereoDescLeft = mDescriptors;
        cv::Mat stereoDescRight = mDescriptorsRight;
        //TODO: for a different setup, the sideward cameras may have sufficient overlap for stereo matching
        mvLeftToRightMatch = vector<int>(Nleft,-1);
        mvRightToLeftMatch = vector<int>(Nright,-1);
        // mvDepth = vector<float>(Nleft,-1.0f);
        mvDepth = vector<float>(N,-1.0f);
        mvuRight = vector<float>(Nleft,-1);
        mvStereo3Dpoints = vector<Eigen::Vector3f>(Nleft);
        mnCloseMPs = 0;

        //Perform a brute force between Keypoint in the left and right image
        vector<vector<cv::DMatch>> matches;

        BFmatcher.knnMatch(stereoDescLeft,stereoDescRight,matches,2);
        int totalMatches = 0;
        for (size_t i = 0; i < matches.size(); ++i) {
            totalMatches += matches[i].size(); // Add the number of matches for each keypoint
        }

        int nMatches = 0;
        int descMatches = 0;

        //Check matches using Lowe's ratio
        for(vector<vector<cv::DMatch>>::iterator it = matches.begin(); it != matches.end(); ++it){
            if((*it).size() >= 2 && (*it)[0].distance < (*it)[1].distance * 0.8){
                //For every good match, check parallax and reprojection error to discard spurious matches
                Eigen::Vector3f p3D;
                descMatches++;
                float sigma1 = mvLevelSigma2[mvKeys[(*it)[0].queryIdx].octave], sigma2 = mvLevelSigma2[mvKeysRight[(*it)[0].trainIdx].octave];
                float depth = static_cast<KannalaBrandt8*>(mpCamera)->TriangulateMatches(mpCamera2,mvKeys[(*it)[0].queryIdx],mvKeysRight[(*it)[0].trainIdx],mRlr,mtlr,sigma1,sigma2,p3D);
                if(depth > 0.0001f){
                    mvLeftToRightMatch[(*it)[0].queryIdx] = (*it)[0].trainIdx;
                    mvRightToLeftMatch[(*it)[0].trainIdx] = (*it)[0].queryIdx;
                    mvStereo3Dpoints[(*it)[0].queryIdx] = p3D;
                    mvDepth[(*it)[0].queryIdx] = depth;
                    nMatches++;
                }
            }
        }
    }
    else{
        mvLeftToRightMatch = vector<int>(Nleft,-1);
        mvRightToLeftMatch = vector<int>(Nright,-1);
        mvDepth = vector<float>(N,-1.0f);
        mvuRight = vector<float>(N,-1);
        mvStereo3Dpoints = vector<Eigen::Vector3f>(Nleft);
        mnCloseMPs = 0;
        return;
    }

}

the program ran without problems. However, that only means the initialization was successful; the feature matching performed after the initialization may still need alterations.
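Regarding the TriangulateMatches cast above, one defensive pattern is to guard the fisheye-specific path with a dynamic_cast (a self-contained sketch with stub types, only to illustrate the cast-safety idea; the real ORB-SLAM3 classes are much richer):

#include <iostream>

// stub hierarchy mirroring the shape of ORB-SLAM3's camera classes
struct GeometricCamera { virtual ~GeometricCamera() = default; };
struct Pinhole : GeometricCamera {};
struct KannalaBrandt8 : GeometricCamera {
    float TriangulateMatches() const { return 1.0f; } // stand-in for the real signature
};

float safeTriangulate(GeometricCamera* cam) {
    if (auto* kb = dynamic_cast<KannalaBrandt8*>(cam))
        return kb->TriangulateMatches(); // fisheye-specific path
    return -1.0f; // a pinhole camera would need its own two-view triangulation here
}

int main() {
    Pinhole pin;
    KannalaBrandt8 fish;
    std::cout << safeTriangulate(&pin) << " " << safeTriangulate(&fish) << "\n";
}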

aimiliosnrobotics commented 2 months ago

So, I did various tests over the past weeks with multi-camera inertial SLAM and managed to get it working, acquiring numerous successful runs (example: https://drive.google.com/file/d/1TyfjzyG3dfY0LTgESQqBWGCdQQKn1va1/view?usp=drive_link).

However, there are some issues that I still couldn't get past:

  • First and most important: in some cases the SLAM tracking drifts away heavily and the detected features seem to flicker and get lost (examples: https://drive.google.com/file/d/1QvIRnqyDTVOhHtlPy6N-4A56uYJ3Gd9R/view?usp=drive_link and https://drive.google.com/file/d/1YrecWxSxwaI9lGwhW1fHhUcD3iKmC5zj/view?usp=drive_link); the features stabilize again afterwards, but the tracking is completely lost. The strange thing is that the algorithm sometimes fails this way on sets where it previously worked perfectly. I know there is randomness in orb_slam3, but I expected somewhat different results, not that the algorithm would completely fail in some cases. I'm guessing it may also have to do with the IMU initialization, as in the following case (Stereo-Inertial: camera drifting when camera is rotating, UZ-SLAMLab/ORB_SLAM3#150). If the IMU is not correctly initialized, are there specific movements that must be performed to initialize it correctly? Are the BAs (bundle adjustments) I can see in the output signs of IMU initialization? Could this drift and loss of localization also be due to faulty raw IMU data? In that sense, could the number of features or other ORB extractor parameters affect performance, or is there a way to put more trust in the extracted features and less in the IMU as a tradeoff? Or, on the other hand, could these issues be explained by imperfect calibration?
  • Moreover, there are cases where the accuracy is not very high and there is some error. Could this be due to wrong scaling caused by either faulty IMU data or imperfect camera+IMU calibration (although this has improved significantly since calibrating more precisely)?
  • I was also wondering why I can't see any loop closures in the output, even when there are obvious ones (example: https://drive.google.com/file/d/1UPLByhL7E3DGO-0S9L-PQqNIkD4ZLs8-/view?usp=drive_link). Or are loop closures happening in the background, correcting the total trajectory?

So all in all, I'm trying to understand the root causes of the algorithm's failures, which sometimes occur even during runs where everything seems to be going smoothly. After some point (most of the time when I'm rotating, but not always) the tracking drifts completely away, and the features are visibly unstable (flickering and lost). Can I blame my cheap IMU and possibly faulty data, or imperfect calibration between the cameras and the IMU? Any feedback is appreciated. Thanks in advance!

1fwang commented 2 months ago

Very impressive progress! I have roughly gone through your case. I believe the most important point is that your motion is not full 6-DoF: when a VIO system runs on ground vehicles or robots, it degenerates under specific motions (unobservable scale). There are papers evaluating this case (https://mars.cs.umn.edu/papers/KejianWu_VINSonWheels.pdf).

As for loop closure, ORB-SLAM has a quite complex mechanism for correcting loops. However, I can see edges being established between the candidate closed-loop frames and the current frame; in practice it will then do local tracking between the new frame and the previous 3D landmarks, so there will be no apparent accumulated drift.

aimiliosnrobotics commented 2 months ago

@1fwang Thank you very much for the feedback. I'm currently moving the VI sensor on a trolley cart, so degeneration under restricted (non-6-DoF) motion sounds very familiar and might be the root of the errors. I'll study the paper you sent, and I may have to include wheel or some other kind of odometry, as I intend to use the SLAM on a wheeled robot that moves only in 2D space.

One separate question I had: would it be possible to fuse the output of this SLAM algorithm (the camera pose) into a sensor fusion algorithm? I know the covariance is not calculated in orb_slam3, and I don't know of any SLAM implementations that provide it. Is there another way to include the SLAM output in a robot localization Kalman filter, for example?
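One common workaround (a hedged sketch, assuming ROS1 and a robot_localization-style EKF downstream; the frame id and the fixed covariance values are made up, not something openMAVIS provides) is to publish the SLAM pose with a hand-tuned constant covariance so the filter can consume it:

#include <ros/ros.h>
#include <geometry_msgs/PoseWithCovarianceStamped.h>

void publishSlamPose(ros::Publisher& pub,
                     const geometry_msgs::Pose& slamPose,
                     const ros::Time& stamp) {
    geometry_msgs::PoseWithCovarianceStamped msg;
    msg.header.stamp = stamp;
    msg.header.frame_id = "map";
    msg.pose.pose = slamPose;
    // ORB_SLAM3 does not output a covariance, so inject a conservative,
    // hand-tuned diagonal: x, y, z in m^2, then roll, pitch, yaw in rad^2
    const double var[6] = {0.05, 0.05, 0.05, 0.02, 0.02, 0.02};
    for (int i = 0; i < 6; ++i)
        msg.pose.covariance[i * 6 + i] = var[i];
    pub.publish(msg);
}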

1fwang commented 2 weeks ago

Hi @aimiliosnrobotics, if you're open to it, would you consider creating a PR with your modifications for the Pinhole setup? I believe it would be a valuable addition to the project. Thank you!