Open · BartKevers opened this issue 3 years ago
Were you able to fix it? I am facing similar issues when running Mono-Inertial SLAM with a phone.
@vikshree No, unfortunately not yet. I'll post it here once I find a solution.
I have the same problem with mobile data
I have the same problem with a ZED 2 camera.
Hi I got the same problem. Have you solved this problem yet?
@YanhaoZhang No, unfortunately I have not managed to find the root cause or a solution for it.
I also have the same issue, and I see the output NOT INERTIAL LINK TO PREVIOUS FRAME! quite often.
I am facing the same issue when I try to run ORB-SLAM3 mono-inertial/stereo-inertial with Unity. It works well without the IMU (mono/stereo).
Hello, I have a similar problem. I'm trying to map a simulated environment in Gazebo using monocular-inertial SLAM, and here's what happens. First, it initialises and displays:
[ros2-1] Starting the Viewer
[ros2-1] First KF:0; Map init KF:0
[ros2-1] New Map created with 124 points
[ros2-1] scale too small
[ros2-1] Not enough motion for initializing. Reseting...
[ros2-1] LM: Reseting current map in Local Mapping...
[ros2-1] LM: End reseting Local Mapping...
[ros2-1] LM: Reset free the mutex
[ros2-1] Fail to track local map!
[ros2-1] IMU is not or recently initialized. Reseting active map...
[ros2-1] SYSTEM-> Reseting active map in monocular case
[ros2-1] LM: Active map reset recieved
[ros2-1] LM: Active map reset, waiting...
[ros2-1] LM: Reseting current map in Local Mapping...
[ros2-1] LM: End reseting Local Mapping...
[ros2-1] LM: Reset free the mutex
[ros2-1] LM: Active map reset, Done!!!
[ros2-1] mnFirstFrameId = 288
[ros2-1] mnInitialFrameId = 0
[ros2-1] 50 Frames set to lost
[ros2-1] First KF:13; Map init KF:0
[ros2-1] New Map created with 152 points
Once it has stabilised, I get messages confirming that the SLAM has been initialised:
[ros2-1] start VIBA 1
[ros2-1] end VIBA 1
[ros2-1] start VIBA 2
[ros2-1] end VIBA 2
But then it no longer tracks the local map; I get these messages as soon as my robot moves and creates keyframes:
[ros2-1] Fail to track local map!
[ros2-1] Fail to track local map!
[ros2-1] Fail to track local map!
[ros2-1] Fail to track local map!
...
Until I get a 'Covisible list empty' message, which leads to a segmentation fault:
...
[ros2-1] Covisible list empty
[ros2-1] Covisible list empty
[ros2-1] Covisible list empty
[ros2-1] Fail to track local map!
[ros2-1] Covisible list empty
[ros2-1] Covisible list empty
[ros2-1] Covisible list empty
[ros2-1] Fail to track local map!
[ros2-1] Covisible list empty
[ros2-1] Covisible list empty
[ros2-1] Fail to track local map!
[ros2-1] Fail to track local map!
[ros2-1] Fail to track local map!
[ros2-1] [ros2run]: Segmentation fault
I'm using ideal parameters to get an initial result, so there's no distortion and very little noise on the IMU. Here's my configuration:
[ros2-1] Camera Parameters:
[ros2-1] - Camera: Pinhole
[ros2-1] - Image scale: 1
[ros2-1] - fx: 312.888
[ros2-1] - fy: 312.888
[ros2-1] - cx: 312.283
[ros2-1] - cy: 135.73
[ros2-1] - k1: 0
[ros2-1] - k2: 0
[ros2-1] - p1: 0
[ros2-1] - p2: 0
[ros2-1] - fps: 20
[ros2-1] - color order: BGR (ignored if grayscale)
[ros2-1]
[ros2-1] ORB Extractor Parameters:
[ros2-1] - Number of Features: 1500
[ros2-1] - Scale Levels: 12
[ros2-1] - Scale Factor: 1.2
[ros2-1] - Initial Fast Threshold: 20
[ros2-1] - Minimum Fast Threshold: 7
[ros2-1]
[ros2-1] Left camera to Imu Transform (Tbc):
[ros2-1] [1, 0, 0, -0.0080000004;
[ros2-1] 0, 1, 0, 0.044;
[ros2-1] 0, 0, 1, 0.0070000002;
[ros2-1] 0, 0, 0, 1]
[ros2-1]
[ros2-1] IMU frequency: 200 Hz
[ros2-1] IMU gyro noise: 2e-06 rad/s/sqrt(Hz)
[ros2-1] IMU gyro walk: 2e-06 rad/s^2/sqrt(Hz)
[ros2-1] IMU accelerometer noise: 2e-06 m/s^2/sqrt(Hz)
[ros2-1] IMU accelerometer walk: 2e-06 m/s^3/sqrt(Hz)
[ros2-1] There are 1 cameras in the atlas
[ros2-1] Camera 0 is pinhole
I'm using Ubuntu 22.04 and ROS 2 Humble.
Thanks in advance
Make sure the acceleration and rotation are not constant. Monocular-inertial SLAM systems like ORB-SLAM3 require enough IMU excitation for proper initialization, i.e. motion with non-constant acceleration and angular velocity. When the scale is initialized to a very small value, it means the system cannot estimate it properly, which is usually caused by motion that is not aggressive enough.
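To illustrate, here is a minimal numerical sketch (my own, not ORB-SLAM3 code): an ideal accelerometer measures R_wb^T * (p_ddot - g). If the world-frame acceleration p_ddot stays at zero (constant velocity), the reading no longer depends on the metric scale of the trajectory, so two maps that differ only by scale produce identical IMU data and the scale cannot be recovered.

import numpy as np

g = np.array([0.0, 0.0, -9.81])   # gravity in the world frame
R_wb = np.eye(3)                  # body kept level, for simplicity

def accel_reading(p_ddot):
    # Ideal accelerometer: specific force in the body frame (no bias, no noise).
    return R_wb.T @ (p_ddot - g)

# Constant-velocity trajectory: zero acceleration at every scale,
# so the readings are identical and the scale is unobservable.
for s in (1.0, 10.0):
    print("scale", s, "->", accel_reading(s * np.zeros(3)))

# Accelerated trajectory: the reading changes with the scale,
# so the IMU data now constrains the scale.
for s in (1.0, 10.0):
    print("scale", s, "->", accel_reading(s * np.array([0.5, 0.0, 0.0])))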
Hello. May I ask why enough IMU excitation means "non-constant acceleration and angular velocity"? From the ORB-SLAM3 code I can only see that it requires a non-constant (locally averaged) acceleration measurement (i.e. one that includes the effect of gravity), but I don't understand what this condition implies. Does it mean (roll or pitch) rotation plus non-constant true acceleration?
I would like to use ORB-SLAM3 on images created in Blender. I am running a camera in a circle around a stationary target.
When I feed the rendered images to ORB-SLAM3 in monocular mode, it works without any issues. However, mono-inertial gives a problem: it successfully detects and tracks features until the IMU initializes, and then it fails to track the local map. This continues for the full duration of the simulation. I assume the issue is the fusion of the inertial and visual measurements, but I have no idea how to solve it. I successfully ran the EuRoC visual-inertial datasets, so ORB-SLAM3 itself should be working correctly.
The environment is as follows. The camera moves at a constant (rotational and translational) velocity, so I manually computed the (ideal) IMU measurements according to:
omega = 2*pi / T
V = 2*pi*r / T
a = V^2 / r
I also added the gravitational acceleration. Since r = 5 m and T = 100 s, the resulting measurements are constant. The IMU reference frame has the same orientation as the OpenCV standard reference frame, so the camera-to-IMU transformation matrix is the identity.
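For reference, a small sketch of what those ideal values work out to (my own reconstruction from r, T and g; the axis assignment in the vectors below is an assumption):

import numpy as np

r, T = 5.0, 100.0
omega = 2 * np.pi / T     # angular rate about the vertical axis, ~0.063 rad/s
v = 2 * np.pi * r / T     # tangential speed, ~0.314 m/s
a_c = v ** 2 / r          # centripetal acceleration magnitude, ~0.020 m/s^2
g = 9.81                  # gravitational acceleration added to the accelerometer

# In the body frame the gyro and accelerometer outputs never change:
gyro = np.array([0.0, 0.0, omega])   # constant yaw rate
accel = np.array([0.0, a_c, g])      # constant centripetal + gravity terms

print(gyro, accel)
# Every sample is identical, so there is no acceleration excitation for the
# mono-inertial initialization to recover the scale from, consistent with the
# excitation requirement mentioned above.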
I used the following yaml file:
I double-checked the camera intrinsics from Blender and they should be right: my camera sensor size is 36x27 mm and the focal length of the lens is 80 mm. Since I use 'ideal' IMU measurements, there is no noise or random walk in the gyro and accelerometer measurements.
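As a sanity check on those intrinsics, the usual pinhole relation from Blender-style camera settings is fx = focal_length / sensor_width * image_width (and analogously for fy); the sketch below uses a placeholder render resolution, not the actual one used:

# Pinhole intrinsics from Blender-style camera settings.
sensor_w_mm, sensor_h_mm = 36.0, 27.0    # sensor size, from the description above
focal_mm = 80.0                           # lens focal length
img_w_px, img_h_px = 640, 480             # hypothetical render resolution

fx = focal_mm / sensor_w_mm * img_w_px    # focal length in pixels (x)
fy = focal_mm / sensor_h_mm * img_h_px    # focal length in pixels (y)
cx, cy = img_w_px / 2.0, img_h_px / 2.0   # principal point at the image centre

print(fx, fy, cx, cy)                     # compare against the values in the yaml file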
So why is the local map lost after the IMU initializes and how do I solve it? Thanks in advance!