Open LeonGoretzkatju opened 2 years ago
Hi, ORB-SLAM enforces high-quality visual tracking on the local map. "High quality" means it requires a certain number of features to be tracked both between the previous keyframe and the current frame, and between the local map and a new keyframe. If either of these tracks fails, ORB-SLAM treats it as a tracking failure and attempts to reinitialize the whole system. So, if the dataset contains several consecutive feature-less frames, ORB-SLAM cannot keep working without reinitialization. To relax the constraints, you can try tuning the FAST thresholds `ORBextractor.iniThFAST: 20` and `ORBextractor.minThFAST: 7` (lowering them extracts more corners) as well as the threshold for feature matching.
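For concreteness, a relaxed extractor configuration might look like the fragment below. Note that extracting more corners means *lowering* the FAST thresholds, per the comment in ORB-SLAM's own yaml files; the values here are illustrative, not tuned for this dataset:

```yaml
# Lower FAST thresholds -> more corners survive in low-texture frames
ORBextractor.iniThFAST: 12   # default 20
ORBextractor.minThFAST: 5    # default 7
# Extracting more features per image can also help tracking survive
ORBextractor.nFeatures: 1500 # default 1000
```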
Hi, thank you very much for your suggestions. I will try changing these parameters of the ORB system.
Hi, I want to know if you have tested the dataset on ORB-SLAM3 successfully.
Hi, thank you for your interest. I tested ORB-SLAM3 with this dataset 2 years ago, so I cannot remember many details. I do remember that if you test with the latest ORB-SLAM3 version, which improves robustness and loop-closure performance, it can run on many of the sequences. However, because many sequences involve fast motion, you may need to slow down the rosbag playback speed. Also pay special attention to the IMU-initialization part of ORB-SLAM3: to cope with the fast motion you need to speed up the VIO initialization, and you can check the acceleration values etc. to see whether the VIO initialization works fine.
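On the playback-speed point: rosbag can replay a bag at a reduced rate from the command line. A sketch (the bag filename is a placeholder; 0.5 is an illustrative rate, not a recommended value for this dataset):

```
# Replay at half speed so the tracker gets more time per frame;
# --pause starts playback paused until you press space
rosbag play --rate 0.5 --pause your_sequence.bag
```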
Hi, thanks for your nice work on the RGB-D-Inertial dataset and DUI_VIO, but I have met a very strange problem. I set the parameters of the yaml file to match your system (structure_core_v2.yaml); you can see my parameters below. However, when I try to test the Visual-Inertial mode in ORB-SLAM3, the system fails and resets frequently. I wonder if you have met the same problem, or whether I need to change some parameters because of the difference between feature-based SLAM and direct-method SLAM? Thanks. The parameters I use are as follows:

```yaml
%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
Camera.type: "PinHole"

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 4.59357e+02
Camera.fy: 4.59764e+02
Camera.cx: 3.32695e+02
Camera.cy: 2.58998e+02

# Calibrated with https://github.com/tin1254/FMDataset_preprocessing
Camera.k1: -2.9839715720358556e-01
Camera.k2: 9.2224519780237782e-02
Camera.p1: 0
Camera.p2: 0

Camera.width: 640
Camera.height: 480

# Camera frames per second
Camera.fps: 30.0

# IR projector baseline times fx (approx.)
# It is not used in our case
Camera.bf: 100

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
ThDepth: 40.0

# Depth map values factor
DepthMapFactor: 1000.0

# Transformation from body-frame (imu) to camera
Tbc: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 0.00193013, -0.999997,   0.00115338, -0.00817048,
          -0.999996,   -0.0019327,  -0.00223606,  0.015075,
           0.00223829, -0.00114906, -0.999997,   -0.0110795,
           0, 0, 0, 1 ]

# IMU noise (use those from VINS-Mono)
IMU.NoiseGyro: 0.01 # rad/s^0.5
IMU.NoiseAcc: 0.1 # m/s^1.5
IMU.GyroWalk: 0.0001 # rad/s^1.5
IMU.AccWalk: 0.001 # m/s^2.5
IMU.NoiseGyro: 1.2899119544421210e-02 # rad/s^0.5
IMU.NoiseAcc: 4.8454522205985742e-01 # m/s^1.5
IMU.GyroWalk: 1.4182773577038526e-05 # rad/s^1.5
IMU.AccWalk: 1.5002444856922922e-04 # m/s^2.5
IMU.AccFrequency: 100
IMU.GyroFrequency: 100

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------
# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
```
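One thing worth double-checking in the file above (an observation, not a confirmed fix): it declares the IMU noise parameters twice, and the example yaml files shipped with ORB-SLAM3 (e.g. EuRoC) read a single IMU rate key rather than separate accel/gyro rates, so the two frequency entries may be silently ignored. A sketch of that key, assuming this dataset's 100 Hz IMU rate:

```yaml
# Key name as used in ORB-SLAM3's EuRoC example yaml files
IMU.Frequency: 100
```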