Closed Ajithkumar1 closed 3 years ago
> During the initial stage, when I let the system remain stationary (at rest), the covariance increased but the pose didn't drift.

I think this part is easy to explain. When the sensor suite stays still, features rarely lose tracking, and the state is only updated when a camera state is marginalized. Therefore, the uncertainty of the state tends to grow faster compared to the cases when the sensor is moving.
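This propagation-only growth is easy to reproduce with a toy filter. A minimal sketch, assuming a generic 2-state constant-velocity EKF (not msckf_vio's actual state or matrices): with no measurement updates, the covariance trace grows without bound.

```python
import numpy as np

# Illustrative only: a 2-state (position, velocity) constant-velocity model,
# NOT msckf_vio's actual state. Under propagation alone, P <- F P F^T + Q,
# the covariance trace grows monotonically because no update ever shrinks it.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # state-transition matrix
Q = 1e-4 * np.eye(2)                    # process noise
P = 1e-2 * np.eye(2)                    # initial covariance

traces = []
for _ in range(1000):
    P = F @ P @ F.T + Q                 # propagation only, no measurement update
    traces.append(np.trace(P))
```

Plotting `traces` shows the steady growth you see while the sensor sits still between marginalizations.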
> When I place the setup back in a stationary state after travelling some distance, the pose starts to drift.

This is interesting. Are you running the algorithm on a robot, such as a quadrotor? I know of cases where, if the quadrotor lands hard, the estimate drifts afterwards. This is mainly because the impact of landing causes spikes in the IMU measurements.
If possible, can you record a video of the problem you described? That would be very helpful. Thanks.
Hi @ke-sun, please check the video (https://www.youtube.com/watch?v=mk8x-ne4EQk) that I captured when this issue occurs (the blue points in the image are the points that have passed RANSAC, just a color change). I can recreate this issue every time.
msckf_vio works fine for all types of environmental changes and fast motion, but it starts drifting when kept stationary after traveling some distance.
I have also checked this in a Gazebo simulation where the camera/IMU were modeled to be noise- and drift-free (an ideal environment). The calibration data is hard-coded, since we cannot use Kalibr in simulation.
So I think we cannot suspect the camera/IMU hardware. I suspect that something is wrong in the prediction stage.
Thanks.
Do you mind also attaching some screenshots while the software is running (ideally before, at, and after going back to stationary), along with the calibration file you are using?
Hi @ke-sun ,
Please check the screenshots attached below:
(i) During initialization
(ii) when stationary state is reached
(iii) After drift and online reset
Because of the low-quality IMU, I couldn't obtain a proper calibration for the real stereo camera with Kalibr. So I suggest restricting our discussion to the simulator.
The camera in the simulator is modeled to perfectly match our real stereo camera. The IMU is modeled as an ideal IMU, i.e. no drift or noise. I have hard-coded the calibration file; please check it below.
```yaml
cam0:
  T_cam_imu:
    [0.0, -1.0,  0.0, 0.045,
     0.0,  0.0, -1.0, 0.0,
     1.0,  0.0,  0.0, 0.0,
     0.0,  0.0,  0.0, 1.0]
  cam_overlaps: [1]
  camera_model: pinhole
  distortion_coeffs: [0.07394672603659878, -0.14106488956890637, 0.0008504380713467068, -0.0011522090937052258]
  distortion_model: radtan
  intrinsics: [722.5932378670143, 717.9197412969958, 380.2575980332792, 217.27830648110398]
  resolution: [752, 480]
  rostopic: /stereo/left/image_raw
cam1:
  T_cam_imu:
    [0.0, -1.0,  0.0, -0.015,
     0.0,  0.0, -1.0,  0.0,
     1.0,  0.0,  0.0,  0.0,
     0.0,  0.0,  0.0,  1.0]
  T_cn_cnm1:
    [1.0, 0.0, 0.0, -0.06,
     0.0, 1.0, 0.0,  0.0,
     0.0, 0.0, 1.0,  0.0,
     0.0, 0.0, 0.0,  1.0]
  cam_overlaps: [0]
  camera_model: pinhole
  distortion_coeffs: [0.06663823139103087, -0.11688578919424587, 0.0008291650536117562, 0.0003060373146132608]
  distortion_model: radtan
  intrinsics: [725.347232922567, 721.0548717191193, 379.3926669652834, 217.58649616516692]
  resolution: [752, 480]
  rostopic: /stereo/right/image_raw
T_imu_body:
  [1.0, 0.0, 0.0, 0.0,
   0.0, 1.0, 0.0, 0.0,
   0.0, 0.0, 1.0, 0.0,
   0.0, 0.0, 0.0, 1.0]
```
Please check this new video (https://www.youtube.com/watch?v=MUytXu6eLmQ) illustrating the issue recreated in the simulator. The video shows two poses side by side: the resultant pose after the measurement update (named "robot_frame_vio") and the predicted pose before the measurement update (named "predicted_pose").
I noticed numerically that during normal motion the measurement update corrects the predicted pose, but it fails to do so once the drift occurs.
So I suspect the prediction stage. Since the Runge-Kutta method is a numerical technique, it may have issues like those discussed in the paper "High-Precision, Consistent EKF-based Visual-Inertial Odometry".
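To illustrate the kind of discretization error in question, here is a toy comparison (a generic circular-motion ODE, not the actual msckf_vio propagation code) of forward Euler against classical RK4 over one full revolution; RK4's error is orders of magnitude smaller at the same step size.

```python
import math

# Toy problem: integrate d/dt [x, y] = [-y, x] (unit circular motion, period
# 2*pi) and measure the position error after one revolution versus the exact
# solution, which returns to the start point (1, 0).

def deriv(s):
    x, y = s
    return (-y, x)

def step_euler(s, h):
    dx, dy = deriv(s)
    return (s[0] + h * dx, s[1] + h * dy)

def step_rk4(s, h):
    k1 = deriv(s)
    k2 = deriv((s[0] + 0.5 * h * k1[0], s[1] + 0.5 * h * k1[1]))
    k3 = deriv((s[0] + 0.5 * h * k2[0], s[1] + 0.5 * h * k2[1]))
    k4 = deriv((s[0] + h * k3[0], s[1] + h * k3[1]))
    return (s[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            s[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def final_error(step, n):
    h = 2 * math.pi / n                 # n steps per revolution
    s = (1.0, 0.0)
    for _ in range(n):
        s = step(s, h)
    return math.hypot(s[0] - 1.0, s[1])  # distance from the exact end point

err_euler = final_error(step_euler, 1000)  # noticeable radius growth (~2%)
err_rk4 = final_error(step_rk4, 1000)      # essentially negligible
```

At step sizes comparable to a high-rate IMU, pure RK4 truncation error is tiny; the inconsistency issues discussed in that paper are about the filter's linearization points rather than the integrator's accuracy.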
The calibration params look fine.
It seems to me that the estimate is already diverging during the rapid rotation, so what happens after that is unpredictable. Does the problem you described still happen without the rapid rotation?
Hi @ke-sun
Yes, the problem happens without rapid rotation, though it takes some extra time; the drift occurs every time. I have plotted the IMU values and they look proper. Please tell me if I need to do any other experiments so that you can understand the problem better.
Thanks.
Honestly, I am also confused at this point.
When the sensor comes back to stationary in the simulation, both the gyro and the acc (gravity removed) should read 0.0. Assuming the feature measurements are also consistent with the true pose, the only reasonable explanation seems to be that the IMU bias is estimated wrongly.
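As a back-of-the-envelope check of this hypothesis (illustrative numbers only, not msckf_vio code): an uncorrected accelerometer bias error integrates once into velocity and twice into position, so even a small residual bias produces visible drift while the sensor sits still.

```python
# A residual accelerometer bias b acts like a constant spurious acceleration:
# v(t) = b*t and p(t) = 0.5*b*t^2. Double-integrate numerically to see the
# drift magnitude; the bias value below is purely hypothetical.

def drift_from_accel_bias(bias_mps2, duration_s, dt=0.001):
    """Integrate a constant acceleration bias twice; return position error [m]."""
    v = 0.0
    p = 0.0
    t = 0.0
    while t < duration_s:
        v += bias_mps2 * dt   # velocity error accumulates linearly
        p += v * dt           # position error accumulates quadratically
        t += dt
    return p

# e.g. a 0.05 m/s^2 residual bias gives roughly 0.5 * 0.05 * 10^2 = 2.5 m
# of position drift after only 10 s of standing still.
drift = drift_from_accel_bias(0.05, 10.0)
```

This quadratic growth matches the symptom: the pose looks fine during motion (updates keep correcting it) but runs away once the filter stops getting informative corrections.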
Just a few more things to note:
I have previously tested just the process model in simulation (noise-free IMU), without measurement updates. I remember it took a pretty long time for the numerical integration to diverge from the true pose when the IMU frequency is high (>100 Hz).
Hi @ke-sun
Sorry for the delayed response.
Here are my answers.
> When the sensor comes back to stationary in the simulation, both the gyro and acc (gravity removed) should produce 0.0.

I have plotted the values; both the acc (gravity removed) and the gyro produce zero.

> In the simulation, does the robot have dynamics, or is only the kinematics simulated?

The rover is simulated with dynamics. I have used the ArduPilot Gazebo plugin to drive the rover with a joystick.

> I would assume you use the IMU plugin in Gazebo.

I have used the well-known Hector ROS Gazebo IMU plugin.

> I remembered it took a pretty long time to have the numerical integration diverge from the true pose, if the IMU frequency is high (>100Hz).

My IMU frequency is 950 Hz, just to make sure the camera-IMU sync is proper.
Do I need to conduct any other experiments that would help you analyze this? I can provide the rosbag, or even the complete simulation setup itself, if you are interested.
Thanks, Ajith Kumar
> During the initial stage, when I let the system to be in stationary state (rest), the covariance increased but the pose didn't drift.
>
> I think this part is easy to explain. When the sensor suite stays still, features rarely lose tracking. The state is only updated when a camera state is marginalized. Therefore, the uncertainty of the state tends to grow faster compared to the cases when the sensor is moving.
Would a suitable solution be to impose a maximum lifetime on a feature point? If a point has been tracked for longer than this maximum lifetime, you would go ahead and use it in the filter and force the algorithm to choose a new feature location... thoughts?
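A minimal sketch of what I mean (all names and the `MAX_LIFETIME` value are hypothetical, not msckf_vio's actual feature bookkeeping): a track is handed to the filter update either when it is lost, as usual, or as soon as it exceeds the maximum lifetime, after which it is retired so the tracker picks a fresh feature.

```python
# Hypothetical feature-lifetime bookkeeping; MAX_LIFETIME is a made-up
# tuning value, and msckf_vio's real data structures differ.
MAX_LIFETIME = 30  # frames

class FeatureTrack:
    def __init__(self, feature_id):
        self.id = feature_id
        self.observations = []  # one (u, v) measurement per frame

    def add_observation(self, obs):
        self.observations.append(obs)

def select_tracks_for_update(tracks, lost_ids):
    """Return (tracks to use in the filter update, ids to retire).

    A track qualifies if the tracker lost it (the normal MSCKF trigger) or
    if it outlived MAX_LIFETIME frames (the forced-update idea above).
    """
    to_update, to_retire = [], []
    for t in tracks:
        if t.id in lost_ids or len(t.observations) >= MAX_LIFETIME:
            to_update.append(t)
            to_retire.append(t.id)
    return to_update, to_retire
```

This would guarantee periodic measurement updates even when nothing ever loses tracking, at the cost of repeatedly re-detecting stable features.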
@Ajithkumar1
You could probably reduce the IMU frequency to around 200 Hz to see if that helps.
Hi @Ajithkumar1 @ke-sun, I also faced this problem with msckf_vio.
We collected data in a car using a MYNT EYE camera; when the car stops at traffic lights, the sensor comes back to stationary and the position diverges unless we add a ZUPT algorithm.
The state position (px, py, pz) and velocity integration (pvx, pvy, pvz) graphs are shown below. As the figure shows, going from movement to stationary the position curve diverges noticeably, while the integrated velocity remains relatively stable.
The movement trajectory of this data is shown below (red is the RTK data, blue is the estimated result):
I will upload the data in a few hours. Thanks.
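For reference, a minimal sketch of the kind of ZUPT we added (a toy 2-state EKF, illustrative only; msckf_vio's state and matrices are different): when a stationarity detector fires, feed a "velocity = 0" pseudo-measurement into the filter, which pulls the velocity estimate, and hence the integrated position drift, back toward zero.

```python
import numpy as np

# Toy zero-velocity update on a state x = [position, velocity] with
# covariance P. The measurement model H "observes" velocity only, and the
# pseudo-measurement z = 0 encodes the knowledge that we are stationary.

def zero_velocity_update(x, P, meas_noise=1e-4):
    """Apply a standard EKF update with the pseudo-measurement v = 0."""
    H = np.array([[0.0, 1.0]])            # observe the velocity component
    R = np.array([[meas_noise]])          # trust the stationarity detector
    z = np.array([0.0])                   # pseudo-measurement: zero velocity
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Example: a spurious 0.3 m/s velocity estimate gets pulled to ~0,
# stopping further position integration while the car waits at the light.
x = np.array([1.0, 0.3])
P = np.diag([0.5, 0.5])
x, P = zero_velocity_update(x, P)
```

The hard part in practice is the stationarity detector (e.g. thresholding gyro/accel variance over a short window), since applying a ZUPT while actually moving corrupts the estimate.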
Hi all, I am facing strange behavior in msckf_vio. During the initial stage, when I let the system remain stationary (at rest), the covariance increased but the pose didn't drift. When I moved the setup over a certain distance, the algorithm worked fine even for intense movements; I could observe only a negligible amount of pose error (so this issue is not related to https://github.com/KumarRobotics/msckf_vio/issues/22). When I place the setup back in a stationary state after travelling some distance, the pose starts to drift.
I hope this issue is not related to the image sensor and IMU, as I can recreate the error both in simulation (Gazebo) and in the real world.
Thanks.