HKUST-Aerial-Robotics / VINS-Fusion

An optimization-based multi-sensor state estimator
GNU General Public License v3.0

Having the error "Not enough features or parallax; Move device around" when running VINS Mono+IMU on TX2 #69

Closed mmp52 closed 4 years ago

mmp52 commented 5 years ago

Hello,

I am getting the error "Not enough features or parallax; Move device around" when running VINS Mono+IMU on a TX2. I have a 1280*720 camera and an initial guess of the transformation matrix, but the algorithm fails with estimate_extrinsic = 0, 1 and 2 alike. Increasing the feature number to 400 did not help, and moving the device around made no difference either. I have checked the IMU signal and visualized the camera track: the camera seems to detect features well and the IMU runs at around 50 Hz; pushing it up to 80 Hz did not help. The message is printed with ROS_INFO at line 566 of your estimator.cpp, so I understand the problem lies in the initialization process, but I cannot figure out how to solve it. The code in question is VINS-Fusion-gpu.

I have also noticed that in your rosNodeTest.cpp, the feature_callback function feeds the estimator via estimator.inputFeature(t, feature), yet the /feature_tracker/feature topic has no advertiser. I don't think this is the cause of the problem, because the topic was also empty when I ran the EuRoC examples.

Finally, is publishing IMU messages with linear accelerations and angular velocities enough? I could not figure out whether I should pre-integrate the IMU and publish it on another topic just for the initialization process.

Note: I have opened this issue in the VINS-Fusion-gpu repository too; I will remove it quickly once I have the answer.

Thanks for your help Metin

RigerLee commented 5 years ago

I recommend that you take a look at Estimator::relativePose(Matrix3d &relative_R, Vector3d &relative_T, int &l) and add some debug prints; then you may find out why this function returns false.
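Something along these lines should show whether it is the correspondence count or the parallax that fails. This is only a sketch that mirrors the checks inside relativePose(), not the exact upstream code; corres stands in for f_manager.getCorresponding(i, WINDOW_SIZE), and the 20-correspondence and parallax thresholds follow the original function:

```cpp
#include <cstdio>
#include <utility>
#include <vector>
#include <Eigen/Core>

// Sketch only: mirrors the two checks relativePose() performs for one candidate frame.
static bool debugRelativePoseChecks(
    const std::vector<std::pair<Eigen::Vector3d, Eigen::Vector3d>>& corres)
{
    printf("correspondences: %zu\n", corres.size());
    if (corres.size() <= 20)
        return false;  // too few tracked features shared with the newest frame

    double sum_parallax = 0.0;
    for (size_t j = 0; j < corres.size(); ++j)
    {
        Eigen::Vector2d pts_0(corres[j].first(0), corres[j].first(1));
        Eigen::Vector2d pts_1(corres[j].second(0), corres[j].second(1));
        sum_parallax += (pts_0 - pts_1).norm();
    }
    double average_parallax = sum_parallax / corres.size();
    printf("average_parallax: %f\n", average_parallax);

    // The real function additionally requires solveRelativeRT() to succeed.
    return average_parallax * 460 > 30;
}
```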

mmp52 commented 5 years ago

Okay, I have already been doing that but without any useful result so far; I will keep trying. I have a few other questions:

  1. I have noticed that in your rosNodeTest.cpp, the feature_callback function feeds the estimator via estimator.inputFeature(t, feature), but the /feature_tracker/feature topic has no advertiser. I don't think this is the cause of the problem, because the topic was also empty when I ran the EuRoC examples, but is it somehow related? Should the /feature_tracker/feature topic have an advertiser?
  2. Is publishing a raw image and IMU readings enough to start the estimator, or should I create some kind of feature point cloud and publish it? In RViz I cannot see any point clouds when I start the system, even during initialization; maybe that is the cause.
  3. In the config files, is body_T_cam the transformation matrix from the IMU reference frame to the camera reference frame, or vice versa?
  4. Is this 4x4 transformation matrix composed of a 3x3 rotation matrix and a 3x1 translation vector placed side by side, with a row of (0, 0, 0, 1) added at the bottom?

thank you for your help! Metin

RigerLee commented 5 years ago

It's not my code, but I hope that my answer helps.

  1. feature_callback is not used for streaming data in your case.
  2. Publishing IMU and raw images is enough. Note that you should modify the topics and extrinsics in the config file accordingly.
  3. If you use kalibr, body_T_cam in VINS should be the "T_ic: (cam0 to imu0):" term. If you are not using kalibr, give it a try.
  4. Yep, exactly that layout; see the sketch below.
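For reference, the extrinsic appears in the VINS-Fusion config yaml as an OpenCV matrix. The numbers below are only placeholders (identity rotation plus a small translation); put in your own calibration, for example kalibr's T_ic values:

```yaml
# body_T_cam0: pose of the camera expressed in the IMU/body frame,
# laid out row-major as [R | t] on top of the final row [0, 0, 0, 1].
body_T_cam0: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 1.0, 0.0, 0.0,  0.02,
           0.0, 1.0, 0.0,  0.00,
           0.0, 0.0, 1.0, -0.01,
           0.0, 0.0, 0.0,  1.0 ]
```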
mmp52 commented 5 years ago

Dear RigerLee,

Thank you for your previous answers. After debugging in Estimator::relativePose I found that my average parallax is NaN. I printed the size of the corres vector and saw reasonable values between 25 and 100+. Then I printed every parallax inside the loop where Estimator::relativePose accumulates sum_parallax (which is used to compute average_parallax). I do not know why, but while most of the parallax values in the sum are reasonable (comparable with the EuRoC datasets'), a few of them are NaN, and those poison the sum and make the average NaN, which in turn makes relativePose() return false. What could be the reason for having a NaN parallax inside corres?
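For reference, the kind of check that surfaces this looks roughly as follows (a standalone sketch, not the exact print I added; corres is the vector that f_manager.getCorresponding() passes into the loop):

```cpp
#include <cstdio>
#include <utility>
#include <vector>
#include <Eigen/Core>

// Sketch: report correspondences whose coordinates are already NaN/Inf
// before the parallax sum is formed; a single such entry makes the average NaN.
static void reportBadCorrespondences(
    const std::vector<std::pair<Eigen::Vector3d, Eigen::Vector3d>>& corres)
{
    for (size_t j = 0; j < corres.size(); ++j)
    {
        const Eigen::Vector3d& a = corres[j].first;
        const Eigen::Vector3d& b = corres[j].second;
        if (!a.allFinite() || !b.allFinite())
            printf("corres[%zu] is bad: first=(%f, %f)  second=(%f, %f)\n",
                   j, a.x(), a.y(), b.x(), b.y());
    }
}
```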

thank you for your help,

Sincerely Yours, Metin

mmp52 commented 5 years ago

Dear @RigerLee , @shaojie , @dvorak0, @pjrambo and @xuhao1 ,

I had the error "Not enough features or parallax; Move device around" when running VINS Mono+IMU on a TX2. I have a 1280*720 camera and an initial guess of the transformation matrix, but the algorithm fails with estimate_extrinsic = 0, 1 and 2 alike. Increasing the feature number to 400 did not help, and moving the device around made no difference. I have checked the IMU signal and visualized the camera track: the camera seems to detect features well and the IMU runs at around 50 Hz; pushing it up to 80 Hz did not help either.

I then started debugging from Estimator::relativePose and figured out that the NaN values are created when trackImage() is called: FeatureTracker::trackImage returns a featureFrame with NaN values in the x, y and velocity_x, velocity_y components. What could be the reason?
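For anyone checking the same thing, the tracker output can be scanned right where estimator.cpp consumes it. A rough sketch, assuming the featureFrame layout used by VINS-Fusion's FeatureTracker::trackImage() (per observation: x, y, z, p_u, p_v, velocity_x, velocity_y):

```cpp
#include <cstdio>
#include <map>
#include <utility>
#include <vector>
#include <Eigen/Core>

// featureFrame: feature id -> list of (camera id, 7x1 observation vector).
using FeatureFrame =
    std::map<int, std::vector<std::pair<int, Eigen::Matrix<double, 7, 1>>>>;

// Sketch: print every observation with a non-finite component.
static void reportNanFeatures(const FeatureFrame& frame)
{
    for (const auto& feat : frame)
        for (const auto& obs : feat.second)
            if (!obs.second.allFinite())
                printf("feature %d (cam %d) has NaN/Inf components\n",
                       feat.first, obs.first);
}
```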

thank you for your help,

Sincerely Yours, Metin

RigerLee commented 5 years ago

Hi Metin, if the EuRoC example runs successfully on your TX2, it should be a camera-IMU calibration problem. You should try https://github.com/ethz-asl/kalibr and use the "T_ic: (cam0 to imu0):" term. Generally speaking, there is not much difference between running on a TX2 and on a PC; I have tried a TX2 and it works fine for me.

MaxChanger commented 4 years ago

@mmp52 Have you successfully solved this problem? I encountered the same error as you. I use my own dataset (Azure Kinect); it runs successfully on VINS-Mono but does not work on VINS-Fusion, where I get "Not enough features or parallax; Move device around". In Estimator::relativePose(...), the parallax, sum_parallax and average_parallax are sometimes NaN or -inf. I found that the problem occurs because the corres coordinates in the FeatureManager are already wrong, yet everything is normal in VINS-Mono.

mmp52 commented 4 years ago

@MaxChanger Yes, I have finally solved the problem: I re-calibrated my camera using a KANNALA_BRANDT model instead of the fisheye MEI model. Reading https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/issues/57 inspired me. I believe more people are having this problem, but there is no answer or suggestion from the original authors.
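For completeness, the camera yaml that camodocal/VINS expects for this model looks roughly like the following; every number below is a placeholder, so substitute the values produced by your own calibration:

```yaml
model_type: KANNALA_BRANDT
camera_name: camera
image_width: 1280
image_height: 720
projection_parameters:
   k2: -0.01
   k3: 0.002
   k4: -0.003
   k5: 0.0005
   mu: 610.0
   mv: 610.0
   u0: 640.0
   v0: 360.0
```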

MaxChanger commented 4 years ago

@mmp52 Thank you for your reply. Let me confirm: you only modified the camera model and intrinsics, and did not re-calibrate the IMU intrinsics or the camera-IMU extrinsics, right? For example, these two files: https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/blob/0c3206941410723b8c62b7b0c6a6189b38ae7d99/config/realsense_d435i/realsense_stereo_imu_config.yaml#L13 https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/blob/0c3206941410723b8c62b7b0c6a6189b38ae7d99/config/realsense_d435i/right.yaml#L1-L16

mmp52 commented 4 years ago

@MaxChanger I re-did the intrinsic calibration of the camera from scratch with the kannala_brandt camera-model option and then used the resulting new yaml. I was already using the estimate_extrinsic property of VINS-Fusion (with a manually calculated initial guess for the body_T_cam0 matrix), and I kept doing so. I did not change anything in the IMU intrinsics. Before changing the camera model, setting estimate_extrinsic = 0 or 1 did not make the system work.

mmp52 commented 4 years ago

@MaxChanger I haven't tried stereo yet, though; it seems to demand too much computational power for the TX2 (my camera is 1280*720), but I guess the solution should be the same. In the fusion part, do you also use GPS, i.e. global fusion? Just asking out of curiosity.

MaxChanger commented 4 years ago

Sorry, at present I haven't tried stereo or GPS yet; I want to run this repo with the Azure Kinect. Interestingly, on the same bag dataset collected with the Kinect, VINS-Mono does not produce this error, but VINS-Fusion does.

mmp52 commented 4 years ago

I do not know why it works on VINS-Mono but not on VINS-Fusion, but in the VINS-Fusion issue "Facing problems using VINS-Fusion on T265 realsense stereo fisheye camera" they mention having found that VINS-Fusion does not support the fisheye mask: https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/issues/57#issuecomment-532983824 (I gave the link in an earlier comment, but it did not open; maybe this time it will work). Have you tried increasing the optimization parameters' values in the config file (see the snippet below)? If you are also working on a TX2, do you think it has enough power to compute stereo plus global fusion in real time? Also, does the Azure Kinect give the two images and IMU values synchronized?
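The parameters I mean are these; the values are the ones shipped in the example configs, so treat them only as a starting point:

```yaml
# optimization parameters (values from the example VINS-Fusion configs)
max_solver_time: 0.04      # max solver iteration time, to guarantee real time
max_num_iterations: 8      # max solver iterations, to guarantee real time
keyframe_parallax: 10.0    # keyframe selection threshold (pixels)
```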

MaxChanger commented 4 years ago

The Azure Kinect only provides one color camera and one depth camera; I don't think they can be used as a stereo pair. At the same time, the Kinect is not equipped with a fisheye lens, so I don't know whether issue #57 can solve my problem, but I will try to recalibrate the Kinect RGB camera instead of using the intrinsics provided by the SDK.

Regarding the TX2, I'm sorry, I don't have the device with me now, so I can't test the code on it, but if it were me, I would use catkin_make -DCMAKE_BUILD_TYPE=Release to build a release version for testing. Also, don't use all threads when compiling, to keep the board from getting completely stuck.
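For example (the -j value is only an illustration; pick whatever your board tolerates):

```sh
# Release build, limited to two parallel compile jobs
catkin_make -DCMAKE_BUILD_TYPE=Release -j2
```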

mzahana commented 4 years ago

@mmp52 Can you please share how you did the camera calibration with the kannala_brandt model? I am using the T265 stereo camera. Which package did you use for that? I couldn't find this model in Kalibr.

Cheers.

mmp52 commented 4 years ago

@mzahana I used the standard calibration package offered with VINS, camodocal. If you want to use the Kannala-Brandt calibration method instead of, say, MEI, you can indicate it like this: rosrun camera_models Calibrations -w your_#ofvertsquares -h your_#ofhorizsquares -s your_square_size -i your_data_folder --camera-model kannala_brandt. The camodocal package is included in VINS; if you want to learn more about it, the header file gives general information and the cpp code afterwards is more detailed (see the links for the header, the cpp code, the calibration package itself, and the Kannala-Brandt method paper in PDF). Let me know if anything is unclear.
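A concrete example of that command with placeholder values (adjust the checkerboard dimensions, square size in mm, and image folder to your own setup):

```sh
# 8x11 inner corners, 30 mm squares, calibration images in ./calib_images/
rosrun camera_models Calibrations -w 8 -h 11 -s 30 -i ./calib_images/ --camera-model kannala_brandt
```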