devbharat opened 8 years ago
In the calculation of medianDepthParameters we have:
```cpp
if (featureOutput_.c().isInFront()) {
  transformFeatureOutputCT_.transformCovMat(state_, cov_, featureOutputCov_);
  const double uncertainty = sqrt(featureOutputCov_(2,2)) * featureOutput_.d().getDistanceDerivative();
  const double depth = fsm_.features_[i].mpDistance_->getDistance();
  std::cout << "uncertainty/depth " << uncertainty/depth << " \n";
  if (uncertainty/depth > maxUncertaintyToDistanceRatio) {
    distanceParameterCollection[camID].push_back(featureOutput_.d().p_);
  }
}
```
Shouldn't we make sure we are comparing absolute values?
```cpp
if (featureOutput_.c().isInFront()) {
  transformFeatureOutputCT_.transformCovMat(state_, cov_, featureOutputCov_);
  const double uncertainty = fabs(sqrt(featureOutputCov_(2,2)) * featureOutput_.d().getDistanceDerivative());
  const double depth = fsm_.features_[i].mpDistance_->getDistance();
  std::cout << "uncertainty/depth " << uncertainty/depth << " \n";
  if (uncertainty/depth > maxUncertaintyToDistanceRatio) {
    distanceParameterCollection[camID].push_back(featureOutput_.d().p_);
  }
}
```
as the derivative can be negative.
Sorry for putting it piecemeal, but the sign of the uncertainty comparison should be flipped as well (a scaled uncertainty less than the threshold makes a feature's depth eligible):
```cpp
if (featureOutput_.c().isInFront()) {
  transformFeatureOutputCT_.transformCovMat(state_, cov_, featureOutputCov_);
  const double uncertainty = fabs(sqrt(featureOutputCov_(2,2)) * featureOutput_.d().getDistanceDerivative());
  const double depth = fsm_.features_[i].mpDistance_->getDistance();
  std::cout << "uncertainty/depth " << uncertainty/depth << " \n";
  if (uncertainty/depth < maxUncertaintyToDistanceRatio) {
    distanceParameterCollection[camID].push_back(featureOutput_.d().p_);
  }
}
```
@raghavkhanna this solves the issue your student mentioned in his presentation about ROVIO diverging when coming very close to the ground (on landing). It does in my datasets.
I'll send in a PR if it looks fine to you all.
This is the position plot of one of the wingtra hover flights indoors with VICON. Notice the ending region close to the ground where ROVIO would diverge earlier:
Position (Vicon altitude is true position above ground. The aircraft was strung 35cm above ground pre and post flight).
Errors:
Vanilla ROVIO with the above-mentioned changes, without any tags. 72 degree lens, home-made VI sensor ;-)
> this solves the issue your student mentioned in his presentation about ROVIO diverging when coming very close to the ground (on landing). It does in my datasets.
I agree. The changes make sense to me, and I observe this issue on my PX4 setup, err "Home-made VI-sensor" :)
@devbharat Could you quickly share your diff or send in a PR?
Thanks for the comments. I think I saw this bug and fixed it in one of the branches I am currently working on. I'll try to make a PR as soon as possible (by Wednesday). I am still having issues with yaml-cpp, which doesn't compile on Jenkins (and no time to fix it).
The wrong sign as well as the fabs are integrated in the newest PR. I think the consideration of other frames for the computation of the median depth was already implemented (maybe I understood something wrongly). Including median depths from previous frames would definitely be an interesting extension...
Could somebody maybe check how the performance for take-off and landing is once the new PR is merged? Depending on the scenario, maxUncertaintyToDepthRatioForDepthInitialization might have to be increased.
I can give it a flight next week. Are you referring to the PR which just went into master today?
I am referring to the pull request I created today (#42); I'll merge it by the end of the week.
I see that the PR is merged, but I'll go through it and put it through tests today anyways.
Thanks for that! There should be some performance improvements compared to the old version. It would be great if you could have a look at the median depth initialization. I sometimes observed that disabling median depth initialization improved accuracy, but this could also be due to the additional prior you get from the regular initialization.
I had to tune the parameters a little (I think the MahalanobisTh of 2.221 was too low, at least for my dataset; I also turned off discriminativeSampling and increased patchRejectionTh), basically everything to make feature tracking a little more robust and not lose too many features too often. I also needed to keep the median depth initialization, as a couple of times tracking does lose too many features, and initializing depth with initDepth doesn't cut it since the aircraft is often at a different altitude.
The results are amazing!
Results WITHOUT including Tag information:
This is awesome!
@devbharat Can you share your parameters? I've been getting worse results since the last 2 PRs went in, so a tuned parameter set would be a good baseline for me.
This is the one I am using. Calibration params might be different for you. Most critical values are in the Image update section.
; You can partially override parameter set in this file by creating your own subset of parameter in a separate info-file and include it with:
; #include "/home/user/workspace/rovio/cfg/rovio_custom.info"
Common
{
doVECalibration true; Should the camera-IMU extrinsics be calibrated online
depthType 1; Type of depth parametrization (0: normal, 1: inverse depth, 2: log, 3: hyperbolic)
verbose false; Is the verbose active
}
Camera0
{
CalibrationFile /home/bharat/Documents/Master_Thesis/softwares/catkin_workspace/src/rovio/cfg/phycamsv10_newcalib_radtan.yaml
qCM_x 0.7060
qCM_y 0.7082
qCM_z 0.0005
qCM_w -0.0009
MrMC_x -0.00159984
MrMC_y -0.00058784
MrMC_z -0.00099081
}
Camera1
{
CalibrationFile /home/bharat/Documents/Master_Thesis/softwares/catkin_workspace/src/rovio/cfg/phycamsv10.yaml
qCM_x 0.0
qCM_y 0.0
qCM_z 0.7071
qCM_w 0.7071
MrMC_x 0
MrMC_y 0
MrMC_z 0
}
Init
{
State
{
pos_0 0; X-entry of initial position (world to IMU in world) [m]
pos_1 0; Y-entry of initial position (world to IMU in world) [m]
pos_2 0; Z-entry of initial position (world to IMU in world) [m]
vel_0 0; X-entry of initial velocity (robocentric, IMU) [m/s]
vel_1 0; Y-entry of initial velocity (robocentric, IMU) [m/s]
vel_2 0; Z-entry of initial velocity (robocentric, IMU) [m/s]
acb_0 0; 0.22; X-entry of accelerometer bias [m/s^2]
acb_1 0; 0.15; Y-entry of accelerometer bias [m/s^2]
acb_2 0; 0.13; Z-entry of accelerometer bias [m/s^2]
gyb_0 0; -0.011; X-entry of gyroscope bias [rad/s]
gyb_1 0; -0.016; Y-entry of gyroscope bias [rad/s]
gyb_2 0; Z-entry of gyroscope bias [rad/s]
att_x 0; X-entry of initial attitude (IMU to world, JPL)
att_y 0; Y-entry of initial attitude (IMU to world, JPL)
att_z 0; Z-entry of initial attitude (IMU to world, JPL)
att_w 1; W-entry of initial attitude (IMU to world, JPL)
}
Covariance
{
pos_0 0.0001; X-Covariance of initial position [m^2]
pos_1 0.0001; Y-Covariance of initial position [m^2]
pos_2 0.0001; Z-Covariance of initial position [m^2]
vel_0 1.0; X-Covariance of initial velocity [m^2/s^2]
vel_1 1.0; Y-Covariance of initial velocity [m^2/s^2]
vel_2 1.0; Z-Covariance of initial velocity [m^2/s^2]
acb_0 4e-4; X-Covariance of initial accelerometer bias [m^2/s^4]
acb_1 4e-4; Y-Covariance of initial accelerometer bias [m^2/s^4]
acb_2 4e-4; Z-Covariance of initial accelerometer bias [m^2/s^4]
gyb_0 3e-4; X-Covariance of initial gyroscope bias [rad^2/s^2]
gyb_1 3e-4; Y-Covariance of initial gyroscope bias [rad^2/s^2]
gyb_2 3e-4; Z-Covariance of initial gyroscope bias [rad^2/s^2]
vep 0.01; Covariance of initial linear camera-IMU extrinsics, same for all entries [m^2]
att_0 0.1; X-Covariance of initial attitude [rad^2]
att_1 0.1; Y-Covariance of initial attitude [rad^2]
att_2 0.1; Z-Covariance of initial attitude [rad^2]
vea 0.01; Covariance of initial rotational camera-IMU extrinsics, same for all entries [rad^2]
}
}
ImgUpdate
{
updateVecNormTermination 1e-4;
maxNumIteration 20;
doPatchWarping true; Should the patches be warped
doFrameVisualisation true; Should the frames be visualized
visualizePatches true; Should the patches be visualized
useDirectMethod true; Should the EKF-innovation be based on direct intensity error (o.w. reprojection error)
;innovationInterpolationFactor 1.0; How much should be used from the direct or indirect error terms. Value must be between 0 and 1.
startLevel 2; Highest patch level which is being employed (must be smaller than the hardcoded template parameter)
endLevel 1; Lowest patch level which is being employed
nDetectionBuckets 100; Number of discretization buckets used during the candidates selection
MahalanobisTh 10.88; orig 2.21; Mahalanobis threshold for the update, 5.8858356
UpdateNoise
{
nor 1e-5;
pix 1;2 orig Covariance used for the reprojection error, 1/focal_length is roughly 1 pixel std [rad] (~1/f ~ 1/400^2 = 1/160000)
int 10;10000 originally Covariance used for the photometric error [intensity^2]
dist_0 0.05;
}
initCovFeature_0 0.2; Covariance for the initial distance (in the selected parametrization)
initCovFeature_1 1e-5; Covariance for the initial bearing vector, x-component [rad]
initCovFeature_2 1e-5; Covariance for the initial bearing vector, y-component [rad]
initCovTagFeature_0 0.1; Covariance for the tag corners initial distance (in the selected parametrization)
initCovTagFeature_1 1e-5; Covariance for the tag corners initial bearing vector, x-component [rad]
initCovTagFeature_2 1e-5; Covariance for the tag corners initial bearing vector, y-component [rad]
initDepth 2.85;0.5 orig Initial value for the initial distance parameter
startDetectionTh 0.9; Threshold for detecting new features (min: 0, max: 1)
scoreDetectionExponent 0.25; Exponent used for weighting the distance between candidates
penaltyDistance 50; Maximal distance which gets penalized during bucketing [pix]
zeroDistancePenalty 100; Penalty for zero distance (max: nDetectionBuckets)
statLocalQualityRange 10; Number of frames for local quality evaluation
statLocalVisibilityRange 100; Number of frames for local visibility evaluation
statMinGlobalQualityRange 100; Minimum number of frames for obtaining maximal global quality
trackingUpperBound 0.9; Threshold for local quality for min overall global quality
trackingLowerBound 0.8; Threshold for local quality for max overall global quality
minTrackedAndFreeFeatures 0.75; Minimum of amount of feature which are either tracked or free
removalFactor 1.1; Factor for enforcing feature removal if not enough free
minRelativeSTScore 0.75; Minimum relative ST-score for extracting new feature patch
minAbsoluteSTScore 5.0; Minimum absolute ST-score for extracting new feature patch
minTimeBetweenPatchUpdate 1.0; Minimum time between new multilevel patch extraction [s]
fastDetectionThreshold 20; Fast corner detector threshold while adding new feature
alignConvergencePixelRange 10; Assumed convergence range for image alignment (gets scaled with the level) [pixels]
alignCoverageRatio 2;2 ORIG How many sigma of the uncertainty should be covered in the adaptive alignment
alignMaxUniSample 1; Maximal number of alignment seeds on one side -> total number of samples = 2n+1. Careful, can get very costly if diverging!
;matchingPixelThreshold 0.0; Below this threshold no pre-alignment is carried out [pixels]
;minNoAlignment 5; Minimal number of alignments every feature must make it through.
;specialLinearizationThreshold 0.0; Below this threshold no special linearization point gets computed [pixel]
patchRejectionTh 60.0; 50 orig If the average intensity error is larger than this the feature is rejected [intensity]; if smaller than 0, no check is performed
alignmentHuberNormThreshold -1; Intensity error threshold for Huber norm (enabled if > 0)
alignmentGaussianWeightingSigma -1; Width of Gaussian which is used for pixel error weighting (enabled if > 0)
alignmentGradientExponent 0.0; Exponent used for gradient based weighting of residuals
useIntensityOffsetForAlignment true; Should an intensity offset between the patches be considered
useIntensitySqewForAlignment true; Should an intensity sqew between the patches be considered
;bearingVectorMahalTh 30.21; Threshold for the aligned patch to be accepted as inlier
removeNegativeFeatureAfterUpdate true; Should feature with negative distance get removed
maxUncertaintyToDepthRatioForDepthInitialization 0.3;0.3 orig If set to 0.0 the depth is initialized with the standard value provided above, otherwise ROVIO attempts to figure out a median depth in each frame
useCrossCameraMeasurements true; Should cross measurements between frames be used. Might be turned off in the calibration phase.
doStereoInitialization true; Should a stereo match be used for feature initialization.
noiseGainForOffCamera 10.0; Factor added on update noise if not main camera
discriminativeSamplingDistance 0.0;0.02 orig Sampling distance for checking discriminativity of patch (if <= 0.0 no check is performed).
discriminativeSamplingGain 2.0; 2.0 orig Gain for threshold above which the samples must lie (if <= 1.0 the patchRejectionTh is used).
MotionDetection
{
isEnabled 0; Is the motion detection enabled
rateOfMovingFeaturesTh 0.5; Amount of feature with motion for overall motion detection
pixelCoordinateMotionTh 1.0; Threshold for motion detection for patched [pixels]
minFeatureCountForNoMotionDetection 5; Min feature count in frame for motion detection
}
ZeroVelocityUpdate
{
UpdateNoise
{
vel_0 0.01; X-Covariance of zero velocity update [m^2/s^2]
vel_1 0.01; Y-Covariance of zero velocity update [m^2/s^2]
vel_2 0.01; Z-Covariance of zero velocity update [m^2/s^2]
}
MahalanobisTh0 7.689997599999999; Mahalanobis distance for zero velocity updates
minNoMotionTime 1.0; Min duration of no-motion
isEnabled 0; Should zero velocity update be applied, only works if MotionDetection.isEnabled is true
}
}
Prediction
{
PredictionNoise
{
pos_0 1e-4; X-covariance parameter of position prediction [m^2/s]
pos_1 1e-4; Y-covariance parameter of position prediction [m^2/s]
pos_2 1e-4; Z-covariance parameter of position prediction [m^2/s]
vel_0 4e-4; 4e-6 X-covariance parameter of velocity prediction [m^2/s^3]
vel_1 4e-4; Y-covariance parameter of velocity prediction [m^2/s^3]
vel_2 4e-4; Z-covariance parameter of velocity prediction [m^2/s^3]
acb_0 1e-8; X-covariance parameter of accelerometer bias prediction [m^2/s^5]
acb_1 1e-8; Y-covariance parameter of accelerometer bias prediction [m^2/s^5]
acb_2 1e-8; Z-covariance parameter of accelerometer bias prediction [m^2/s^5]
gyb_0 3.8e-7; X-covariance parameter of gyroscope bias prediction [rad^2/s^3]
gyb_1 3.8e-7; Y-covariance parameter of gyroscope bias prediction [rad^2/s^3]
gyb_2 3.8e-7; Z-covariance parameter of gyroscope bias prediction [rad^2/s^3]
vep 1e-8; Covariance parameter of linear extrinsics prediction [m^2/s]
att_0 7.6e-6; 7.6e-7 X-Covariance parameter of attitude prediction [rad^2/s]
att_1 7.6e-6; Y-Covariance parameter of attitude prediction [rad^2/s]
att_2 7.6e-6; Z-Covariance parameter of attitude prediction [rad^2/s]
vea 1e-8; Covariance parameter of rotational extrinsics prediction [rad^2/s]
dep 0.0001; Covariance parameter of distance prediction [m^2/s]
nor 0.00001; Covariance parameter of bearing vector prediction [rad^2/s]
}
MotionDetection
{
inertialMotionRorTh 0.1; Threshold on rotational rate for motion detection [rad/s]
inertialMotionAccTh 0.5; Threshold on acceleration for motion detection [m/s^2]
}
}
PoseUpdate
{
UpdateNoise
{
pos_0 0.5; X-Covariance of linear pose measurement update [m^2]
pos_1 0.5; Y-Covariance of linear pose measurement update [m^2]
pos_2 0.01; Z-Covariance of linear pose measurement update [m^2]
att_0 10.1; X-Covariance of rotational pose measurement update [rad^2]
att_1 10.1; Y-Covariance of rotational pose measurement update [rad^2]
att_2 10.1; Z-Covariance of rotational pose measurement update [rad^2]
}
init_cov_IrIW 1; Covariance of initial pose between inertial frames, linear part [m^2]
init_cov_qWI 1; Covariance of initial pose between inertial frames, rotational part [rad^2]
init_cov_MrMV 1; Covariance of initial pose between inertial frames, linear part [m^2]
init_cov_qVM 1; Covariance of initial pose between inertial frames, rotational part [rad^2]
pre_cov_IrIW 1e-4; Covariance parameter of pose between inertial frames, linear part [m^2/s]
pre_cov_qWI 1e-4; Covariance parameter of pose between inertial frames, rotational part [rad^2/s]
pre_cov_MrMV 1e-4; Covariance parameter of pose between inertial frames, linear part [m^2/s]
pre_cov_qVM 1e-4; Covariance parameter of pose between inertial frames, rotational part [rad^2/s]
MahalanobisTh0 12.6511204; Mahalanobis distance threshold of pose measurement
doVisualization true; Should the measured poses be visualized
enablePosition true; Should the linear part be used during the update
enableAttitude false; Should the rotation part be used during the update (e.g. set to false if feeding GPS measurements)
noFeedbackToRovio false; By default the pose update is only used for aligning coordinate frames. Set to false if ROVIO should benefit from the pose measurements.
doInertialAlignmentAtStart true; Should the transformation between I and W be explicitly computed and set with the first pose measurement.
timeOffset 9.9; Time offset added to the pose measurement timestamps
imu_timedelay 0.0; Artificially added IMU Timedelay [s] 0.0628505901891
qVM_x 0.0; X-entry of quaternion representing IMU to reference body coordinate frame of pose measurement (JPL)
qVM_y 1.0; Y-entry of quaternion representing IMU to reference body coordinate frame of pose measurement (JPL)
qVM_z 0.0; Z-entry of quaternion representing IMU to reference body coordinate frame of pose measurement (JPL)
qVM_w 0.0; W-entry of quaternion representing IMU to reference body coordinate frame of pose measurement (JPL)
MrMV_x 0; X-entry of vector representing IMU to reference body coordinate frame of pose measurement
MrMV_y 0; Y-entry of vector representing IMU to reference body coordinate frame of pose measurement
MrMV_z 0; Z-entry of vector representing IMU to reference body coordinate frame of pose measurement
qWI_x 0.0; X-entry of quaternion representing World to reference inertial coordinate frame of pose measurement (JPL)
qWI_y 0.0; Y-entry of quaternion representing World to reference inertial coordinate frame of pose measurement (JPL)
qWI_z 0.0; Z-entry of quaternion representing World to reference inertial coordinate frame of pose measurement (JPL)
qWI_w 1.0; W-entry of quaternion representing World to reference inertial coordinate frame of pose measurement (JPL)
IrIW_x 0.5; X-entry of vector representing World to reference inertial coordinate frame of pose measurement
IrIW_y -0.5; Y-entry of vector representing World to reference inertial coordinate frame of pose measurement
IrIW_z 0; Z-entry of vector representing World to reference inertial coordinate frame of pose measurement
}
Thanks for the testing. I quickly had a look at your parameters.
I did some quick tuning on one of the EuRoC datasets and came up with some intermediate parameters. Maybe you could have a brief go with them. They are included in the recent PR #46.
@bloesch @devbharat With Rovio current master, I'm seeing pretty bad performance with the default values from rovio.info
I also tried this https://github.com/ethz-asl/rovio/pull/46 without much improvement.
There is also this issue where Rovio just hangs up, and stops publishing TF, etc. I haven't done a proper bisection yet (will do tomorrow) but I see that the using master before PRs https://github.com/ethz-asl/rovio/pull/42, https://github.com/ethz-asl/rovio/pull/34 works fine. Used with params from master back then.
I can record a bag and send you, if it would help.
Did you test @devbharat's parameters? If you still get bad performance, it would be interesting if you could provide a rosbag dataset (together with .yaml and .info).
@bloesch Irrespective of the parameter set, Rovio just hangs up, and stops publishing TF after running some time.
I never observed that. Does the tracking diverge before it starts hanging?
I guess sharing the bag file etc would be best.
@devbharat @bloesch Here you go :)
https://www.dropbox.com/s/s2z4h46we45n0en/rovio_mav2.tar.gz?dl=0
Sorry for the delay, I ran out of high-speed internet access :P
Also,
does the tracking diverge before it starts hanging?
No, not really.
Thanks for the dataset, I will have a look as soon as I find some time.
@bloesch Any time to check it out? :)
I've been trying to work on this as well, but compiling is crazy on current master. It takes around 20 mins every time. That's just weird. I'm using catkin tools.
I've checked the dataset; it works fine on my side. I had to change the info file since it was missing a couple of changes required for the newer versions.
New in ImgUpdate: updateVecNormTermination 1e-4; maxNumIteration 20; useIntensityOffsetForAlignment true; useIntensitySqewForAlignment true; noiseGainForOffCamera 10.0; alignmentHuberNormThreshold 10; alignmentGaussianWeightingSigma -1; alignmentGradientExponent 0.0; discriminativeSamplingDistance 0.02; discriminativeSamplingGain 1.1; UpdateNoise ->pix 2;
Other Changes: startLevel 2; MahalanobisTh 9.21; UpdateNoise ->int 400; penaltyDistance 50; trackingLowerBound 0.8; patchRejectionTh 50;
Like this the info file is more or less identical to the one on the current master branch. Also, during the test I was using the rosbag loader node and the standard template settings (so only 1 camera).
I'll cross check with those settings. Lets see how that goes.
Otherwise, I was just using the normal node and replaying the rosbag. Also, I was using 2 cameras. Can you please retry with a 2-camera setup?
The bagfile has synced stereo/IMU streams.
I've checked the stereo setup; the issue is that ROVIO struggles at finding stereo pairs, which ends up being very time consuming. It is not real-time, but it does not diverge. Could you please double-check your calibration?
You could set useCrossCameraMeasurements to false, but then every camera has its own set of features and no cross matches are searched.
If the problem persists, one could implement a mode where the frequency of cross-camera searches is reduced for a feature which tends to fail matching.
@bloesch I have checked the camera calibration, and recalibrated multiple times as well.
I consistently get good results with master@https://github.com/ethz-asl/rovio/commit/61cde9dc5845963d7782e0558ed46d1a4dde084d
Basically just reverting to that commit and recompiling gives me a perfectly working, realtime system (tested on the same bagfile). No divergence/slowdowns.
With current master + your settings, stereo is very slow and eventually fails. Even mono tracking seems very bad, with the patches not staying 'locked on'. I'm running this on a slightly older 3rd-gen Intel i5 system. On the actual vehicle (4th-gen i7) it performs slightly better, but nowhere close to real time, and it eventually still diverges.
I did this video with the older commit head and same rovio.info (and calibration) as the attached one: https://youtu.be/1YTOmWUwer4?list=PLfb6rm3uySr6YxoMJVYhDzM7Qf6hMgG2j Performance was great! I doubt that its a calibration issue. Reprojection errors, etc. were also very low.
If it works with the old master then the calibration should be fine.
I think the issue is related to the slowdowns. ROVIO is not fast enough to process all the data and starts throwing measurements away, which eventually leads to divergence. To check this you can use the data loader node (max speed, but not necessarily real-time).
There are a couple of things which became more costly in the newer version. I think I will have to make another iteration on rovio. I might also get rid of all the templating stuff in order to make the code more readable and to avoid such long compilation time. However, I won't be able to do this before June/July.
In the meantime you can either stick with the old commit, or try to reduce the computational load. One thing I tested with your dataset was reducing the patch size to 4x4 and the image levels to 3 (compile-time parameters). In the stereo case you can also disable the cross-camera matches, which are rather computationally demanding.
In image update
Two things to get a better initialization estimate when a new frame loses too many features:
This handles the vicious cycle where I suddenly lose most features in the current frame, so its median depth for initialization is not good enough, which leads to bad depth initialization of all new features, which leads to tracking failure for most of them in the subsequent imgUpdate, and so on until it diverges. This, for example, happened when I was about to land close to the ground. I tested this strategy and it works better than the one we currently have.