Closed: JiatianWu closed this issue 5 years ago
Thanks @JiatianWu. I have not done such a comparison so far, because ROVIO (robust VIO) is implemented in a SLAM framework using an iterated EKF and a patch-based (photometric) measurement model, which is totally different from R-VIO. So the experiments would need to be carefully designed in order to have a rigorous comparison. For evaluation on benchmark datasets, I recommend the evaluation tool rpg_trajectory_evaluation, which I also used in my paper.
Thank you very much!
Does the output mean the IMU's pose (quaternion and position) in the world frame?
[ INFO] [1563838624.019541019]: qkG: 0.759310, -0.299362, 0.532467, 0.224299
[ INFO] [1563838624.019610277]: pGk: -0.462538, 0.162422, 0.106228
I kept noticing that for the same dataset, R-VIO outputs different results each time, and the position diverges considerably from the ground truth pose.
For example, I replayed the V1_02_medium dataset three times; below is what the VIO outputs at the last timestamp:
first time
[ INFO] [1563841862.584924902]: qkG: 0.813190, -0.022145, 0.581020, 0.025457
[ INFO] [1563841862.585027409]: pGk: 0.315739, -0.190942, 0.130115
second time
[ INFO] [1563842029.229304547]: qkG: 0.815993, -0.019634, 0.577180, 0.025163
[ INFO] [1563842029.229380632]: pGk: 0.127791, -0.166652, 0.323582
third time
[ INFO] [1563842192.546398699]: qkG: 0.816051, -0.025197, 0.576756, 0.027906
[ INFO] [1563842192.546473101]: pGk: 0.115808, -0.111901, 0.266111
But the ground truth pose given by EuRoC is:
p (x, y, z): 0.524978 1.98719 0.971496
q (x, y, z, w):0.7901 -0.2069 0.5545 0.1592
Yes, the output is the IMU pose in the world frame. However, this world frame is not the same "world frame" as the ground truth's, i.e., they use different frames of reference. This is why the final pose differs from the ground truth. In order to do the evaluation, R-VIO's output should be compared with the ground truth after 6-DOF alignment, for which you may refer to the tool I mentioned in my last reply.
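For reference, the "posyaw" alignment used in the statistics below fits a yaw rotation plus a translation that best maps the estimated positions onto the ground truth before computing the error. A minimal self-contained sketch of that idea follows; the names (`AlignPosYaw`, `RmseAfterAlign`) are illustrative and not taken from rpg_trajectory_evaluation:

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double, 3>;

struct Alignment { double yaw; Vec3 t; };

static Vec3 Mean(const std::vector<Vec3>& v) {
    Vec3 m{0, 0, 0};
    for (const auto& p : v)
        for (int i = 0; i < 3; ++i) m[i] += p[i];
    for (int i = 0; i < 3; ++i) m[i] /= static_cast<double>(v.size());
    return m;
}

// Least-squares yaw + translation ("posyaw") alignment of the
// estimated positions onto the ground truth positions. The yaw has a
// closed form from the centered x-y components of both trajectories.
Alignment AlignPosYaw(const std::vector<Vec3>& est,
                      const std::vector<Vec3>& gt) {
    const Vec3 me = Mean(est), mg = Mean(gt);
    double num = 0.0, den = 0.0;
    for (std::size_t k = 0; k < est.size(); ++k) {
        const double ex = est[k][0] - me[0], ey = est[k][1] - me[1];
        const double gx = gt[k][0] - mg[0], gy = gt[k][1] - mg[1];
        num += ex * gy - ey * gx;
        den += ex * gx + ey * gy;
    }
    const double yaw = std::atan2(num, den);
    const double c = std::cos(yaw), s = std::sin(yaw);
    // t = mean(gt) - R(yaw) * mean(est)
    return {yaw, Vec3{mg[0] - (c * me[0] - s * me[1]),
                      mg[1] - (s * me[0] + c * me[1]),
                      mg[2] - me[2]}};
}

// Translation RMSE after applying the alignment to the estimate.
double RmseAfterAlign(const std::vector<Vec3>& est,
                      const std::vector<Vec3>& gt) {
    const Alignment a = AlignPosYaw(est, gt);
    const double c = std::cos(a.yaw), s = std::sin(a.yaw);
    double sq = 0.0;
    for (std::size_t k = 0; k < est.size(); ++k) {
        const double x = c * est[k][0] - s * est[k][1] + a.t[0];
        const double y = s * est[k][0] + c * est[k][1] + a.t[1];
        const double z = est[k][2] + a.t[2];
        const double dx = x - gt[k][0], dy = y - gt[k][1], dz = z - gt[k][2];
        sq += dx * dx + dy * dy + dz * dz;
    }
    return std::sqrt(sq / static_cast<double>(est.size()));
}
```

The actual tool also supports SE(3) and Sim(3) alignment; this only shows why the raw ending pose is not directly comparable to the ground truth.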
Hi Huai, I evaluated the latest R-VIO on the EuRoC datasets and found that several sequences did not match the performance reported in the paper. I evaluated on EuRoC V1_1_easy, V1_2_medium, V2_1_easy, and V2_2_medium, using the default parameters provided in your repo. The V2_1_easy sequence achieves really good results, but the other three behave strangely. Below are the statistics for each:
V1_1_easy absolute_err_statistics_posyaw_-1
rot:
max: 0.08933021225761871
mean: 0.054473494356535604
median: 0.055712474301888096
min: 0.029637790259560155
num_samples: 2738
rmse: 0.0551126928128585
std: 0.008369427798421487
scale:
max: 1.5140730784442527
mean: 0.05848540716951807
median: 0.030040909486390932
min: -0.44576301341258995
num_samples: 2738
rmse: 0.15982739566544812
std: 0.14874223863252628
trans:
max: 6.0095962744367615
mean: 0.40374009767014646
median: 0.15326950271724532
min: 0.0060787315104680805
num_samples: 2738
rmse: 0.905058255968657
std: 0.8100150493850887
V1_2_medium absolute_err_statistics_posyaw_-1
rot:
max: 2.7641308535939175
mean: 0.027832418869228322
median: 0.025779782718659768
min: 0.008097689373074765
num_samples: 1591
rmse: 0.07425328425299947
std: 0.06883971733123664
scale:
max: 0.23393020848711998
mean: -0.0024639098190459285
median: -0.011160039533700705
min: -0.09753832578911326
num_samples: 1591
rmse: 0.051199899930780605
std: 0.0511405797906668
trans:
max: 3.3436031572603286
mean: 0.18798938106278199
median: 0.1568422046454815
min: 0.03319670069123485
num_samples: 1591
rmse: 0.23351402914492783
std: 0.13852362403261892
V2_1_easy absolute_err_statistics_posyaw_-1
rot:
max: 0.06317277995223812
mean: 0.03340989363171005
median: 0.03992941150839055
min: 0.0037895025369732295
num_samples: 2125
rmse: 0.036009734689737276
std: 0.013434284496879194
scale:
max: 0.07422606082453931
mean: 0.0014510771647384757
median: 0.005836547579988194
min: -0.07907369267089004
num_samples: 2125
rmse: 0.023497259576948454
std: 0.023452411020798484
trans:
max: 0.35124586799730506
mean: 0.08504650257289237
median: 0.08083916990525146
min: 0.015856977056523124
num_samples: 2125
rmse: 0.09400638111439162
std: 0.04005361519692398
V2_2_medium absolute_err_statistics_posyaw_-1
rot:
max: 0.11905136880840729
mean: 0.058471508104541575
median: 0.0569831477418241
min: 0.013122316525448345
num_samples: 2204
rmse: 0.059475795688582994
std: 0.010883612119631798
scale:
max: 0.5129127902435755
mean: 0.0350569152207618
median: 0.007165926822642277
min: -0.2747336543499602
num_samples: 2204
rmse: 0.13703508550237795
std: 0.132475006525187
trans:
max: 1.1377783313468757
mean: 0.40943637449290354
median: 0.3271083259147942
min: 0.07818912876140116
num_samples: 2204
rmse: 0.4758554527556255
std: 0.2424876639323496
Do you know what the problem is? Thank you very much.
Sorry, I forgot to mention that when I play the rosbag I leave the play rate at the default, which is full speed. I think a slower play rate would improve the performance; what rate do you recommend?
First of all, to validate the results in my paper, here are my single-run evaluation results on the same sequences you used; the values are quite different from yours. I also ran R-VIO in real time (full speed) with the default settings provided in my repo (although in practice you should change the parameter values according to the motion profile).
V1_1_easy absolute_err_statistics_posyaw_-1
rot:
max: 0.11546159036192433
mean: 0.09985595099193521
median: 0.10092933763372572
min: 0.08007824912689389
num_samples: 2801
rmse: 0.10009345251834682
std: 0.006891174684970752
scale:
max: 0.10801994273339743
mean: 0.010854202032131947
median: 0.0069311279336001785
min: -0.07977104197221874
num_samples: 2801
rmse: 0.03063091091570455
std: 0.028643306404315314
trans:
max: 0.31743349831681533
mean: 0.10315212270325827
median: 0.09492436960166895
min: 0.010866362203827654
num_samples: 2801
rmse: 0.11594239392852757
std: 0.05293654967685192
V1_2_medium absolute_err_statistics_posyaw_-1
rot:
max: 0.05099824337986406
mean: 0.0351966478141372
median: 0.03397452874854015
min: 0.010639877323732208
num_samples: 1601
rmse: 0.0355622874251547
std: 0.005086479092349357
scale:
max: 0.06281205010775937
mean: -0.0018648311959821304
median: -0.0076808737790491355
min: -0.03810668400238515
num_samples: 1601
rmse: 0.02354809865574968
std: 0.023474142261463177
trans:
max: 0.3241622021357461
mean: 0.11928141512177137
median: 0.11506803113577181
min: 0.021260667786906964
num_samples: 1601
rmse: 0.13154014593710545
std: 0.055450464377700716
V2_1_easy absolute_err_statistics_posyaw_-1
rot:
max: 0.050861129639744936
mean: 0.029273860817354384
median: 0.032493393855938274
min: 0.00201169099673078
num_samples: 2111
rmse: 0.03186331683248744
std: 0.012582210950927541
scale:
max: 0.0376493511356657
mean: -0.012438233644940282
median: -0.0035203179145399943
min: -0.10836055609098627
num_samples: 2111
rmse: 0.03155075870306628
std: 0.028995529285270657
trans:
max: 0.25246734925911785
mean: 0.1039282332074172
median: 0.10691275967261597
min: 0.004110081483376396
num_samples: 2111
rmse: 0.1216575277463457
std: 0.0632414136443642
V2_2_medium absolute_err_statistics_posyaw_-1
rot:
max: 0.07364375195357868
mean: 0.02950799920652185
median: 0.024212268247136317
min: 0.004711234656791974
num_samples: 2267
rmse: 0.032905139399719445
std: 0.014561118835545398
scale:
max: 0.24773928944726942
mean: 0.008174804129132369
median: -0.0007349538744670925
min: -0.11860240654627108
num_samples: 2267
rmse: 0.06705954358218671
std: 0.06655940927398259
trans:
max: 0.5379469710631998
mean: 0.21670773612784036
median: 0.17828577617662228
min: 0.0384400312532186
num_samples: 2267
rmse: 0.2448796761822178
std: 0.11403426199811241
As those results were obtained on my laptop, there will be small differences from the ones obtained on your device. This may also depend on how you record the outputs; I wrote the pose to a .dat (or .txt) file right after the state composition, which is the same pose shown on the terminal. Please note that I used this evaluation tool in my IJRR paper, while in my IROS paper I used another one, written by myself, that uses a different alignment, so please refer to the results in my IJRR paper.
There is a bug in your code: in updater.cc, you should use fabs() instead of abs() to get the absolute value of a double. The result is better after fixing this bug. However, I still cannot reproduce the results shown in your paper.
Thank you very much for the detailed reply. I recorded the pose the same way you mentioned. I can reproduce the results shown in the IJRR paper if I replay the dataset at a slower play rate (I use 0.1). At real-time speed I still get results similar to those in my comments above on my laptop.
You can try slowing the rosbag play rate. That works on my laptop.
It also works on my laptop. Why is the accuracy of the algorithm affected by the rosbag play rate?
For your case @JiatianWu, I think it may be related to image drop, which can happen if your laptop cannot finish the estimation within the time interval between two images. Although R-VIO is robust to this situation, it can still affect the accuracy. Also, lowering the rosbag rate is effectively the same as lowering the sensor rate, which gives you more time to process the data. Thanks @zbqq for finding that bug; I have corrected it in the latest version.
Hi Huai, for the EuRoC sequences V1_3_difficult and V2_3_difficult I still could not reproduce reasonable performance, and V2_3_difficult gives very divergent results. I ran your latest version and played the rosbag at a 0.1 rate. Below are the statistics for each:
V1_3_difficult absolute_err_statistics_posyaw_-1
rot:
max: 0.044585637706316066
mean: 0.020464485276520863
median: 0.02029049254207592
min: 0.006705279882960416
num_samples: 1998
rmse: 0.02147172772785171
std: 0.006499225645110101
scale:
max: 0.847933014928494
mean: 0.023804562278327508
median: 0.0017115236710822934
min: -0.11748967456307291
num_samples: 1998
rmse: 0.11587927964757022
std: 0.11340789331601664
trans:
max: 3.2488861965095803
mean: 0.24988102945091947
median: 0.14053240656519853
min: 0.027990385998957507
num_samples: 1998
rmse: 0.48471512162966207
std: 0.4153410890545345
V2_3_difficult absolute_err_statistics_posyaw_-1
rot:
max: 0.3216509978474663
mean: 0.2686595886258145
median: 0.2664545952033104
min: 0.23184053062115847
num_samples: 1866
rmse: 0.26884122582348763
std: 0.0098807966117907
scale:
max: 7.269217246867154
mean: 1.395399881693685
median: 0.9734986668338077
min: -0.38319036308834065
num_samples: 1866
rmse: 1.8857054134621625
std: 1.2683627543135108
trans:
max: 16.056465963739324
mean: 5.304419763841001
median: 4.200056370064609
min: 1.7133756080392437
num_samples: 1866
rmse: 6.091170023754736
std: 2.9942416781651513
I guess it is affected by some floating-point precision problem. Do you have any clue?
Please carefully read my previous comment, and if possible also my paper, where I mentioned that the values of the parameters (marked as "tunable" in the config file) should be changed based on the motion profile. As both _03 sequences have highly dynamic motion, feature tracking becomes harder, which means you may not always have enough tracks spanning 15 images. So please try a shorter sliding-window size for those two sequences. As I have open-sourced the code, I welcome all of you to find and report bugs, but I cannot guarantee that everyone can reproduce the exact same or similar results as in my paper, especially on different devices. My original purpose in open-sourcing R-VIO was to benefit our research community; please feel free to modify this codebase to improve performance on your own device. Thank you.
Cool. Thank you for sharing the code, great work.
Hi, very impressive work! Did you compare R-VIO with ROVIO on the EuRoC datasets? By the way, how do you evaluate R-VIO against the EuRoC ground truth? Thank you.