Closed: Gatsby23 closed this issue 1 year ago.
Hi @Gatsby23 -- thanks for your comment.

Since line 3 is declared `static`, its initializer only runs on the first iteration, i.e., the first time a scan is received, to seed the value used in line 4. So the first value of `median_lpf` equals the median distance. In subsequent iterations, however, the initializer is skipped and it is line 5 that updates `median_prev`, creating a simple low-pass filter. If you print out the values of `median_curr` (the median distance) and `median_lpf`, you'll see that they are different after the first iteration. Let me know if that helps.
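For illustration, here is a minimal standalone sketch (not code from the repository; the per-scan medians are made up) that does exactly that print-out, showing the two values only coincide on the first call because the `static` initializer runs once:

```cpp
#include <cstdio>

// Same low-pass-filter pattern as discussed above, pulled into a free function
// so it can be called repeatedly. median_prev is initialized only on the first call.
float filteredMedian(float median_curr) {
  static float median_prev = median_curr;                       // runs once, on the first "scan"
  float median_lpf = 0.95f * median_prev + 0.05f * median_curr; // simple low-pass filter
  median_prev = median_lpf;                                     // carried over to the next call
  return median_lpf;
}

int main() {
  const float fake_medians[] = {10.0f, 20.0f, 20.0f, 5.0f};     // made-up per-scan medians
  for (float m : fake_medians) {
    std::printf("median_curr = %.3f   median_lpf = %.3f\n", m, filteredMedian(m));
  }
  // After the first line (10.000 / 10.000), median_lpf lags median_curr:
  // roughly 10.500, then 10.975, then 10.676.
  return 0;
}
```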
OK, that was my misunderstanding. Thank you for your comment! I really like this work: simple and effective.
Thank you for your wonderful work, but it seems there is a small bug in the adaptive threshold estimation. From the paper, we know: "Thus, we choose to scale the translational threshold for new keyframes according to the "spaciousness" in the instantaneous point cloud scan, defined as $m_k = \alpha m_{k-1} + \beta M_k$, where $M_k$ is the median Euclidean point distance from the origin to each point in the preprocessed point cloud, $\alpha = 0.95$, $\beta = 0.05$." However, the code in odom.cc that computes the "spaciousness" is:
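(The quoted snippet did not carry over here. The lines below are a reconstruction for reference only, based on the variable names and line references used in this thread; the function name and simplified signature are assumptions, and the exact code in odom.cc may differ.)

```cpp
// Reconstruction for reference only -- not a verbatim copy of odom.cc.
// "Line 3/4/5" in the reply above refer to the three statements in the body.
float filterSpaciousness(float median_curr /* M_k: median point distance of the current scan */) {
  static float median_prev = median_curr;                       // "line 3": initializer runs only once
  float median_lpf = 0.95f * median_prev + 0.05f * median_curr; // "line 4": m_k = 0.95 m_{k-1} + 0.05 M_k
  median_prev = median_lpf;                                     // "line 5": becomes m_{k-1} for the next scan
  return median_lpf;                                            // used as the spaciousness value
}
```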
It seems it always uses the median distance as "spaciousness".