machines-in-motion / mim_estimation

This package contains C++ implementations of estimation algorithms, their Python bindings, and Python prototypes
BSD 3-Clause "New" or "Revised" License

Decide for a Kalman filter approach to use for BLMC robots #6

Closed: ahmadgazar closed this issue 3 years ago

ahmadgazar commented 3 years ago

Note: for rendering GitHub equations with MathJax, please install either the MathJax GitHub plugin for Firefox or the MathJax GitHub plugin for Chrome.

This issue is part of this epic, which aims to settle on a unified design of a generic KF for BLMC robots.

Goal:

The aim of this issue is to decide between the following approaches (others can also add more references), mentioned earlier by @nrotella, for estimation on BLMC robots. I am quoting him here:

"After thinking a little more, I actually think that for point contacts, the Pronto approach may perform better (especially because quadrupeds typically need to deal with a lot more foot slippage) and be easier to implement. Here is a very basic history of legged odometric estimators and their differences, in order:

  1. In Bloesch et al 2012 a point-foot filter which integrated IMU and leg odometry was introduced. It explicitly included the foot positions in the EKF state and added/dropped measurements from the EKF update based on the foot contact states.
  2. In Bloesch et al 2013 the same filter was modified to remove the foot position states from the EKF and instead use the contact constraint over a short dt to compute a base velocity measurement. This simplified the filter implementation and allowed adding a validation gate on the velocity measurement to reject measurements from slipping feet.
  3. In Rotella et al 2014 we extended Bloesch et al 2012 to explicitly include foot orientation measurements for the humanoid's flat feet. In retrospect we may have wanted to start from Bloesch et al 2013 and eliminate the foot position/orientation states from the filter entirely, but this wasn't published when we started, and it seemed less relevant for a humanoid which makes pretty stable contacts.
  4. In Fallon et al 2014 essentially the same filter from Bloesch et al 2013 was extended to include LIDAR corrections for absolute base pose drift. Since Atlas has a really high precision gyroscope, they didn't care about including foot orientation measurements for base orientation corrections, so it's the same as a point foot filter.
  5. Skipping ahead some years, Pronto derives from Fallon et al 2014 but includes visual odometry as well; however, the kinematic filter at the core is basically the same. They also include earlier work from Camurri et al 2017 which adds contact estimation, which is nice. But fundamentally, the quadruped kinematic filter is the same as Bloesch et al 2013.

    Overall I think for a quadruped like Solo and even a humanoid like Bolt, both of which are very dynamic and make lots of frequent, unstable contacts, it makes more sense to go with something based on Bloesch et al 2013. This basically means the foot positions are not included explicitly in the filter, and instead get used to compute a base velocity measurement. For humanoids, as Pronto shows, you can do the same thing and forget about including foot orientation states or measurements (especially with a good IMU like the one on Atlas; with a cheaper IMU this might not be true). If we're basically starting from scratch anyway, I think for a robot like Solo it makes more sense to implement this slightly different EKF."
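For concreteness, the base-velocity measurement that replaces the explicit foot-position states can be sketched roughly as below (a minimal Python sketch with hypothetical names, assuming rigid point contacts and bias-corrected gyro data; not the actual filter code):

```python
import numpy as np

def base_velocity_from_leg(p_foot_b, J_foot, dq_leg, omega_b):
    """Base linear velocity pseudo-measurement (expressed in the base frame)
    from one leg whose foot is in rigid, non-slipping contact.

    p_foot_b : (3,)   foot position in the base frame (forward kinematics)
    J_foot   : (3, n) translational Jacobian of the foot w.r.t. the leg joints
    dq_leg   : (n,)   leg joint velocities
    omega_b  : (3,)   base angular velocity in the base frame (bias-corrected gyro)
    """
    # Rigid contact: the foot is stationary in the world, so in the base frame
    #   v_base + omega x p_foot + J_foot * dq = 0
    return -(np.cross(omega_b, p_foot_b) + J_foot @ dq_leg)

# One such measurement per foot in contact is fed to the EKF/UKF update;
# a validation gate on the innovation can then reject slipping feet.
```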

ahmadgazar commented 3 years ago

I agree as well on using Bloesch et al 2013. It's a UKF, however, which I believe will work even better. I am also OK with implementing both versions.

MaximilienNaveau commented 3 years ago

If you agree, let's start with this one. If we get bored we can implement two of them, but let's start with one ^^

nrotella commented 3 years ago

I agree as well on using Bloesch et al 2013. It's a UKF, however, which I believe will work even better. I am also OK with implementing both versions.

Thank you for creating this issue. I agree we should implement one of these methods which has been shown to work well on a quadruped; I feel the results will be better than with the humanoid filter from my paper. Interestingly, the papers which follow Bloesch et al 2013 (Fallon et al 2014 and Pronto) go back to using an EKF instead of a UKF. Bloesch et al 2013 describes the choice as follows:

(screenshot of the relevant passage from Bloesch et al 2013 motivating the choice of a UKF)

However, this is a very technical motivation and I'm sure an EKF would work fine as well. I wonder if Fallon et al chose to use an EKF because it's more efficient or just because most people seem more comfortable with EKF than UKF (myself included). I think either way can work; whatever is fastest for prototyping in Python makes sense.

I actually wrote C++ EKF and UKF template classes which I used for some later papers; perhaps they're useful after prototyping in Python: https://gitlab.is.tue.mpg.de/amd-clmc/humanoid_control/-/tree/master/filtering_tools/template/include/kalman_filter_templates

I also remember Manuel Wuthrich was working on some generic filtering library at MPI; I wonder what happened with that?

ahmadgazar commented 3 years ago

I wonder if Fallon et al chose to use an EKF because it's more efficient or just because most people seem more comfortable with EKF than UKF (myself included).

I think speed is one reason, and also tuning the parameters of the sigma points can sometimes be a bit cumbersome.

I also remember Manuel Wuthrich was working on some generic filtering library at MPI; I wonder what happened with that?

Yeah, but I think that's for fixed-base manipulators though.
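For context on the sigma-point tuning mentioned above, this is roughly how the scaled unscented transform generates its sigma points; alpha, beta and kappa are the scaling parameters that need tuning (a minimal sketch, not any particular library's implementation):

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled unscented-transform sigma points and weights for mean x, covariance P.
    alpha, beta, kappa are the tuning parameters referred to above."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)             # matrix square root of (n+lam)*P
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))   # weights for the mean
    w_c = w_m.copy()                                  # weights for the covariance
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return np.array(pts), w_m, w_c
```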

ahmadgazar commented 3 years ago

I skimmed through the latest version of Pronto, and I think the notation there is pretty neat. One evident difference I noticed from the Bloesch papers is that the velocities are described in the base frame and not in the world frame, and that they don't incorporate IMU information during discretization of the base position.

nrotella commented 3 years ago

I skimmed through the latest version of Pronto, and I think the notation there is pretty neat. One evident difference I noticed from the Bloesch papers is that the velocities are described in the base frame and not in the world frame, and that they don't incorporate IMU information during discretization of the base position.

Thank you for sharing the paper, it's a nice one. Bloesch et al actually introduced these "robot-centric" coordinates in the 2013 paper, on which Fallon based his 2014 paper and then Pronto. I think the reason for expressing the base velocity in the base frame is that they measure this velocity per leg in contact, so it complicates the process model a little in order to simplify the measurement model, I guess.

What do you mean by "they don't incorporate IMU information during discretization of the base position" ? This part I'm not sure about.

I know @MaximilienNaveau had mentioned there was a possibility to use Pronto for Solo; if this is still a consideration then maybe this example for Pronto on ANYmal would help. I understand the concern is that Solo has no perception, so unless Pronto allows using only the kinematic model (which doesn't seem to be possible, but I haven't looked much at the GitHub repo) then it's not worth pursuing. I think to get decent long-term base position estimation on Solo you will eventually need to add at least stereo vision, though. If you care mainly about base velocity then I think a kinematic-only approach is valid for now.

ahmadgazar commented 3 years ago

What do you mean by "they don't incorporate IMU information during discretization of the base position" ?

Sorry, maybe I expressed this in a confusing way. I mean that the IMU acceleration is used when integrating the base velocity (as first-order information), but not as second-order information when integrating the base position. So if we assume robot-centric coordinates (i.e. quantities expressed in the base frame) and a zero-order hold on the IMU data, Pronto integrates the base position and velocity using first-order integration as

\begin{align}
r^{-}_{k+1} &= r^{+}_{k} + \Delta t \, v^{+}_{k}, \tag{1a} \\
v^{-}_{k+1} &= v^{+}_{k} + \left( -[\omega_{k}]^{\times} v^{+}_{k} + R_{k}^{T} g + a_{k} \right) \Delta t, \tag{1b}
\end{align}

while Bloesch and @nrotella (regardless of the different convention) use second-order information for integrating the base position (so as to incorporate the IMU acceleration there as well) and first-order integration for the base velocity, as follows:

\begin{align}
r^{-}_{k+1} &= r^{+}_{k} + \Delta t \, v^{+}_{k} + \frac{\Delta t^{2}}{2} \left( R_{k}^{T} g + a_{k} \right), \tag{2a} \\
v^{-}_{k+1} &= v^{+}_{k} + \left( -[\omega_{k}]^{\times} v^{+}_{k} + R_{k}^{T} g + a_{k} \right) \Delta t, \tag{2b}
\end{align}

Here $a_k$ is the proper acceleration, $\omega_k$ is the angular velocity, $R_k$ is the rotation matrix from base to world, $g$ is the gravity vector, and $^{-/+}$ denotes prior/posterior quantities respectively.

In the end, using second-order integration will lead to more accurate base positions, but maybe the difference was not so evident in their case with ANYmal. I just wanted to confirm whether my observation is correct, and to decide what is beneficial in practice in the end. We can also discuss this in more detail here once we settle on which approach to select from the above.
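To make the comparison concrete, here is a minimal Python sketch of the two prediction steps as written above (robot-centric states, zero-order hold on the IMU; the function and argument names are just placeholders, not Pronto's or Bloesch's actual code):

```python
import numpy as np

def predict_first_order(r, v, R, omega, a, g, dt):
    """Prediction in the style of eqs. (1): the IMU acceleration only enters
    the velocity update; the position uses the previous velocity alone."""
    v_dot = -np.cross(omega, v) + R.T @ g + a           # base-frame velocity dynamics
    r_next = r + dt * v                                  # eq. (1a)
    v_next = v + dt * v_dot                              # eq. (1b)
    return r_next, v_next

def predict_second_order(r, v, R, omega, a, g, dt):
    """Prediction in the style of eqs. (2): same velocity update, but the
    position also receives the 0.5 * dt^2 acceleration term."""
    v_dot = -np.cross(omega, v) + R.T @ g + a
    r_next = r + dt * v + 0.5 * dt**2 * (R.T @ g + a)    # eq. (2a)
    v_next = v + dt * v_dot                              # eq. (2b)
    return r_next, v_next
```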

ahmadgazar commented 3 years ago

I understand the concern is that Solo has no perception, so unless Pronto allows using only the kinematic model (which doesn't seem to be possible, but I haven't looked much at the GitHub repo) then it's not worth pursuing. I think to get decent long-term base position estimation on Solo you will eventually need to add at least stereo vision, though. If you care mainly about base velocity then I think a kinematic-only approach is valid for now.

I agree. I think for now there is no plan to include perception on Solo, and if that becomes the case in the future, it might make sense to import Pronto. My opinion, out of the above choices, is to proceed with EKF/UKF versions based on Bloesch 2013, since the rest of the recent papers build on top of it. Do we all agree on this?

MaximilienNaveau commented 3 years ago

@nrotella regarding the template EKF/UKF you wrote, I think the code is already inside this repo: https://github.com/machines-in-motion/mim_estimation/tree/master/include/mim_estimation/filtering_tools

Let me know if this is the one you thought about.

nrotella commented 3 years ago

@nrotella regarding the template EKF/UKF you wrote, I think the code is already inside this repo: https://github.com/machines-in-motion/mim_estimation/tree/master/include/mim_estimation/filtering_tools

Let me know if this is the one you thought about.

Yes, that's the one; it may be useful here when moving to C++, thanks for finding it. It is general enough to be used for any estimation problem which you can describe through state and measurement classes, I believe.
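As a rough Python illustration of that decoupling (hypothetical names, not the actual filtering_tools interface), the filter skeleton only touches the estimation problem through the supplied process and measurement models:

```python
import numpy as np

class GenericEKF:
    """Minimal EKF skeleton: the estimation problem is fully described by the
    process/measurement models passed in, analogous to describing it through
    state and measurement classes in the C++ templates."""

    def __init__(self, f, F, h, H, Q, R, x0, P0):
        self.f, self.F, self.h, self.H = f, F, h, H   # models and their Jacobians
        self.Q, self.R = Q, R                         # process / measurement noise
        self.x, self.P = x0, P0                       # state mean and covariance

    def predict(self, u, dt):
        Fk = self.F(self.x, u, dt)                    # Jacobian at the current estimate
        self.x = self.f(self.x, u, dt)                # propagate the mean
        self.P = Fk @ self.P @ Fk.T + self.Q          # propagate the covariance

    def update(self, z):
        Hk = self.H(self.x)
        y = z - self.h(self.x)                        # innovation
        S = Hk @ self.P @ Hk.T + self.R               # innovation covariance
        K = self.P @ Hk.T @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ Hk) @ self.P
```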

ahmadgazar commented 3 years ago

OK, great. I think we might as well close this issue and proceed with deciding on the mathematical notation of Bloesch 2013 in this issue.

nrotella commented 3 years ago

OK, great. I think we might as well close this issue and proceed with deciding on the mathematical notation of Bloesch 2013 in this issue.

I wanted to add that I agree with the approach based on Bloesch 2013; however, that paper in particular has some unnecessarily complicated notation in my opinion, so it may be easier to follow Fallon 2014 or the Pronto paper (just the kinematic portions, ignoring the perception measurements). Either way, the approach is roughly the same, and differs from my paper (Rotella 2014) in that it robustifies the filter by removing the explicit foot position states and ignoring foot orientation entirely. We can discuss in another issue how to move forward.