machines-in-motion / mim_estimation

This package contains C++ implementations of estimation algorithms, their Python bindings, and Python prototypes.
BSD 3-Clause "New" or "Revised" License

Test the EKF implementation on Solo12 in the PyBullet simulator and in hardware experiments #9

Closed. ahmadgazar closed this issue 2 years ago.

ahmadgazar commented 3 years ago

Goal:

The aim of this issue is to test the EKF implementation in PyBullet. Since we have not yet developed a probabilistic model for contact detection, we will start with a static squatting task as an initial test.

ahmadgazar commented 3 years ago

A squatting demo has been added by @majidkhadiv here as a starting point.

nrotella commented 3 years ago

Hi, sorry I'm only just finding time to test this, but I incorporated the EKF into Majid's squatting task and immediately hit an error due to this line in ekf.py:

self.__mu_post['base_orientation'] = box_plus(R_pre, delta_x[6:9])

In the rest of the code, you maintain the base orientation state as a quaternion, but in the update step you set the post-update orientation to a rotation matrix. If this is changed to Quaternion(box_plus(R_pre, delta_x[6:9])) then it continues to run fine.
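In context, the corrected assignment would read as follows (a sketch of the fix described above; it assumes the `Quaternion` type already used elsewhere in ekf.py, e.g. pinocchio's, can be constructed from a rotation matrix):

```python
# box_plus(...) returns a rotation matrix here, so cast it back to the
# quaternion representation used for 'base_orientation' in the rest of the filter.
self.__mu_post['base_orientation'] = Quaternion(box_plus(R_pre, delta_x[6:9]))
```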

Maybe you already found this issue, but I wanted to point it out. I pushed my dev branch (hope that's ok) which has this small fix and the EKF incorporated in the squatting task, in case it helps.

nrotella commented 3 years ago

The prediction step seems to work ok (obviously drifts off very quickly when run with the simulated IMU noise), but the update step seems to have some issue. Do you have code available for the finite differencing unit test you wrote to check the Jacobians? I was going to implement this check myself, but I believe you already checked this?
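For reference, a generic finite-difference Jacobian check of the kind mentioned above could look like the sketch below (this is not the repository's actual unit test; the function name and tolerances are illustrative, and for the orientation block the perturbation should really go through the same box-plus used by the filter):

```python
import numpy as np

def check_jacobian(f, x0, J_analytic, eps=1e-6, tol=1e-4):
    """Compare an analytic Jacobian of f at x0 against central finite differences.

    f          : callable mapping an n-vector to an m-vector
    x0         : linearization point (n-vector)
    J_analytic : the m x n Jacobian computed by the EKF code at x0
    """
    n = x0.size
    m = np.atleast_1d(f(x0)).size
    J_fd = np.zeros((m, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        # Central difference along the i-th state direction.
        J_fd[:, i] = (f(x0 + dx) - f(x0 - dx)) / (2.0 * eps)
    err = np.max(np.abs(J_fd - J_analytic))
    assert err < tol, f"Jacobian mismatch: max abs error {err:.2e}"
    return err
```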

ahmadgazar commented 3 years ago

Hi @nrotella, sorry, I have been out of the office for the last week. Thank you for starting the testing. Indeed, you are correct about casting the rotation matrix to a Quaternion in the update step. I've pushed the unit test I wrote back then; hope it helps. One bug I just caught now, from a quick check of the measurement Jacobian, is in the orientation-bias contribution: the indices seem wrong, as they correspond to the acceleration bias and not the orientation bias. The column indices should change from 9:12 to 12:15. I will confirm this once I get back to the office next week.

ShahramKhorshidi commented 3 years ago

Hi, these are the plots for base position, velocity, and orientation, with the prediction step only.

[Plots: position, velocity, orientation]

MaximilienNaveau commented 3 years ago

All seems fine except maybe the x and y velocities: the output of the EKF (red line) seems extremely noisy and does not reflect the fast evolution of the curve, while for the orientation it actually does.

I am not sure where this oscillation could come from...

nrotella commented 3 years ago

The oscillation is strange, but since it appears in the actual orientation, I think it's just an artifact of the simulation. The EKF output itself looks fine - note that the scale of the noise is very small for the x and y velocities, and in the z direction the magnitude of motion dwarfs whatever noise is there. The magnitude of noise in the orientation is also very small, seems like some weird instability in the simulation itself.

Anyway, thank you for the plots. I also found that the prediction step works fine as you show (despite the weird simulation noise), however the update step quickly diverged. Are you seeing the same when you turn the update on?

ahmadgazar commented 3 years ago

@ShahramKhorshidi Thank you for the plots. The orientation captures the motion quite well and only accumulates the bias over time, as we would expect without the update step. However, the base velocities in x and y (and as a consequence the base positions) seem to diverge pretty fast. I would expect their divergence to be slower than that, since their means should in theory oscillate around zero. A sanity check, to make sure nothing is wrong with the integration model to start with, is to run the same demo without the accelerometer bias on x and y. So basically, inside the IMU simulation that @nrotella wrote, set the bias terms to zero, which means you are purely integrating the acceleration without noise. In that test I would expect no divergence of the x and y base positions and velocities (i.e., they should oscillate around zero, like the reference).
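As a concrete version of that sanity check (the parameter names below are hypothetical placeholders, not the actual PinBulletWrapper API; the point is simply to zero every simulated IMU noise and bias term before re-running the squatting demo):

```python
# Hypothetical sketch: run the prediction-only squatting demo with a noise-free
# simulated IMU, then verify that the x/y base position and velocity estimates
# oscillate around the reference instead of diverging.
imu_noise_config = {
    "accel_noise_sigma": 0.0,  # accelerometer white noise
    "gyro_noise_sigma": 0.0,   # gyroscope white noise
    "accel_bias_sigma": 0.0,   # accelerometer bias random walk
    "gyro_bias_sigma": 0.0,    # gyroscope bias random walk
}
```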

nrotella commented 3 years ago

I can take a look myself later tonight (assuming I can run the same test easily), but I suspect the position/velocity would drift much faster if the IMU noise/biases were enabled - was the test run with or without these effects? The drift we see in this test (which appears pretty small overall) might have something to do with stability of the simulation itself, since we do see some weird noise in the orientation. Either way, it sounds like the prediction is mostly working but the update step has some issues which I can help debug.

ShahramKhorshidi commented 3 years ago

Those plots were run with all IMU noise terms enabled. When I put all to zero, the orientation does not drift any more, but I get the same behavior for position and velocity as before.

ShahramKhorshidi commented 3 years ago

With all noise terms disabled: [Plots: position, velocity, orientation]

ahmadgazar commented 3 years ago

Very good, thanks. So now there is no drift in orientation, since there is no bias, as expected. However, almost the same behavior remains on the linear counterpart without bias, which still seems a bit fishy to me. Either there is a bug in the integration of the linear positions/velocities in the x-y directions (I suspect that, but double-check just in case), or something is off with the IMU frame. Try to confirm that the output of the simulated IMU is translated to the center of the base frame of the robot, since the EKF implementation assumes that the IMU frame is attached directly to the center of the base frame. I remember that @nrotella applied this transformation to the simulated IMU back then, if I am not mistaken, but double-check as well. If both cases are correct, then the only remaining explanation is that PyBullet is injecting weird energy spikes into the system somehow.

ShahramKhorshidi commented 3 years ago

Thanks for your feedback and comments. As you mentioned, there is an offset between the IMU and base frames in the wrapper class, which is accounted for in the simulated IMU values, but I think the returned values are already expressed in the IMU frame. https://github.com/machines-in-motion/bullet_utils/blob/c1d0ca135a35279959c219d37eafc22ff57fa915/src/bullet_utils/wrapper.py#L66 https://github.com/machines-in-motion/bullet_utils/blob/c1d0ca135a35279959c219d37eafc22ff57fa915/src/bullet_utils/wrapper.py#L220 For now I just assumed this offset is zero; in that case, without bias terms, all graphs fit together. Do I need to change the EKF formulation and consider this offset in the input of the EKF as well?

ShahramKhorshidi commented 3 years ago

[plots]

ahmadgazar commented 3 years ago

Great. Now, if you add the biases back to the simulated IMU, for both the linear and angular measurements, the estimates should drift slowly over time as expected, following the shape of the motion.

> Do I need to change the EKF formulation and consider this offset in the input of the EKF as well?

For now, since everything is in simulation, we can assume that the simulated sensor is attached to the center of the base frame. However, this should eventually be handled correctly, preferably in the simulated IMU in bullet_utils, as the EKF algorithm should be generic regardless of the transformation of the sensor w.r.t. the base. This will become more important in the real hardware experiments, where we have to account for this translation.

The next step, to confirm that everything is correct, is to activate the update step, which should correct the biases if I've implemented it correctly.
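For context, a standard IMU measurement model with slowly drifting additive biases (the usual form behind this kind of error-state EKF; the notation here is generic, not necessarily the exact symbols used in ekf.py) is:

```latex
\tilde{a} = R^\top(\ddot{p} - g) + b_a + n_a, \qquad
\tilde{\omega} = \omega + b_\omega + n_\omega, \qquad
\dot{b}_a = w_a, \qquad \dot{b}_\omega = w_\omega
```

where R is the base orientation, g the gravity vector, n the white measurement noise, and w the bias random-walk noise; with the update step disabled, the b terms are what produce the slow drift discussed above.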

ShahramKhorshidi commented 3 years ago

Yes, with all bias terms enabled, they diverge very slowly. [plots]

ShahramKhorshidi commented 3 years ago

When I activate the update step, the EKF values are still diverging. I've run it with different values for the measurement covariance matrix (R), since there is no reference and I guess it has to be tuned by trial and error. For the process covariance matrix (Q), I am using the formula from Solà's book and the noise-term values in the PinBulletWrapper class. [screenshot]
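For reference, the discrete-time process-noise blocks in Solà's error-state formulation (presumably the formula referred to above; the exact state ordering and whether position gets its own noise term depend on the implementation) look roughly like:

```latex
Q \approx \mathrm{diag}\big(
  \sigma_{a}^{2}\,\Delta t^{2}\,I_3,\;
  \sigma_{\omega}^{2}\,\Delta t^{2}\,I_3,\;
  \sigma_{b_a}^{2}\,\Delta t\,I_3,\;
  \sigma_{b_\omega}^{2}\,\Delta t\,I_3
\big)
```

for the velocity, orientation, accelerometer-bias, and gyroscope-bias error states respectively, with the sigmas being the continuous-time noise densities and Δt the filter time step.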

ShahramKhorshidi commented 3 years ago

Results with the update step active. [plots]

ShahramKhorshidi commented 3 years ago

One question regarding the measurement model: https://github.com/machines-in-motion/mim_estimation/blob/7345117285916ab70b1f79aa33bda34f4451b980/python/mim_estimation/ekf.py#L204 Why are we using finite differences to compute the predicted value of base_velocity? I would guess the predicted base_velocity should be the measurement model evaluated at the mean of the predicted state, which in the linearized case is Hμ?

nrotella commented 3 years ago

Hi Shahram, the documentation for this implementation should help - specifically, Section 4.2.1.2 on the measurement model of the PRONTO paper explains the measurement you've referenced (eq 23).

Your results so far look good, even with the very small amount of drift you see when the update step is enabled. Actually, in the last plot above, are you sure the orientation is roll-pitch-yaw and not yaw-pitch-roll? I would not be surprised to see the yaw drift since it's unobservable, and yaw drift would cause the base velocity x-y drift you see as well. The base position is also unobservable, so the drift there is okay. Overall I would expect the states to diverge much faster if something was wrong with the model, so it seems like the update step is doing the right thing.

nrotella commented 3 years ago

Actually, I think you might be correct in pointing out that line where predicted_base_v_i is computed - something seems incorrect to me now that I looked back at the PRONTO paper again. Section 4.2.1.2 details the computation of the actual measurement for the humanoid case using finite differencing, however our code uses this for the predicted measurement. The equation for the quadruped case in Section 4.2.2.2 is what we want for the actual measurement, while the predicted measurement for either humanoid or quadruped should always just be the current base velocity estimate v_pre (or whatever we call it). Does that make sense? We seem to be mixing them right now, using the humanoid measurement for predicted and quadruped measurement for actual.

I'm trying to check whether this change affects the performance, but I'm having some build issues since pulling; I'll update if I make progress. In the meantime, I think line 204 as quoted above should instead be:

predicted_base_v_i = self.__mu_pre['base_velocity']
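For reference, the quadruped-style leg-odometry velocity measurement (the "actual" measurement, under the assumption that the stance foot is stationary in the world; written here in generic notation, not necessarily the exact symbols used in ekf.py) is:

```latex
{}^{B}\tilde{v}_{\mathrm{base}} \;=\; -\big(\, J_f(q)\,\dot{q} \;+\; \tilde{\omega} \times p_f(q) \,\big)
```

where p_f(q) is the foot position in the base frame from forward kinematics, J_f(q) its Jacobian, and ω̃ the gyroscope reading; the corresponding predicted measurement is then simply the filter's current base-velocity estimate, as in the line above.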

ShahramKhorshidi commented 3 years ago

I made some changes to the update step and the forward kinematics, and now it seems that the EKF is working properly. First, the feet positions and velocities from this function are already expressed in the base frame, so we don't need to multiply them by R to change the frame later; I checked this with another approach. [photo]

ShahramKhorshidi commented 3 years ago

Yes, I think the two methods are mixed. I used the current base velocity estimate for the predicted base velocity and got the following results. [plots]

ShahramKhorshidi commented 3 years ago

> One question regarding the measurement model: https://github.com/machines-in-motion/mim_estimation/blob/7345117285916ab70b1f79aa33bda34f4451b980/python/mim_estimation/ekf.py#L204 Why are we using finite differences to compute the predicted value of base_velocity? I would guess the predicted base_velocity should be the measurement model evaluated at the mean of the predicted state, which in the linearized case is Hμ?
>
> Hi Shahram, the documentation for this implementation should help - specifically, Section 4.2.1.2 on the measurement model of the PRONTO paper explains the measurement you've referenced (eq 23).
>
> Your results so far look good, even with the very small amount of drift you see when the update step is enabled. Actually, in the last plot above, are you sure the orientation is roll-pitch-yaw and not yaw-pitch-roll? I would not be surprised to see the yaw drift since it's unobservable, and yaw drift would cause the base velocity x-y drift you see as well. The base position is also unobservable, so the drift there is okay. Overall I would expect the states to diverge much faster if something was wrong with the model, so it seems like the update step is doing the right thing.

I guess the problem is with the measurement model: we assume that the feet positions are stationary w.r.t. the world, but in fact the movement in the x direction is considerable compared to y and z. [Plot: foot position in world]

In the measurement covariance matrix R = diag(R_x, R_y, R_z), I used a larger value for R_x, which fixed the problem with the roll angle; as expected, the yaw angle diverges very slowly.
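Concretely, that per-axis weighting is just a diagonal measurement covariance with a larger x entry (a sketch; the numbers are illustrative, not the tuned values):

```python
import numpy as np

# Trust the x component of the foot-velocity measurement less, since the
# stance feet slide noticeably more in x than in y or z.
r_x, r_y, r_z = 1e-2, 1e-4, 1e-4   # illustrative variances
R_meas = np.diag([r_x, r_y, r_z])
```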

ShahramKhorshidi commented 3 years ago

For the last step, I need to account for the offset between the IMU and base frames in the EKF formulation, since so far I've set this value to zero. I am still thinking about how to do it properly.

nrotella commented 3 years ago

Thanks for all the detailed updates, Shahram - I agree with the modifications for the foot position/velocity frame, and the use of the estimated base velocity for the predicted base velocity in the EKF update.

Your plots look good to me - the drift in x-y position is minimal (on the order of a tenth of a millimeter over 30 seconds is nothing). As you point out, the foot slippage will cause drift in the estimates over time; of course you cannot correct the position drift because it's unobservable, but you can correct velocity drift which limits the integrated error in position. Changing the measurement covariance is the correct thing to do here, especially for walking tasks where the feet will probably slip a lot more.

Regarding the offset between IMU and base frames, I would suggest setting the IMU offset to its true value, transforming the IMU data into base frame, and running the EKF in base frame. Alternatively, you could continue to run the EKF in IMU frame and transform the results but then you will need to do more transformations. The IMU and base frames have the same angular velocity since they're fixed to the same rigid body (up to a possible orientation offset), but the acceleration must be transformed into base frame using the relative angular velocity and angular acceleration as I did here for the simulation: https://github.com/machines-in-motion/bullet_utils/blob/main/src/bullet_utils/wrapper.py#L218
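For reference, the rigid-body relation behind that transformation, sketched in numpy (r is the base-to-IMU offset, and all quantities are assumed to be expressed in the same frame, e.g. the base frame; this only loosely mirrors the linked wrapper code):

```python
import numpy as np

def accel_imu_to_base(a_imu, omega, omega_dot, r_base_to_imu):
    """Linear acceleration of the base origin from the IMU's linear acceleration,
    using the two-points-on-a-rigid-body relation:
        a_base = a_imu - omega_dot x r - omega x (omega x r)
    """
    r = np.asarray(r_base_to_imu)
    return (np.asarray(a_imu)
            - np.cross(omega_dot, r)
            - np.cross(omega, np.cross(omega, r)))
```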

ShahramKhorshidi commented 3 years ago

Thanks for the comments and feedback, Nick. One question regarding this equation: here (imu_linacc) is calculated in the world frame, but I guess the offset is expressed in the base frame, based on this documentation. So in the equation, should (self.r_base_to_imu) be replaced by (rot_base_to_world @ self.r_base_to_imu)? And is the returned value expressed in the IMU frame?

ShahramKhorshidi commented 3 years ago

Taking the offset into account and with the modification to (imu_linacc), I transformed the IMU linear acceleration into the base frame here, and I got the same result as before. [plots] As discussed with Max, maybe this is not the best way to implement it because of the finite differencing, and we will modify it later with the other approach you mentioned.

ShahramKhorshidi commented 3 years ago

I also tested the EKF on the data from Vicon. [plots]

MaximilienNaveau commented 3 years ago

Why do we have a 1 cm difference in the position? :thinking:

ShahramKhorshidi commented 3 years ago

If I have this right, the position is not observable, so there is no guarantee on it. The reason we get accurate results in simulation is that the state estimator's dynamic model is identical to the simulator's, and we are using the exact IMU noise values when tuning the covariance matrices, whereas on the real hardware we always have modeling error.

nrotella commented 3 years ago

The results on real hardware look good to me, nice work running the experiments with Vicon! Indeed, the absolute position is not observable, so the best you can do is zero it after some initial period of time (the drift you see in the first seconds). After the estimates initially "settle," the position will drift more slowly.

Maybe you can make some new plots with the initial offset subtracted? That will help show the true position estimation accuracy.

ShahramKhorshidi commented 3 years ago

Thanks for the comment. In this graph, the offset from the first few seconds of estimation has been subtracted from the predicted value. [plot]

MaximilienNaveau commented 3 years ago

OK, that sounds better, though in z we should be observable, no? With the current version of the EKF we do not have a measurement of the base position at all, but we could add the estimated base pose relative to the feet positions. That would limit the drift, no?

nrotella commented 3 years ago

The EKF does currently indirectly incorporate information about the relative base pose, however the absolute base position is unfortunately always unobservable without adding some external source of information (for example from SLAM or another localization method).

The absolute base velocity is observable through the relative pose measurement, however errors in this estimation will inevitably integrate to base pose error. Therefore I think the results above look very good overall!

The next thing to test is a longer task, especially with contact switching. The major source of error in base pose estimation comes from deciding when the feet are in contact and thus which measurements to use; the effect of the inaccurate IMU noise terms is much smaller and not worth worrying about right now.

MaximilienNaveau commented 3 years ago

I created the first C++ version of the code, with the data expressed in the base frame and IMU + kinematics used in the EKF. [Plots: base position, base velocity, base orientation (roll-pitch-yaw)]

The results in simulation look good. I am starting the implementation with the data expressed in the IMU frame, and then I will do the Vicon part.

MaximilienNaveau commented 3 years ago

Hi guys, I fused the force-based contact detection with the EKF and got this after 1 min: [Plots: base position, base velocity, base orientation (roll-pitch-yaw)]

MaximilienNaveau commented 3 years ago

I am checking the force detection thresholds... now

jviereck commented 3 years ago
MaximilienNaveau commented 3 years ago

Hi, I finally succeeded in having the full estimation pipeline in a C++ class: RobotStateEstimator. I have the Python bindings over it. Since a single threshold would classically create too much contact jitter, I used a double threshold for stability:

# Expected share of the robot's weight per end-effector.
ee_force = m * g / nb_end_effector
threshold_up = 0.8 * ee_force
threshold_down = 0.2 * ee_force

# Hysteresis: between the two thresholds the previous contact state is kept.
if force.norm() > threshold_up:
    contact = True
elif force.norm() < threshold_down:
    contact = False

[Plots: base position, base velocity, base orientation (roll-pitch-yaw), force and contact for the FL/FR/HL/HR ankles]

ShahramKhorshidi commented 3 years ago

I closed the loop in the squatting simulation using the estimated states from the EKF, and noisy IMU readings for the base angular velocity. [Plots: position, velocity, orientation]

nrotella commented 3 years ago

Hi Max and Shahram, both of your results look really nice! Having the EKF implemented in C++ with python bindings and running in closed-loop control is a big step.

Now that there is some basic contact checking implemented, can you try a simulated walking (stepping in place) task? I would suggest starting without using the estimates for closed-loop feedback. The contact force thresholds you tuned will likely need to be re-tuned for walking in order to deal with all the weird artifacts caused by impacts; I recall using a pretty high lower threshold in the past, otherwise you will get a false contact when the foot impacts the ground but before the leg is stably loaded (I expect you will see the same if you share some contact force plots for walking). Contact checking is a bit of a paradox because you cannot consider the leg in contact until it has sufficient load, yet you cannot start to load it (use it in your inverse dynamics) until it's considered to be in contact.

In order to tune the thresholds, you can run headless simulations (if possible) with different values, and just compare the estimation root-mean-squared-error versus the ground truth in simulation. You might also want to try a slightly more sophisticated contact checking algorithm which requires some time above/below the threshold before switching (for example, only set contact TRUE when the force has been consistently above the high threshold for 0.1s, or something).
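A sketch of such a time-hysteresis contact check (illustrative only; the thresholds and the 0.1 s hold time are just the placeholders from the paragraph above):

```python
class DebouncedContactDetector:
    """Double force threshold plus a minimum dwell time before switching state."""

    def __init__(self, thr_up, thr_down, hold_time=0.1, dt=0.001):
        self.thr_up, self.thr_down = thr_up, thr_down
        self.hold_steps = max(1, int(round(hold_time / dt)))
        self.contact = False
        self._count = 0

    def update(self, force_norm):
        # Count consecutive samples beyond the opposite threshold and only flip
        # the contact flag once enough of them have accumulated.
        crossing = (force_norm > self.thr_up) if not self.contact \
                   else (force_norm < self.thr_down)
        self._count = self._count + 1 if crossing else 0
        if self._count >= self.hold_steps:
            self.contact = not self.contact
            self._count = 0
        return self.contact
```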

Eventually, it would be really cool to try something like I did in my paper (https://nrotella.github.io/download/Rotella_ICRA2018.pdf), though there are other methods from Camurri et al (I think) which don't use the IMU data like I did. In general the idea would be to remove hard contact switches from your inverse dynamics constraints, and instead use "soft" ones based on likelihood of contact. I wanted to explore this more during my PhD but ran out of time, it could be an interesting topic to pick up if it's still novel :)

I don't have much time to write code or run tests myself this summer, but I'm happy to chat here if it helps (also haven't checked mattermost in a while, sorry).

ShahramKhorshidi commented 3 years ago

I ran the EKF in the squatting demo assuming that the measurement comes from only 1, 2, or 3 feet in contact (FL-FR-HL-HR). Overall, since the offsets and deviations are small, I guess the results are acceptable. Just a couple of questions: I know the number of feet in contact affects observability, but should the configuration of the feet in contact also affect the performance? For two opposite feet (front and back) in contact the results are fine, but with three feet they degrade again!?

[Plots: position, velocity, orientation]

ShahramKhorshidi commented 3 years ago

Results for simulated logical stepping in the squatting demo (changing the contact schedule every 200 ms; in each step the opposite feet, front and back, are in contact). [Plots: position, velocity, orientation]

nrotella commented 3 years ago

The most recent results for the simulated stepping look nice which is great, however I agree there's something strange happening with the previous experiments (in which you changed which contacts were used for the whole run). Indeed, observability does change with the number and configuration of contacts, but with three feet (forming a plane contact) there should be more observable states than with just two feet (forming a line contact). I don't recall all the different observability cases but they're pretty well explored in the papers we started from, if you want to check there for more details. It might be worth digging into a little more in case it exposes a bug in the implementation, but since the simulated stepping results look good, I would maybe focus on getting the code to run on the real robot instead (and dealing with the weird contact cases there).

ShahramKhorshidi commented 3 years ago

We did some experiments on the robot, with two motions: 1. Rotation of the base with all feet in contact. [Plots: position, velocity, orientation]

2. Trotting in place. [Plots: position, velocity, orientation, contact]

I tried different values for the measurement covariance matrix; the problem is with contact switching: although the contact detection is working fine, the z-position diverges very fast. Is there any way to deal with this issue? And how accurate should the estimated z-position be if we want to feed the estimator values to the controller?

nrotella commented 3 years ago

Nice work running this on the robot! It's good to see that the results with all feet in contact look okay, indeed the problem seems related to contact switching. Did you manage to get good results for a stepping task in simulation? I recall the "simulated logical stepping" looked good, but even stepping in simulation will present some of the same challenges with contact switching, so you might be able to debug these issues more easily there.

Assuming there is no issue in the code itself, it seems like the impacts during stepping are what causes the estimator to diverge so quickly. You can see how clearly the impacts show up as spikes in all the data; I assume plotting the estimated foot contact forces looks similar? Tuning the measurement noise down might help.

In addition to tuning the noise terms, I would try tuning the contact detection force threshold. If you set the threshold to a higher value then the period of contact will become shorter, but hopefully this gets rid of the effects of impact right when the feet touch the ground which might be causing the divergence. You don't necessarily want the contact detection to match the schedule, because the measurements for each foot are less reliable on touchdown and liftoff due to impacts and slip.

ShahramKhorshidi commented 3 years ago

Thanks for the feedback, Nick. I guess the problem with the simulated logical stepping in PyBullet was the estimated base velocity coming from each foot: there is a sinusoidal component in the x and y directions, and as long as we pick feet whose signals are out of phase, like "FL" and "HR", the result is fine; for other contact configurations, the measurement covariance R needs to be tuned.

[Plot: base velocity]

ShahramKhorshidi commented 3 years ago

These are the force and contact detection plots for each foot in "trotting in place". Do you mean I still need to increase the upper threshold in order to get rid of the bouncing after contact?

[Plots: force and contact for FL, FR, HL, HR]

nrotella commented 3 years ago

> Thanks for the feedback, Nick. I guess the problem with the simulated logical stepping in PyBullet was the estimated base velocity coming from each foot: there is a sinusoidal component in the x and y directions, and as long as we pick feet whose signals are out of phase, like "FL" and "HR", the result is fine; for other contact configurations, the measurement covariance R needs to be tuned.
>
> [Plot: base velocity]

Sorry I didn't get back to you sooner on these issues. Regarding the simulated stepping, I'm confused about what these plots are showing: are the oscillations a result of numerical instability in the simulation? I didn't follow the part about having to choose feet that are out of phase.