machines-in-motion / mim_estimation

This package contains C++ implementations of estimation algorithms, their Python bindings, and Python prototypes.
BSD 3-Clause "New" or "Revised" License

Add simulated IMU to pybullet #3

Closed. nrotella closed this issue 3 years ago.

nrotella commented 3 years ago

Overview

Based on the discussion over email, it appears that there is no way to get simulated IMU data natively in pybullet, as most sensor simulations are not supported (https://github.com/bulletphysics/bullet3/issues/2759).

Theory

Simulating an IMU is straightforward and was done in SL; we may be able to reuse that C++ code if it still exists, or write our own simulator. The simulated IMU acceleration is computed as:

$$\tilde{a} = R\,(a_{\mathrm{IMU}} + g) + b_a + w_a$$

where $R$ rotates the true acceleration of the IMU plus gravity into the IMU frame (assumed to be the same as the base frame, or at a known rotation offset), $b_a$ is the accelerometer bias, which has random walk dynamics, and $w_a$ is accelerometer noise. The true acceleration of the IMU is computed from the base acceleration using the known offset vector $r$ as follows:

$$a_{\mathrm{IMU}} = a_{\mathrm{base}} + \alpha \times r + \omega \times (\omega \times r)$$

where $\omega$ and $\alpha$ are the angular velocity and angular acceleration of the base, respectively. All quantities above are in the world frame. If the base twist is available in bullet, we should be able to compute the linear and angular accelerations of the base numerically.
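For instance, here is a minimal sketch of that numerical differentiation plus the rigid-body transfer, using pybullet's `getBaseVelocity`; the helper names and the fixed-step bookkeeping are illustrative, not existing code:

```python
import numpy as np
import pybullet as p

def base_accelerations(robot_id, prev_linvel, prev_angvel, dt):
    # Finite-difference the base twist reported by pybullet over one step of
    # length dt; prev_linvel/prev_angvel are the previous step's velocities.
    linvel, angvel = (np.array(v) for v in p.getBaseVelocity(robot_id))
    lin_acc = (linvel - prev_linvel) / dt   # base linear acceleration (world frame)
    ang_acc = (angvel - prev_angvel) / dt   # base angular acceleration (world frame)
    return lin_acc, ang_acc, linvel, angvel

def imu_true_acceleration(lin_acc, ang_acc, angvel, r):
    # a_imu = a_base + alpha x r + omega x (omega x r), all in world frame,
    # where r is the world-frame offset vector from the base to the IMU.
    return lin_acc + np.cross(ang_acc, r) + np.cross(angvel, np.cross(angvel, r))
```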

The simulated IMU angular velocity is simpler:

$$\tilde{\omega} = R\,\omega + b_g + w_g$$

Again, the simulated noisy angular velocity has bias ($b_g$) and noise ($w_g$) terms. The bias terms follow random walk models:

$$\dot{b}_a = w_{b_a}, \qquad \dot{b}_g = w_{b_g}$$

Note that I threw together the above LaTeX equations pretty quickly, so I was sloppy with notation and may have made a mistake, but I wanted to illustrate the general idea of simulating the IMU data.
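As a concrete illustration of the full model, here is a minimal per-step simulation sketch. The sigma values are arbitrary placeholders rather than datasheet numbers, and all names are mine:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, 9.81])  # sign follows the "plus gravity" convention above

def simulate_imu_step(R_world_to_imu, a_imu_world, omega_world, b_acc, b_gyro, dt,
                      sigma_acc=1e-2, sigma_gyro=1e-3,
                      sigma_b_acc=1e-4, sigma_b_gyro=1e-5):
    # Random-walk bias dynamics, discretized: b_{k+1} = b_k + w_b * sqrt(dt).
    b_acc = b_acc + sigma_b_acc * np.sqrt(dt) * np.random.randn(3)
    b_gyro = b_gyro + sigma_b_gyro * np.sqrt(dt) * np.random.randn(3)
    # Accelerometer: rotate (true IMU acceleration + gravity) into the IMU frame,
    # then add bias and white noise.
    acc_meas = R_world_to_imu @ (a_imu_world + GRAVITY) + b_acc + sigma_acc * np.random.randn(3)
    # Gyroscope: rotate the angular velocity into the IMU frame, add bias and noise.
    gyro_meas = R_world_to_imu @ omega_world + b_gyro + sigma_gyro * np.random.randn(3)
    return acc_meas, gyro_meas, b_acc, b_gyro
```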

Considerations

Note that it appears others using pybullet want an IMU "plugin" (https://www.google.com/search?channel=fs&client=ubuntu&q=pybullet+imu), so if there's a way to write one that is robot-agnostic, it would benefit the community. However, I haven't read enough about pybullet to know whether this is possible.

nrotella commented 3 years ago

Sorry @MaximilienNaveau, I didn't see the option to add this to the estimation project; it seems I don't have permission to modify anything in the right-hand column on this page (or maybe I'm missing something).

MaximilienNaveau commented 3 years ago

Thanks a lot, Nick! This simulation must be added to https://github.com/machines-in-motion/bullet_utils. This package is our interface to pybullet through a simple wrapping class, which could definitely use pybullet to simulate an IMU located wherever we need.

MaximilienNaveau commented 3 years ago

@nrotella you should be able to modify the right column. Let me know if you still can't.

nrotella commented 3 years ago

> Thanks a lot, Nick! This simulation must be added to https://github.com/machines-in-motion/bullet_utils. This package is our interface to pybullet through a simple wrapping class, which could definitely use pybullet to simulate an IMU located wherever we need.

Do you mean in this wrapper module? https://github.com/machines-in-motion/bullet_utils/blob/main/src/bullet_utils/wrapper.py#L126-L133

I see some relevant code already in the wrapper class, so it seems like `get_imu_accel` and `get_imu_angvel` methods would fit right in here. This wrapper appears robot-independent, such that these `get_imu` methods would pass through to the derived classes in robot_properties_solo (like this one: https://github.com/open-dynamic-robot-initiative/robot_properties_solo/blob/master/src/robot_properties_solo/quadruped12wrapper.py#L22), is that all correct?

The only thing about IMU simulation which would be robot-dependent is the location of the IMU. We can start by assuming it has a fixed pose offset relative to the base, which the derived robot properties class would set from config; does that sound reasonable (without me really knowing how anything works here)?
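As a rough sketch of what such pass-through methods might look like (none of these attribute names exist in bullet_utils; `rot_base_to_imu` and `r_base_to_imu` are assumed to be set from the robot properties config):

```python
import numpy as np
import pybullet as p

class ImuMethodsSketch:
    # Assumed attributes: self.robot_id (pybullet body id), self.rot_base_to_imu
    # (3x3 rotation), self.r_base_to_imu (offset in base frame), set from config.

    def _world_to_imu_rotation(self):
        _, base_quat = p.getBasePositionAndOrientation(self.robot_id)
        rot_base_to_world = np.array(p.getMatrixFromQuaternion(base_quat)).reshape(3, 3)
        return self.rot_base_to_imu @ rot_base_to_world.T

    def get_imu_angvel(self):
        # Noise-free gyroscope: base angular velocity expressed in the IMU frame.
        _, angvel = p.getBaseVelocity(self.robot_id)
        return self._world_to_imu_rotation() @ np.array(angvel)

    # get_imu_accel would combine the finite-differenced base accelerations and
    # the rigid-body transfer from the theory section, then add gravity, bias,
    # and noise as in the measurement model above.
```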

If you think it's useful to have a very generic IMU sensor class which allows any number of sensors to be added to any link, then it would take more thought.

jviereck commented 3 years ago

> Do you mean in this wrapper module? https://github.com/machines-in-motion/bullet_utils/blob/main/src/bullet_utils/wrapper.py#L126-L133

Yes.

> I see some relevant code already in the wrapper class, so it seems like `get_imu_accel` and `get_imu_angvel` methods would fit right in here. This wrapper appears robot-independent, such that these `get_imu` methods would pass through to the derived classes in robot_properties_solo (like this one: https://github.com/open-dynamic-robot-initiative/robot_properties_solo/blob/master/src/robot_properties_solo/quadruped12wrapper.py#L22), is that all correct?

Correct.

> The only thing about IMU simulation which would be robot-dependent is the location of the IMU. We can start by assuming it has a fixed pose offset relative to the base, which the derived robot properties class would set from config; does that sound reasonable (without me really knowing how anything works here)?

That sounds good!

What you could do is add a new `imu_base_offset` argument to the constructor, which defaults to None. None would indicate the robot does not simulate any IMU.
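For example (a minimal sketch; the real wrapper constructor takes additional arguments, which are elided here, and all names are illustrative):

```python
class PinBulletWrapperSketch:
    def __init__(self, robot_id, imu_base_offset=None):
        self.robot_id = robot_id
        # None means this robot does not simulate an IMU; otherwise this is the
        # fixed pose offset of the IMU relative to the base.
        self.imu_base_offset = imu_base_offset

    def has_imu(self):
        return self.imu_base_offset is not None
```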

> If you think it's useful to have a very generic IMU sensor class which allows any number of sensors to be added to any link, then it would take more thought.

I would start with a specific implementation for now (that is, at most one IMU with a fixed offset). If we need more IMUs later, I think we can generalize the code then as needed.


In terms of API, these are the measurements we currently get from the IMU we are using: accelerometer, gyroscope, attitude, linear_acceleration. So it would be good if the IMU API on the wrapper provided similar or the same readings.

FYI, the code reading the values for solo12 is here: https://github.com/open-dynamic-robot-initiative/blmc_robots/blob/master/src/solo12.cpp#L181

majidkhadiv commented 3 years ago

BTW, if you guys think that simulating an IMU in pybullet is time-consuming and not worth it, we have some data from experiments on Solo12 or Bolt that can be used for testing the estimator.

MaximilienNaveau commented 3 years ago

I think we should create a simple class that would take the parent joint and the location of the IMU in the parent joint frame, and then compute the IMU data as Julian describes it. Then in the wrapper you could call `imu_id = wrapper.add_imu(joint_id, SE3_imu_location_in_parent_frame)` and then `data = wrapper.get_imu_data(imu_id)`, where `data` would contain:

  • accelerometer
  • gyroscope
  • unbiased acceleration
  • attitude
  • ... WDYT?
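A rough sketch of that API shape, assuming a small data container; every name here is illustrative rather than existing bullet_utils code:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class ImuData:
    accelerometer: np.ndarray          # biased, noisy specific force
    gyroscope: np.ndarray              # biased, noisy angular velocity
    unbiased_acceleration: np.ndarray  # acceleration without bias/noise
    attitude: np.ndarray               # orientation quaternion

class ImuRegistrySketch:
    def __init__(self):
        self._imus = []

    def add_imu(self, joint_id, se3_imu_in_parent):
        # Register one IMU at a fixed placement in the parent joint frame.
        self._imus.append((joint_id, se3_imu_in_parent))
        return len(self._imus) - 1  # imu_id

    def get_imu_data(self, imu_id):
        joint_id, placement = self._imus[imu_id]
        # Placeholder values; a real implementation would evaluate the
        # measurement models from the theory section at this placement.
        zero = np.zeros(3)
        return ImuData(zero, zero, zero, np.array([0.0, 0.0, 0.0, 1.0]))
```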

nrotella commented 3 years ago

> BTW, if you guys think that simulating an IMU in pybullet is time-consuming and not worth it, we have some data from experiments on Solo12 or Bolt that can be used for testing the estimator.

I think it's definitely worth spending the time to simulate any sensors on Solo; having an accurate simulator is always a worthy goal in my opinion :) But real data will also be very useful and should be used after validating in simulation.

nrotella commented 3 years ago

> I think we should create a simple class that would take the parent joint and the location of the IMU in the parent joint frame, and then compute the IMU data as Julian describes it. Then in the wrapper you could call `imu_id = wrapper.add_imu(joint_id, SE3_imu_location_in_parent_frame)` and then `data = wrapper.get_imu_data(imu_id)`, where `data` would contain:
>
>   • accelerometer
>   • gyroscope
>   • unbiased acceleration
>   • attitude
>   • ... WDYT?

I think this is a good approach, and I could try writing it myself starting today if that's okay - it seems simple enough to get me familiar with bullet and Pinocchio's kinematics.

Regarding the quantities measured by the IMU, I should take a look at the datasheet for your IMU to understand them better, but I think we only really need to simulate the first two quantities. I assume that "unbiased acceleration" lets you calibrate away the accelerometer offsets when stationary? This should be handled by the filter automatically.

The IMU attitude is a tricky one to simulate as the IMU is running its own orientation estimator onboard by integrating the gyro and using the accelerometer (and magnetometer if present) to correct drift. Of course we can just pass through the actual orientation of the IMU from the simulator, but I'm not sure that's useful. Since the base state estimation will provide attitude (orientation) itself, is there any reason you might need the IMU onboard attitude?
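For reference, the kind of onboard fusion being described is roughly a complementary filter; here is a toy sketch (gains and sign conventions are assumptions, not the actual onboard algorithm):

```python
import numpy as np

def complementary_filter_step(roll, pitch, gyro, acc, dt, alpha=0.98):
    # Propagate roll/pitch with the gyro (small-angle, ignoring Euler-rate coupling).
    roll_gyro = roll + gyro[0] * dt
    pitch_gyro = pitch + gyro[1] * dt
    # Roll/pitch implied by the accelerometer-measured gravity direction
    # (only meaningful when the IMU is close to stationary).
    roll_acc = np.arctan2(acc[1], acc[2])
    pitch_acc = np.arctan2(-acc[0], np.sqrt(acc[1] ** 2 + acc[2] ** 2))
    # Blend: trust the gyro at high frequency, the accelerometer at low frequency.
    return (alpha * roll_gyro + (1.0 - alpha) * roll_acc,
            alpha * pitch_gyro + (1.0 - alpha) * pitch_acc)
```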

MaximilienNaveau commented 3 years ago

The attitude on-board is computed at a much higher rate and is really well tuned. I think it's a very good source of information for roll and pitch (yaw is trickier to handle, as it starts at zero whenever the IMU is turned on, which is logical since it is not observable). I would input these into the filter, if we can, to increase the precision and stability (if applicable).

jviereck commented 3 years ago

> Regarding the quantities measured by the IMU, I should take a look at the datasheet for your IMU to understand them better,

Here is the IMU we are using. https://github.com/open-dynamic-robot-initiative/open_robot_actuator_hardware/blob/master/mechanics/quadruped_robot_12dof_v1/README.md#inertia-measurement-unit

nrotella commented 3 years ago

> The attitude on-board is computed at a much higher rate and is really well tuned. I think it's a very good source of information for roll and pitch (yaw is trickier to handle, as it starts at zero whenever the IMU is turned on, which is logical since it is not observable). I would input these into the filter, if we can, to increase the precision and stability (if applicable).

It makes sense to consider using the attitude since it's integrated onboard at a high rate, but typically these approaches do their own attitude integration because the IMU attitude doesn't provide any additional information. The onboard algorithm usually takes advantage of the fact that roll/pitch can be observed from the accelerometer-measured gravity vector when the IMU is stationary, but for a dynamic robot this may not work well. In contrast, roll/pitch are made observable in the base estimator through the kinematic foot position measurements, which remain valid even during dynamic movement.
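To make the caveat concrete: under the measurement model above, tilt and linear acceleration are indistinguishable from a single accelerometer reading, e.g.:

```python
import numpy as np

g = np.array([0.0, 0.0, 9.81])
phi = np.radians(30.0)
# World-to-IMU rotation for a 30 degree roll about x.
R_roll = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(phi), np.sin(phi)],
                   [0.0, -np.sin(phi), np.cos(phi)]])

f_static_tilted = R_roll @ g          # case 1: tilted IMU, zero acceleration
a_lateral = R_roll @ g - g            # case 2: level IMU with this acceleration
f_level_accelerating = a_lateral + g  # measurement model with R = identity
print(np.allclose(f_static_tilted, f_level_accelerating))  # True: same reading
```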

Certainly it's worth comparing the onboard attitude with base estimation and Vicon to see if it makes sense to incorporate it. Have you done any comparisons of the IMU attitude and Vicon for a dynamic task?

MaximilienNaveau commented 3 years ago

We did not do the test but it would be worth it. Though I am not sure we have a lot of rolling and pitching in our "dynamic motions". @majidkhadiv @jviereck do you guys have logs on this?

jviereck commented 3 years ago

I recorded static walks and jumps. The base was not rotating in either case.

MaximilienNaveau commented 3 years ago

It could still be interesting, as the acceleration is not zero... Where are those logs?

jviereck commented 3 years ago

The data (movies + data) for the jumps and static walks is in rsec: https://rsec-motion-group.is.localnet/experiments/16/

nrotella commented 3 years ago

> It could still be interesting, as the acceleration is not zero... Where are those logs?

Yes exactly - the roll/pitch correction in the onboard attitude estimation should come from periods of low linear acceleration, which for a walking robot I think is not so valid. In particular, contact switches create ugly accelerations which could also corrupt the estimation. But thank you @jviereck for linking the IMU datasheet, I will take a look. I can tell you that I haven't ever seen anyone use the IMU's onboard attitude in base estimation, but it's not a bad idea to benchmark it in case it's useful generally.

nrotella commented 3 years ago

I wrote a simple simulated base IMU with noise terms in the bullet wrapper module over the weekend, just haven't had the chance to push and open a PR, sorry. I'll try to do that tomorrow.

ahmadgazar commented 3 years ago

Thanks a lot for that. Gonna check that today.

ahmadgazar commented 3 years ago

@nrotella shall we close this issue since we merged the PR?