google-deepmind / mujoco

Multi-Joint dynamics with Contact. A general purpose physics simulator.
https://mujoco.org
Apache License 2.0
8.07k stars 805 forks

Accessing Robot’s Front Grid Terrain Data for Dynamic Simulations in MuJoCo #1843

Closed Josh00-Lu closed 1 month ago

Josh00-Lu commented 2 months ago

Dear MuJoCo Team,

I am seeking guidance on how to capture and utilize grid-based terrain data as observation in MuJoCo, similar to the methods demonstrated in a recent video here and discussed in academic papers like this one and DeepMind's research on Emergence of Locomotion Behaviours in Rich Environments.


Specifically, I am interested in understanding how to programmatically obtain terrain sampling points for the area in front of a robot, with the sampling area dynamically rotating according to the robot's orientation. This involves capturing height data of the terrain which adjusts as the robot moves and turns within the environment.

Could you advise if there are specific sensors or techniques within MuJoCo that facilitate this type of data capture? If not directly available, could you provide guidance or suggestions on how this can be implemented using MuJoCo’s existing tools?

I have attempted to reach out to the authors of relevant publications but have not received feedback, so I am turning to you for expert advice.

Thank you for your support.

Best regards,

Balint-H commented 2 months ago

Hello,

A robust way to do this would be with ray cast calls. You can use the rangefinder sensor component if you'd like the sensing to be attached to the robot. See the end of this thread: https://github.com/google-deepmind/mujoco/discussions/688
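For reference, attaching a rangefinder in MJCF takes a site plus a sensor entry; the ray is cast along the site's positive Z axis, and the reading is the distance to the first surface (-1 if nothing is hit). A minimal sketch with illustrative names, tilting the site's Z axis down and ahead of the body:

```xml
<mujoco>
  <worldbody>
    <body name="torso" pos="0 0 1">
      <geom type="sphere" size="0.1"/>
      <!-- The rangefinder ray is cast along this site's +Z axis;
           zaxis tilts it to point down and ahead of the body. -->
      <site name="terrain_probe" pos="0.3 0 0" zaxis="0.5 0 -1"/>
    </body>
    <geom type="plane" size="5 5 0.1"/>
  </worldbody>
  <sensor>
    <rangefinder name="probe" site="terrain_probe"/>
  </sensor>
</mujoco>
```

For a sensing grid like the one discussed below, you would replicate the site/sensor pair once per grid point.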

If instead you want it to come from some idealized, non-attached point on the body (e.g. independent of the frontal or sagittal tilt of the root), you have two options. You can still use a rangefinder array, but attach it to a site in the world and move it around according to your criteria; or you can manually call mj_ray, which returns the distance along the ray. To reproduce something like your example, define a source position for your rays in front of the robot, oriented according to its heading, then perform the ray casts (or query the rangefinder readings) and subtract each ray's length from the height of its source to get the terrain height at each queried point.

You are probably not using the C# bindings for MuJoCo, but here's an example that I used to get the raycast query points in C#:


  private Vector3 offset;  // Position offset of the grid centre compared to the biped root (in local coordinates to the biped)

  private int height;  // Number of rows for the sensing grid

  private int width;  // Number of columns for the sensing grid

  private float sizeScale;  // Scales the grid size in world space

  private Transform sourceKinematics;  // The transform of the biped's root

  Matrix4x4 UprightMatrix {  // The 4 by 4 transformation matrix of the centre of the ray cast grid
    get
    {
      var oldEul = sourceKinematics.rotation.eulerAngles;
      var newEul = new Vector3(0f, oldEul.y, 0f);  // Only keep rotation in the transverse plane (yaw)

      return Matrix4x4.TRS(sourceKinematics.position, Quaternion.Euler(newEul), Vector3.one);  // Transform matrix with the constrained rotation
    }
  }

  private IEnumerable<Vector3> RayOrigins {
    get
    {
      var mat = UprightMatrix;
      for (int h = 0; h < height; h++) {
        for (int w = 0; w < width; w++) {
          yield return mat.MultiplyPoint3x4(offset + new Vector3(  // Convert point from local space to global space
              (w - (width - 1) / 2f) * sizeScale,
              0f,
              (h - (height - 1) / 2f) * sizeScale)); 
        }
      }
    }
  }

Then you can perform ray casts from these origins.
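If you are on the Python bindings, the same grid construction can be sketched with NumPy. This is a loose, hypothetical translation of the C# above rather than MuJoCo API code: `root_pos` and `yaw` would come from your robot's root body in mjData, and the heights would then come from ray casts launched at these origins.

```python
import numpy as np

def grid_ray_origins(root_pos, yaw, offset, height, width, size_scale):
    """World-frame grid of ray-cast origins in front of the robot.

    The grid is centred at root_pos plus offset (offset expressed in the
    robot's yaw-only frame) and rotated about the vertical axis by yaw,
    so it follows the robot's heading but ignores pitch and roll.
    """
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])  # yaw-only rotation matrix
    rows = (np.arange(height) - (height - 1) / 2.0) * size_scale
    cols = (np.arange(width) - (width - 1) / 2.0) * size_scale
    local = np.stack(np.meshgrid(cols, rows, indexing="xy"), axis=-1)  # (H, W, 2)
    local3 = np.concatenate([local, np.zeros((height, width, 1))], axis=-1)
    return root_pos + (local3 + offset) @ rot.T  # (H, W, 3) origins
```

Each returned point would then be the `pnt` argument of an mj_ray call with a downward `vec`.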

These approaches will work for all collision geometries. However, if you work with height fields only, you can get away with querying the hfield data directly (interpolating as needed) at the relevant points; it is stored in mjModel's hfield_data field: https://mujoco.readthedocs.io/en/latest/APIreference/APItypes.html#mjmodel
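If you go the hfield route, a bilinear lookup over the height-field array might look like the sketch below. Assumptions to verify against your model: hfield_data stores normalized values in [0, 1] in row-major order (nrow by ncol), rows spanning the y extent and columns the x extent, with elevation scaled by the hfield's size[2]; (x, y) here are coordinates in the hfield's local frame, centred on the field.

```python
import numpy as np

def hfield_height(hfield_data, nrow, ncol, size_x, size_y, elevation, x, y):
    """Bilinearly interpolated terrain height at local (x, y).

    hfield_data: flat array of nrow*ncol normalized heights in [0, 1].
    size_x, size_y: half-extents of the field; elevation: vertical scale.
    """
    grid = np.asarray(hfield_data).reshape(nrow, ncol)
    # Map local coordinates to fractional grid indices.
    fx = (x + size_x) / (2 * size_x) * (ncol - 1)
    fy = (y + size_y) / (2 * size_y) * (nrow - 1)
    c0 = int(np.clip(np.floor(fx), 0, ncol - 2))
    r0 = int(np.clip(np.floor(fy), 0, nrow - 2))
    tx, ty = fx - c0, fy - r0
    # Interpolate along x on the two bracketing rows, then along y.
    top = grid[r0, c0] * (1 - tx) + grid[r0, c0 + 1] * tx
    bot = grid[r0 + 1, c0] * (1 - tx) + grid[r0 + 1, c0 + 1] * tx
    return (top * (1 - ty) + bot * ty) * elevation
```

Note this only sees the height field itself, so the robot's own geometry never gets in the way.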

Josh00-Lu commented 2 months ago


Thank you for your suggestion! Are there any methods to exclude certain geoms and let the ray pass through them, other than setting their alpha to 0? I am also interested in the height of the points directly below the robot, but a rangefinder above the robot will cast rays onto the robot's own surface rather than through it to the ground.

Balint-H commented 2 months ago

With manual ray casts (mj_ray, not the rangefinder sensor) you can specify which geom groups (set per geom in the XML via the group attribute) count as valid intersections.

https://mujoco.readthedocs.io/en/latest/APIreference/APIfunctions.html#mj-ray

The hfield array method also doesn't have this issue.