google-deepmind / mujoco

Multi-Joint dynamics with Contact. A general purpose physics simulator.
https://mujoco.org
Apache License 2.0

Streaming robot joint encoder data into simulation. Mocap. #1395

Closed: v-r-a closed this issue 6 months ago

v-r-a commented 6 months ago

Hi,

I'm a student, and I'm trying to use MuJoCo for research. I'm looking for some help with streaming robot joint encoder data into a simulation (for visualisation). I am using MuJoCo in C/C++.

Here is a brief explanation of my setup:

  1. I have a simulation running in MuJoCo.
  2. The robot motion from MuJoCo is sent to a physical robot (real-time).
  3. I have some feedback available from the robot: joint encoders and the absolute orientation of the end-effectors. I wish to construct a visualisation of the actual robot (say, beside the simulated robot) in MuJoCo.

I am aware of mocap bodies. However, since they are designed to be children of the world body, one needs to update their global position and orientation, whereas I would like to exploit the model's tree structure. My current solution (sketched below):

  1. Create a separate mjData for the physical robot and update its kinematics with the feedback I have.
  2. Compute the global position and orientation of each body (using mj_kinematics() / mj_forward()) and use them to update the mocap bodies that visualise the physical robot in the simulation.
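For concreteness, a minimal sketch of that workaround in C; the names (m_robot, d_feedback, d_scene, the encoder array and the body-to-mocap mapping) are placeholders for illustration, not code from the actual project:

```c
#include <mujoco/mujoco.h>

// Sketch of the workaround described above. The mapping from robot body
// index to mocap index in the visualised scene ('mocap_id_of_body') is
// hypothetical and has to be maintained by hand, which is exactly the
// drawback being discussed.
void update_mocap_from_encoders(const mjModel* m_robot, mjData* d_feedback,
                                mjData* d_scene,
                                const double* encoders, int nenc,
                                const int* mocap_id_of_body) {
  // write the measured joint angles into the feedback-only mjData
  // (assumes encoder i corresponds to joint i of the robot model)
  for (int j = 0; j < nenc; j++) {
    d_feedback->qpos[m_robot->jnt_qposadr[j]] = encoders[j];
  }
  mj_kinematics(m_robot, d_feedback);  // global body poses, no dynamics

  // copy every body pose into the corresponding mocap body of the scene
  for (int b = 1; b < m_robot->nbody; b++) {
    int mid = mocap_id_of_body[b];
    mju_copy3(d_scene->mocap_pos + 3*mid, d_feedback->xpos + 3*b);
    mju_copy4(d_scene->mocap_quat + 4*mid, d_feedback->xquat + 4*b);
  }
}
```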

The main issue with this method is that if I update the robot model, then I have to update the mocap bodies manually in the XML.

Is there any other better way?

yuvaltassa commented 6 months ago

Hi @v-r-a,

Your question is confusing, but I am not sure what it is you are misunderstanding. Here are some tips that may be relevant. If this doesn't solve your issue, please clarify.

  1. mocap bodies are children of the world, but can have articulated children of their own. The root of a robot can be a mocap body.
  2. You don't need to write the mocap values in the XML; you can write into mjData.mocap_{pos,quat}. This is the point of mocap body poses: they are user-settable at runtime.
  3. That said, mocap bodies are meant to be set by the user while the physics is being stepped, whereas you are not stepping this model at all. Why do you need mocap bodies? You can just write your pos/quat into the relevant freejoint qpos indices, call mj_forward and render (see the sketch below).
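A minimal sketch of point 3, assuming a model whose first joint is the robot's freejoint; the argument names are placeholders for the feedback data:

```c
#include <mujoco/mujoco.h>

// Write the measured base pose and joint angles directly into qpos of a
// non-stepped mjData, then run forward kinematics before rendering.
void set_pose_from_feedback(const mjModel* m, mjData* d,
                            const double base_pos[3], const double base_quat[4],
                            const double* joints, int njoints) {
  // a freejoint occupies 7 qpos entries: 3 position + 4 quaternion
  int free_adr = m->jnt_qposadr[0];          // assumes joint 0 is the freejoint
  mju_copy3(d->qpos + free_adr, base_pos);
  mju_copy4(d->qpos + free_adr + 3, base_quat);

  // remaining hinge/slide joints: one qpos entry each
  // (assumes joints 1..njoints are the actuated joints, in encoder order)
  for (int j = 0; j < njoints; j++) {
    d->qpos[m->jnt_qposadr[1 + j]] = joints[j];
  }

  mj_forward(m, d);   // or mj_kinematics(m, d) if only body poses are needed
  // ...then update the scene and render as usual
}
```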

If any of the above clarified something that you think should be documented better, please let us know.

v-r-a commented 6 months ago

Let me try to explain with reference to the following image:

  1. Robot 1 is being simulated (physics is being stepped). The motion of this robot is sent to a physical robot.
  2. Robot 2 is a dummy (all links are mocap bodies) representing the pose of the physical robot based on the encoder data and an IMU on the torso. (I do not want to do physics stepping with this robot at the moment.)

The way I implemented (2) is that I made each robot link a mocap body (by manually breaking the model tree), each a child of the world body. Now, I do not have the pos and quat of these links, so I computed them using the encoder data + forward kinematics. A separate instance of mjData was created for these calculations.

What is a good way to implement (2)?

[Attached image: WhatsApp Image 2024-02-03 at 14 40 06]

yuvaltassa commented 6 months ago

I addressed this above.

v-r-a commented 6 months ago

> If you want to do this you can write into mocap_{pos, quat}

I am writing into mocap_{pos, quat}, but I do not have these values readily available; what I do have from the experiment are joint coordinates. I was wondering if I could use them directly (e.g. by writing them into d->qpos).

> Why do you want to do this? I don't understand why you think this is a sensible thing to do. Perhaps if you explained that I could offer more help.

I want to visually compare the motion of the simulated robot and the actual robot in real time. It would be good for a demonstration, and more intuitive than looking at joint-error plots.

I don't know if mocap bodies (with collisions turned off) are appropriate for this application. I first thought of making an XML with two humanoids, like 22_humanoids. But as I understand it, physics then runs for all the robots.

yuvaltassa commented 6 months ago

Okay, but you already have a non-simulating model that you write your encoder data into, right? Why aren't you visualizing this? You can just read mjData.xpos from both models if you want to plot position errors.
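For instance, a minimal sketch of such a comparison, assuming both mjData instances were created from the same mjModel so that body indices line up:

```c
#include <mujoco/mujoco.h>
#include <stdio.h>

// Compare global body positions between the simulated mjData and the
// encoder-driven mjData (after mj_forward/mj_kinematics has been called
// on both).
void print_position_errors(const mjModel* m,
                           const mjData* d_sim, const mjData* d_real) {
  for (int b = 1; b < m->nbody; b++) {   // skip the world body (index 0)
    mjtNum err[3];
    mju_sub3(err, d_sim->xpos + 3*b, d_real->xpos + 3*b);
    const char* name = mj_id2name(m, mjOBJ_BODY, b);
    printf("%-20s position error: %g m\n",
           name ? name : "(unnamed)", mju_norm3(err));
  }
}
```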

I still don't understand the purpose of the 3rd model with the mocap bodies...

v-r-a commented 6 months ago

Pardon me for the confusion, and I appreciate your patience! Let me try one last time, afresh.

  1. I want to see only two robots in the scene. The model is available here: biped.zip
  2. Robot 1 (torso link as root) is being simulated.
  3. Robot 2 (not really a robot, just a bunch of mocap bodies for visualisation) has to follow the pose of a physical robot.
  4. What I wish is that the pose of Robot 2 were directly updatable using the floating-base pose + joint coordinates, instead of every body being a floating mocap body. Since mocap bodies cannot be arranged in a tree structure, I am currently following a roundabout method to update their pose (pos and quat). Also, if the robot CAD changes, I have to do some manual work to reconnect the mocap bodies in the XML.
  5. Is there a better way to model Robot 2 in the XML file, rather than visualising it through a bunch of mocap bodies?

yuvaltassa commented 6 months ago

We are somehow talking at cross purposes.

I will also try again. Why do you think you need to break up the kinematics of "Robot 2"? Why is Robot 2 not just a copy of Robot 1, where instead of simulating you write the values of your base pose and joint encoders into robot_2_data.qpos, call mj_forward and visualize?
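A sketch of what that loop could look like; read_robot_feedback is a hypothetical helper standing in for whatever receives the encoder/IMU data:

```c
#include <mujoco/mujoco.h>

// Hypothetical helper that fills qpos with the measured base pose and
// joint encoder values (declaration only; the implementation depends on
// the robot's communication layer).
extern void read_robot_feedback(mjtNum* qpos, int nq);

// Robot 2 is just a second mjData for the same model: one copy is stepped,
// the other only gets forward kinematics.
void update_both(const mjModel* m, mjData* robot_1_data, mjData* robot_2_data) {
  // simulated robot: normal physics step
  mj_step(m, robot_1_data);

  // mirror of the physical robot: overwrite qpos with measured values,
  // then compute kinematics without stepping
  read_robot_feedback(robot_2_data->qpos, m->nq);
  mj_forward(m, robot_2_data);

  // ...then visualise both (e.g. side by side, see the end of this thread)
}
```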

yuvaltassa commented 6 months ago

Okay I think I am guessing the answer, you have put both your robots in the same model, is that it? So you have to call mj_step, but you don't want Robot 2 to move? Is that the situation?

v-r-a commented 6 months ago

> I will also try again. Why do you think you need to break up the kinematics of "Robot 2"? Why is Robot 2 not just a copy of Robot 1, where instead of simulating you write the values of your base pose and joint encoders into robot_2_data.qpos, call mj_forward and visualize?

This is exactly what I want to do, but how? Can you show some pseudocode with the relevant data structures and function calls?

Can we have robot_1_data and robot_2_data in the same scene and only do mj_step(m, robot_1_data)? The scn structure seems to be bound to mjData, and mjr_render() is bound to scn. I probably have some misunderstanding here.

v-r-a commented 6 months ago

> Okay I think I am guessing the answer, you have put both your robots in the same model, is that it? So you have to call mj_step, but you don't want Robot 2 to move? Is that the situation?

Yes

yuvaltassa commented 6 months ago

This will work:

  1. Robot 2 should be a copy of Robot 1, except that you should
    • Disable all collisions between Robot 2 and anything (contype=conaffinity=0).
    • Replace the free joint with a mocap body.
    • Give all the other joints very high armature, say 1e6 to 1e9; the exact value doesn't matter.
  2. Now write the base pose into the mocap body and the joint values into qpos (see the sketch below). The high armature will keep those joints frozen.
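A sketch of step 2 under these assumptions; the body name "robot2_root" and the r2_joint_id index array are placeholders for the combined model:

```c
#include <mujoco/mujoco.h>

// Robot 2's root is a mocap body and its joints are frozen by very high
// armature, so mj_step leaves it where the user put it while Robot 1 is
// simulated normally.
void drive_robot2(const mjModel* m, mjData* d,
                  const double base_pos[3], const double base_quat[4],
                  const double* joints, const int* r2_joint_id, int njoints) {
  // base pose goes into the mocap fields of Robot 2's root body
  int root = mj_name2id(m, mjOBJ_BODY, "robot2_root");
  int mid = m->body_mocapid[root];
  mju_copy3(d->mocap_pos + 3*mid, base_pos);
  mju_copy4(d->mocap_quat + 4*mid, base_quat);

  // joint encoder values go directly into Robot 2's qpos entries
  for (int j = 0; j < njoints; j++) {
    d->qpos[m->jnt_qposadr[r2_joint_id[j]]] = joints[j];
  }

  // Robot 1 continues to be simulated as usual
  mj_step(m, d);
}
```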

That said, I still don't think I would do this. Just have two separate mjData's, one of which is stepped and one of which is not. I don't see the point of putting them in the same scene, it just messes up the joint indexing. Can you explain why you want them in the same scene?

v-r-a commented 6 months ago

It is okay if they are not in the same scene. Can I then have two scenes in the same window? I guess I can do that by partitioning the viewport and rendering separately.

v-r-a commented 6 months ago

Closing this issue: I solved it by creating two different simulations (two mjData instances and two scenes) rendered in the same window:

https://github.com/google-deepmind/mujoco/assets/93522619/4ad4e4de-5b7b-4d08-a3d4-10aa6b3bd42b
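A rough sketch of the rendering part of that approach (assuming GLFW; initialization of the mjrContext, cameras, options and scenes is omitted, and all names are placeholders):

```c
#include <mujoco/mujoco.h>
#include <GLFW/glfw3.h>

// Render two independent simulations side by side by splitting the window
// into two viewports. m1/d1 is the stepped simulation, m2/d2 the
// encoder-driven one; each has its own mjvScene.
void render_side_by_side(GLFWwindow* window,
                         const mjModel* m1, mjData* d1, mjvScene* scn1,
                         const mjModel* m2, mjData* d2, mjvScene* scn2,
                         mjvCamera* cam, mjvOption* opt, mjrContext* con) {
  int width, height;
  glfwGetFramebufferSize(window, &width, &height);

  // left half: simulated robot
  mjrRect left = {0, 0, width/2, height};
  mjv_updateScene(m1, d1, opt, NULL, cam, mjCAT_ALL, scn1);
  mjr_render(left, scn1, con);

  // right half: visualisation of the physical robot
  mjrRect right = {width/2, 0, width - width/2, height};
  mjv_updateScene(m2, d2, opt, NULL, cam, mjCAT_ALL, scn2);
  mjr_render(right, scn2, con);

  glfwSwapBuffers(window);
}
```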