The AMP agent (NN) receives the following observations:
root_h_obs, root_rot_obs, local_root_vel, local_root_ang_vel, dof_obs, dof_vel, flat_local_key_pos
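For reference, the AMP observation is just a flat per-environment concatenation of these terms. A minimal sketch of the idea (the function name and tensor shapes here are assumptions, not the repo's actual builder, which also handles the local-frame transforms):

```python
import torch

def build_amp_observations_sketch(root_h, root_rot_obs, local_root_vel,
                                  local_root_ang_vel, dof_obs, dof_vel,
                                  flat_local_key_pos):
    # Each input is a (num_envs, dim) tensor; the AMP observation is
    # their concatenation along the feature dimension.
    return torch.cat([root_h, root_rot_obs, local_root_vel,
                      local_root_ang_vel, dof_obs, dof_vel,
                      flat_local_key_pos], dim=-1)
```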
On a real physical robot, root_h_obs, local_root_vel, local_root_ang_vel, and flat_local_key_pos would probably not be available.
More likely, only root rotation, DOF positions, DOF velocities, and foot pressure would be available. From what I can tell, AMP compares these observations to what it sees in the motion capture data. Is there an easy way to send a reduced set of observations to the agent that better matches what is observable in reality? Will the algorithm still work?
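To make the comparison concrete, here is a hypothetical sketch of what the AMP discriminator does with these observations: it only ever sees AMP observation vectors (never the policy's own input) and is trained to separate simulated transitions from mocap transitions with the least-squares objective AMP uses. The class and function names are mine, not the repo's:

```python
import torch
import torch.nn as nn

class AMPDiscriminator(nn.Module):
    def __init__(self, amp_obs_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(amp_obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # real-vs-fake logit
        )

    def forward(self, amp_obs):
        return self.net(amp_obs)

def disc_loss(disc, mocap_obs, policy_obs):
    # Least-squares GAN objective: mocap targets +1, policy targets -1.
    real = disc(mocap_obs)
    fake = disc(policy_obs)
    return ((real - 1.0) ** 2).mean() + ((fake + 1.0) ** 2).mean()
```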
After some deeper digging, it appears that the AMP observations are only used by the discriminator. So I just changed the observations in humanoid.py, and it works great.
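For anyone making the same change, the edit amounts to concatenating only the measurable terms, something like the hypothetical sketch below. The same reduction has to be applied to the mocap reference observations so the discriminator compares like with like:

```python
import torch

def build_reduced_amp_observations(root_rot_obs, dof_obs, dof_vel):
    # Reduced set limited to what a real robot can measure:
    # root rotation (IMU) and joint positions/velocities (encoders).
    return torch.cat([root_rot_obs, dof_obs, dof_vel], dim=-1)
```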