Closed: SidPad closed this issue 4 months ago.
For different body shapes (i.e. `robot=smpl_humanoid_shape`), I trained controllers conditioned on the body-shape beta parameters. The `phc_shape_mcp_iccv` models are trained with rotation-and-position tracking, while the `env.obs_v=7` models only do position tracking. To use `phc_shape_mcp_iccv`, please use `env.obs_v=6`. Notice that `obs_v=6` requires additional processing in `HumanoidImMCPDemo`, which is more involved.

I didn't train a position-only tracking model with different body shapes, mainly due to time constraints; it's definitely possible.

It is possible to simulate humanoids with different shapes at once; please follow the original command for `exp_name=phc_shape_mcp_iccv` as an example.
Thank you for your reply.

> It is possible to simulate humanoids with different shapes at once; please follow the original command for `exp_name=phc_shape_mcp_iccv` as an example.

Yes, I am aware that it's possible to simulate multiple humanoids of different shapes at once, but I would like to control them using language instructions (MDM). If I am not wrong, the command given in the example is only for state recovery:

```
python phc/run_hydra.py learning=im_mcp exp_name=phc_shape_mcp_iccv test=True env=env_im_getup_mcp robot=smpl_humanoid_shape robot.freeze_hand=True robot.box_body=False env.z_activation=relu env.motion_file=sample_data/amass_isaac_standing_upright_slim.pkl env.models=['output/HumanoidIm/phc_shape_pnn_iccv/Humanoid.pth'] env.num_envs=1 headless=False epoch=-1
```

> The `phc_shape_mcp_iccv` models are trained with rotation-and-position tracking, while the `env.obs_v=7` models only do position tracking. To use `phc_shape_mcp_iccv`, please use `env.obs_v=6`. Notice that `obs_v=6` requires additional processing in `HumanoidImMCPDemo`, which is more involved.
Thanks for this suggestion. As you suggested, I changed `env.obs_v` from 7 to 6 on the command line. However, the code implementation does not seem complete, according to the comments in the code. For reference, I have included the code below from `humanoid_im_mcp_demo.py`:
```python
if self.obs_v == 6:
    # raise NotImplementedError
    # This part is not as good. use obs_v == 7 instead.
    # ref_rb_pos = self.j3d[((self.progress_buf[env_ids] + 1) / 2).long() % self.j3d.shape[0]]
    # ref_body_vel = self.j3d_vel[((self.progress_buf[env_ids] + 1) / 2).long() % self.j3d_vel.shape[0]]
    pose_mat = self.pose_mat.clone().cuda()
    trans = self.trans.clone()
    trans = np.array(trans).squeeze()
    # pose_mat = self.rot_mat_ref[((self.progress_buf[env_ids] + 1) / 2).long() % self.rot_mat_ref.shape[0]]  # debugging
    pose_res = requests.get(f'http://{SERVER}:8080/get_pose')
    json_data = pose_res.json()
    # pose_mat = torch.tensor(json_data["pose_mat"])[None,].float()
    # trans = torch.tensor(json_data["trans"]).to(self.device).float()
    # trans = np.array(json_data["trans"]).squeeze()
    s_dt = json_data['dt']
    self.root_pos_acc.append(trans)
    filtered_trans = filters.gaussian_filter1d(self.root_pos_acc, 3, axis=0, mode="mirror")
    trans = torch.tensor(filtered_trans[-1]).float().cuda()
    self.to_isaac_mat = self.to_isaac_mat.cuda()
    new_root = self.to_isaac_mat.matmul(pose_mat[:, 0])
    pose_mat[:, 0] = new_root
    trans = trans.matmul(self.to_isaac_mat.T)
    _, global_rotation = humanoid_kin.forward_kinematics_batch(pose_mat[:, smpl_2_mujoco], self.zero_trans, self.local_translation_batch, self.parent_indices)
    ref_rb_rot = ptr.matrix_to_quaternion_ijkr(global_rotation.matmul(self.to_global))
```
I commented out `raise NotImplementedError`, but I had to change the initialization of the `pose_mat` and `trans` variables. Even so, definitions for names such as `humanoid_kin` and `forward_kinematics_batch` do not seem to exist. Sorry about this, but am I missing something basic? Thanks in advance for your help.
No worries!

> If I am not wrong, the command given in the example is only for state recovery.

I am not sure what "state recovery" is referring to here?

> As you suggested, I changed `env.obs_v` from 7 to 6 on the command line. However, the code implementation does not seem complete, according to the comments in the code.

Changing from `obs_v=7` to `obs_v=6` requires a number of changes. As a start, MDM needs to be run to produce SMPL parameters (basically adding the SMPLify code). I developed an `obs_v=6` demo, but I don't think it's in a good state, so I opted out of releasing it (thus the `raise NotImplementedError`). If you are interested, I can push the code for `forward_kinematics_batch`, but you will need to make sure that the correct parameters are passed from MDM.
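In case it helps in the meantime: a batched forward-kinematics pass over a joint tree generally looks like the minimal NumPy sketch below. This is not the repo's implementation; the argument names merely mirror the ones visible in the demo snippet (`local_translation`, `parent_indices`), and a real version would operate on torch tensors on the GPU.

```python
import numpy as np

def forward_kinematics_batch(local_rot, local_trans, parent_indices):
    """Compose per-joint local rotations along a kinematic tree.

    local_rot:      (B, J, 3, 3) local rotation matrices; joint 0 is the root.
    local_trans:    (J, 3) bone offsets expressed in the parent frame.
    parent_indices: (J,) parent of each joint; -1 marks the root.
    Returns (global_pos, global_rot) of shapes (B, J, 3) and (B, J, 3, 3).
    """
    B, J = local_rot.shape[:2]
    global_rot = np.zeros_like(local_rot)
    global_pos = np.zeros((B, J, 3))
    for j in range(J):  # joints must be ordered parent-before-child
        p = parent_indices[j]
        if p < 0:  # root joint: world rotation is its local rotation
            global_rot[:, j] = local_rot[:, j]
        else:
            global_rot[:, j] = global_rot[:, p] @ local_rot[:, j]
            # bone offset rotated into the world frame, attached to the parent
            global_pos[:, j] = global_pos[:, p] + np.einsum(
                'bij,j->bi', global_rot[:, p], local_trans[j])
    return global_pos, global_rot
```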
> I am not sure what "state recovery" is referring to here?

Meaning the reference motion is simply standing, and if perturbations are introduced to the humanoid by pressing the letter 'j' on the keyboard, it recovers, but nothing else. If I'm not wrong, I believe I cannot generate reference motion using MDM and make the humanoids track it?

> If you are interested, I can push the code for `forward_kinematics_batch`, but you will need to make sure that the correct parameters are passed from MDM.

Yes, please do; I would like to give it a shot. Thanks a lot!
> If I'm not wrong, I believe I cannot generate reference motion using MDM and make the humanoids track it?

Yes, you can! MDM can create SMPL parameters and motion. You just need some data processing to make it work. The demo code was developed to run in real time, so some shortcuts were taken.
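To illustrate the kind of data processing meant here, a minimal sketch that converts per-joint axis-angle SMPL poses into rotation matrices and optionally permutes the joints (the `joint_order` argument stands in for a permutation like `smpl_2_mujoco`; the function name and the assumption that poses arrive as `(T, 72)` axis-angle vectors are illustrative, not the repo's API):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def smpl_axis_angle_to_rotmats(pose_aa, joint_order=None):
    """Convert (T, J*3) axis-angle SMPL poses to (T, J, 3, 3) rotation
    matrices, optionally reordering joints (e.g. SMPL -> MuJoCo order)."""
    T = pose_aa.shape[0]
    aa = pose_aa.reshape(T, -1, 3)                     # (T, J, 3) rotvecs
    mats = R.from_rotvec(aa.reshape(-1, 3)).as_matrix()
    mats = mats.reshape(T, -1, 3, 3)
    if joint_order is not None:
        mats = mats[:, joint_order]                    # permute joint axis
    return mats
```

A real pipeline would additionally handle the root translation and the coordinate-frame change into the simulator (the `to_isaac_mat` step seen in the demo code).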
Hi,

Thank you for providing your code for this work. I would like to generate motions using MDM for different body shapes. According to the instructions provided, I can run the demo for the default humanoid shape as described below.

In Terminal A:

```
python language_to_pose_server.py --model_path save/humanml_trans_enc512/model000200000.pt
```

In Terminal B:

```
python phc/run_hydra.py learning=im_mcp exp_name=phc_kp_mcp_iccv env=env_im_getup_mcp env.task=HumanoidImMCPDemo robot=smpl_humanoid robot.freeze_hand=True robot.box_body=False env.z_activation=relu env.motion_file=sample_data/amass_isaac_standing_upright_slim.pkl env.models=['output/HumanoidIm/phc_kp_pnn_iccv/Humanoid.pth'] env.num_envs=1 env.obs_v=7 headless=False epoch=-1 test=True no_virtual_display=True
```

To try this for another humanoid shape, I changed the command in Terminal B to:

```
python phc/run_hydra.py learning=im_mcp exp_name=phc_shape_mcp_iccv env=env_im_getup_mcp env.task=HumanoidImMCPDemo robot=smpl_humanoid_shape robot.freeze_hand=True robot.box_body=False env.z_activation=relu env.motion_file=sample_data/amass_isaac_standing_upright_slim.pkl env.models=['output/HumanoidIm/phc_shape_pnn_iccv/Humanoid.pth'] env.num_envs=1 env.obs_v=7 headless=False epoch=-1 test=True no_virtual_display=True
```

However, I am getting a mismatch in NN model sizes for the actor, critic, and composer. Thank you for your help in advance.
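As a side note on debugging such size mismatches: comparing parameter shapes between the checkpoint and the freshly built network usually pinpoints the disagreeing layers. A generic sketch using plain shape dicts (the layer names below are made up for illustration, not taken from the repo):

```python
def diff_state_dict_shapes(ckpt_shapes, model_shapes):
    """Return {param_name: (checkpoint_shape, model_shape)} for every
    parameter whose shape differs between a checkpoint and a model.

    Both inputs map parameter names to shape tuples, e.g. the result of
    {k: tuple(v.shape) for k, v in state_dict.items()} in PyTorch.
    """
    mismatches = {}
    for name, ck_shape in ckpt_shapes.items():
        model_shape = model_shapes.get(name)
        if model_shape is not None and ck_shape != model_shape:
            mismatches[name] = (ck_shape, model_shape)
    return mismatches
```

If the mismatches concentrate in the first layers of the actor and critic, the observation size is the likely culprit, e.g. a shape-conditioned model expecting the `obs_v=6` observation rather than `obs_v=7`, as discussed above.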