ZhengyiLuo / PHC

Official Implementation of the ICCV 2023 paper: Perpetual Humanoid Control for Real-time Simulated Avatars
https://zhengyiluo.github.io/PHC/

About env.models #48

Closed yw0208 closed 4 months ago

yw0208 commented 4 months ago

Hi, great work for PHC+! But I have some doubts about which model I should use. I used the following command for the video/language model. python phc/run_hydra.py learning=im_mcp_big learning.params.network.ending_act=False exp_name=phc_comp_kp_2 env.obs_v=7 env=env_im_getup_mcp robot=smpl_humanoid robot.real_weight_porpotion_boxes=False env.motion_file=sample_data/amass_isaac_standing_upright_slim.pkl env.models=['output/HumanoidIm/phc_kp_2/Humanoid.pth'] env.num_prim=3 env.num_envs=1 headless=False epoch=-1 test=True But it seems that it loads the checkpoints in the ./phc_comp_kp_2/. But env.models=['output/HumanoidIm/phc_kp_2/Humanoid.pth'] is set in the command. So I want to confirm which model should I use? Is the command correct?