Yi-Shi94 / AMDM

Interactive Character Control with Auto-Regressive Motion Diffusion Models
https://yi-shi94.github.io/amdm_page/
BSD 3-Clause "New" or "Revised" License

High level policy? #1

Closed zengweishuai closed 2 months ago

zengweishuai commented 2 months ago

Excited to see your work! May I ask where the arguments for training the high-level policy are? Also, how is the high-level policy trained? Is it trained in the simulator?

Yi-Shi94 commented 2 months ago

Thanks for trying out our work. The argument files are currently in args/. For high-level controllers, each filename starts with two letters indicating the task [PH = path, TG = target, etc.], followed by the mode [train, test] and the base dataset [lafan1, etc.]. For example, given a pretrained lafan1 base model,

You can train our path following policy with:
python run_env.py --arg_file args/PH_train_amdm_lafan1.txt

You can test our path following policy with:
python run_env.py --arg_file args/PH_test_amdm_lafan1.txt
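The filename convention above (task prefix, mode, model, dataset) can be sketched as a small parser. This is a hypothetical helper for illustration only, not part of the AMDM repository, and the task-code table below is an assumption based on the two examples named in this thread:

```python
# Hypothetical helper illustrating the arg-file naming convention
# described above; not part of the AMDM codebase.
TASKS = {"PH": "path following", "TG": "target reaching"}  # assumed mapping

def parse_arg_filename(name):
    """Split e.g. 'PH_train_amdm_lafan1.txt' into its labeled parts."""
    stem = name.rsplit(".", 1)[0]          # drop the .txt extension
    task, mode, model, dataset = stem.split("_")
    return {
        "task": TASKS.get(task, task),     # task code, e.g. PH or TG
        "mode": mode,                      # 'train' or 'test'
        "model": model,                    # base model, e.g. 'amdm'
        "dataset": dataset,                # base dataset, e.g. 'lafan1'
    }

info = parse_arg_filename("PH_train_amdm_lafan1.txt")
print(info["task"], info["mode"], info["dataset"])
# path following train lafan1
```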

As for the second question, I think you are referring to the gymnasium and pybullet dependencies. As in the MotionVAE implementation, they are used for interactive control and visualization. Although these tools are integrated, it's important to note that no physics simulation is involved: both our work and MotionVAE are kinematics-based motion generation models.

zengweishuai commented 2 months ago

Many thanks!!