Open Minyoung1005 opened 2 weeks ago
Previously you had to run play_motion.py with the --extra-args=... flag and manually provide some structs. Pushed a fix to the config structure so play_motion will auto-load the default settings. The user can override them, for example using --extra-args="+scenes=samp".
See the README under data. It can receive .npy files, a .yaml file that points to multiple .npy files, or a packaged (pickled) MotionLib file in .pt format.
Try num_envs=256 algo.config.batch_size=1024. You want to find the optimal working point: the most envs and the largest batch size without crashing due to out-of-memory errors.
SMPL+H. For the SMPL-X body (with fingers), I've been working with SMPL-X G, although I believe the N one should also work.
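The tuning advice above (push num_envs and batch size up until just before OOM) amounts to a doubling/bisection search over one knob at a time. The sketch below illustrates the idea; try_run is a hypothetical stand-in for launching a short training run with a candidate batch size and reporting whether it survived, not code from the repo.

```python
def find_max_batch_size(try_run, lo=128, hi=65536):
    """Binary-search the largest batch size in [lo, hi] for which
    try_run(batch_size) succeeds (returns True).  Assumes monotonicity:
    if a batch size OOMs, every larger one does too."""
    if not try_run(lo):
        return None  # even the smallest candidate crashes
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias upward so the loop terminates
        if try_run(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Example with a fake memory budget: pretend batch sizes above 1500 OOM.
fits = lambda bs: bs <= 1500
print(find_max_batch_size(fits))  # 1500
```

The same search works for num_envs; in practice each try_run would launch train_agent.py with the candidate override and watch for a CUDA out-of-memory error.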
Hi, thanks for releasing the code for MaskedMimic. Your work is awesome!
1
I am trying to reproduce the results by replaying the .npy motion files provided in the repository, and I hit the following error when I execute
python phys_anim/scripts/play_motion.py phys_anim/data/motions/smpl_humanoid_walk.npy isaacgym smpl
omegaconf.errors.InterpolationKeyError: Interpolation key 'scene_lib' not found
After adding '+scene_lib=null' to the command string in play_motion.py it works, but I'm not sure whether setting scene_lib to null is appropriate.
2
Also, when I run
python phys_anim/train_agent.py +exp=full_body_tracker +backbone=isaacgym +robot=smpl motion_file=data/smpl/SMPL_NEUTRAL.pkl
, I get the following error. How should I fix it? I'm assuming that motion_file should be either .npy or .yaml, not .pkl, but I have no idea which file I should use for the first training stage (Train full body tracker: Run PYTHON_PATH phys_anim/train_agent.py +exp=full_body_tracker +backbone=isaacgym +robot=smpl motion_file=)
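If it helps while experimenting: based on the accepted formats mentioned in this thread (.npy clip, .yaml index, packaged MotionLib .pt), a tiny pre-flight check can reject a body-model .pkl like SMPL_NEUTRAL.pkl before launching a long training command. Purely an illustrative sketch, not code from the repo:

```python
from pathlib import Path

# Extensions motion_file is said to accept in this thread.
MOTION_SUFFIXES = {".npy", ".yaml", ".pt"}

def check_motion_file(path: str) -> str:
    """Return the motion file's suffix, or raise with a readable hint."""
    suffix = Path(path).suffix.lower()
    if suffix not in MOTION_SUFFIXES:
        raise ValueError(
            f"motion_file must be one of {sorted(MOTION_SUFFIXES)}, "
            f"got '{suffix}' ({path}); a .pkl body model is not motion data"
        )
    return suffix

print(check_motion_file("phys_anim/data/motions/smpl_humanoid_walk.npy"))  # .npy
```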
3
train_agent.py works when I pass an .npy file that already exists in phys_anim/data. However, I get CUDA out-of-memory errors even though I've reduced the batch size a lot (16384 -> 128) to test the code on my local machine with a 3080Ti GPU with 16GB VRAM. Is there a way I can test the code on a local machine? I've tried adding the
algo.config.batch_size=128
argument to reduce the batch size. Should I also change the number of environments? If so, how can I do that?
4
When I go to the AMASS website, there are many sub-datasets available in various versions. Which should I download?
Would appreciate your feedback, thanks!