mmdrahmani closed this issue 1 year ago.
Hi there, that is a good point; a sample run demo is indeed missing. You can use `mdm_motion2smpl.py` as a script: in the cloned human_body_prior folder, under `tutorials`, run `python mdm_motion2smpl.py -h`, which will show the script's arguments. For the current version:
```
usage: mdm_motion2smpl.py [-h] [--input INPUT] [--pattern PATTERN]
                          [--batch_size BATCH_SIZE] [--model_type MODEL_TYPE]
                          [--device DEVICE] [--gender GENDER]
                          [--save_render SAVE_RENDER] [--verbosity VERBOSITY]

optional arguments:
  -h, --help            show this help message and exit
  --input INPUT         skeleton movie filename that is to be converted into SMPL
  --pattern PATTERN     filename pattern for skeleton movies to be converted into SMPL
  --batch_size BATCH_SIZE
                        batch size for inverse kinematics
  --model_type MODEL_TYPE
                        model_type; e.g. smplx
  --device DEVICE       computation device
  --gender GENDER       gender; e.g. neutral
  --save_render SAVE_RENDER
                        render IK results
  --verbosity VERBOSITY
                        0: silent, 1: text, 2: display
```
An example run would be:

```
python -m mdm_motion2smpl --pattern "~/opt/data/datasets/mdm_synthesis/prompts/*/*/*.mp4" --save_render true
```
which would dump the {original name}_smplx.mp4 files at the source directory.
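For reference, the converted npz follows the AMASS convention; a minimal sketch of what such a file typically contains (the key names and array shapes below follow the AMASS/SMPL-X convention and are assumptions here, so verify them against your actual output file):

```python
import numpy as np

# Build a dummy AMASS-style npz (120 frames) to illustrate the expected keys.
dummy = dict(
    trans=np.zeros((120, 3)),     # root translation per frame
    poses=np.zeros((120, 165)),   # SMPL-X pose params per frame (55 joints x 3)
    betas=np.zeros(16),           # body shape coefficients
    gender="neutral",
    mocap_framerate=30.0,
)
np.savez("example_smplx.npz", **dummy)

# Inspect it the same way you would inspect the converter's real output.
data = np.load("example_smplx.npz", allow_pickle=True)
print(sorted(data.files))
print(data["poses"].shape)
```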
I hope this was helpful for you.
Thank you, Nima. This example code was very helpful. I finally made it! See the video. I tried using VPoser on all platforms (Mac, Windows, Linux), but each had its own issues. I will briefly summarize here:
Overcoming these challenges, I was quite surprised and happy with the results from the smplx.npz file, so I just wanted to share the moment. Thank you again. Mohammad
Another question: should the process of converting MDM xyz pose offsets to an SMPL-X npz animation be this complicated? Does it have to go through third-party libraries (all the other dependencies)? Mathematically, there should be a way to transform one into the other; maybe I am missing something critical. It also seems that mdm_motion2smpl.py only uses the mp4 filename to extract sample and repeat ids and does not actually do anything with the videos. Is that correct? Thanks
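For context on why this is not a closed-form transform: recovering SMPL joint rotations from xyz joint positions is an inverse-kinematics problem, which is generally solved by iterative optimization (with VPoser acting as a prior over plausible poses). A toy sketch of the idea, using a planar 2-link arm and plain numpy instead of the repo's IK engine:

```python
import numpy as np

# Toy inverse kinematics: recover two joint angles of a planar 2-link arm
# from a target end-effector position by gradient descent on squared error.
# This is NOT the repo's code, just the core idea applied in 2D.

L1, L2 = 1.0, 1.0  # link lengths (arbitrary for this sketch)

def fk(theta):
    """Forward kinematics: joint angles -> end-effector xy position."""
    a, b = theta
    return np.array([L1 * np.cos(a) + L2 * np.cos(a + b),
                     L1 * np.sin(a) + L2 * np.sin(a + b)])

def ik(target, iters=3000, lr=0.1, eps=1e-6):
    """Minimize ||fk(theta) - target||^2 with numerical gradients."""
    theta = np.array([0.3, 0.3])
    for _ in range(iters):
        base = np.sum((fk(theta) - target) ** 2)
        grad = np.zeros(2)
        for i in range(2):
            t = theta.copy()
            t[i] += eps
            grad[i] = (np.sum((fk(t) - target) ** 2) - base) / eps
        theta -= lr * grad
    return theta

target = np.array([1.2, 0.8])
theta = ik(target)
print(np.allclose(fk(theta), target, atol=1e-3))
```

The real problem does the same thing per frame for a full 3D skeleton, with many more degrees of freedom and a learned pose prior to keep solutions plausible, which is why the dependency stack is heavier.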
I am happy you eventually got it to work. MDM dumps the results (repetitions and samples of a prompt) into one npy file, and, following the original MDM code, I use the file name of the mp4 file to figure out the index of the motion to be bodified. Thank you for listing the issues you faced during installation; they will help build a more robust installation.
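As an illustration of that filename-to-index step: MDM's rendered clips are typically named like `sample00_rep01.mp4`. A hypothetical sketch of the parsing (the exact naming pattern and the index layout within results.npy are assumptions here; check your own MDM output before relying on them):

```python
import re
from pathlib import Path

def motion_index(mp4_path, num_repetitions):
    """Map a rendered clip's filename back to a row index in results.npy,
    assuming motions are laid out sample-major (sample_i * reps + rep_i)."""
    m = re.search(r"sample(\d+)_rep(\d+)", Path(mp4_path).stem)
    if m is None:
        raise ValueError(f"unexpected filename: {mp4_path}")
    sample_i, rep_i = int(m.group(1)), int(m.group(2))
    return sample_i * num_repetitions + rep_i

print(motion_index("prompts/a_person_walks/sample01_rep02.mp4", 3))  # 5
```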
@mmdrahmani @nghorbani I have the same issue that @mmdrahmani had, but I am now getting an error for a missing model.npz at support_data/dowloads/models/smplx/neutral/model.npz and missing checkpoints at support_data/dowloads/vposer_v2_05/snapshots:
```
assets            glove                                 sample
body_models       human_body_prior                      save
body_visualizer   human_body_prior1                     smplx
body_visualizer1  HumanML3D                             smplx1
data_loaders      LICENSE                               train
dataset           Miniconda3-4.5.4-Linux-x86_64.sh      utils
diffusion         model                                 visualize
environment.yml   prepare
eval              README.md
```
```
found 1 mp4 files
2022-11-28 14:32:38.435 | INFO | __main__:convert_mdm_mp4_to_amass_npz:119 - found support_dir: ./human_body_prior/src/support_data/dowloads
bm_fname = ./human_body_prior/src/support_data/dowloads/models/smplx/neutral/model.npz
print(expr_dir)=./human_body_prior/src/support_data/dowloads/vposer_v2_05
****model_snapshots_dir =./human_body_prior/src/support_data/dowloads/vposer_v2_05/snapshots
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/content/motion-diffusion-model/human_body_prior/tutorials/mdm_motion2smpl.py", line 268, in <module>
    verbosity=params.verbosity)
  File "/content/motion-diffusion-model/human_body_prior/tutorials/mdm_motion2smpl.py", line 177, in convert_mdm_mp4_to_amass_npz
    optimizer_args=optimizer_args).to(comp_device)
  File "/content/motion-diffusion-model/human_body_prior/models/ik_engine.py", line 221, in __init__
    disable_grad=True)
  File "/content/motion-diffusion-model/human_body_prior/tools/model_loader.py", line 69, in load_model
    model_cfg, trained_weights_fname = exprdir2model(expr_dir, model_cfg_override=model_cfg_override)
  File "/content/motion-diffusion-model/human_body_prior/tools/model_loader.py", line 37, in exprdir2model
    assert len(available_ckpts) > 0, ValueError('No checkpoint found at {}'.format(model_snapshots_dir))
AssertionError: No checkpoint found at ./human_body_prior/src/support_data/dowloads/vposer_v2_05/snapshots
```
@tutnyal You should follow the installation instructions to download those files (both the model and the checkpoint files).
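A quick pre-flight check can save a failed run; the paths below are copied from this thread's logs (including the "dowloads" spelling the support directory uses), so adjust them to your own layout:

```python
import os.path as osp

def missing_files(paths):
    """Return the subset of paths that do not exist on disk."""
    return [p for p in paths if not osp.exists(p)]

support_dir = "./human_body_prior/src/support_data/dowloads"
required = [
    osp.join(support_dir, "models/smplx/neutral/model.npz"),
    osp.join(support_dir, "vposer_v2_05/snapshots"),
]
print(missing_files(required))  # empty list means you are good to go
```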
Dear Nima, thanks for the excellent work. I found your comment on MDM saying that you have implemented the conversion from MDM xyz output (results.npy) into the SMPL (AMASS) format.
I have an MDM results.npy that contains xyz pose offsets. I have installed and configured VPoser, but now I am not sure how to use mdm_motion2smpl.py to convert this results.npy into an AMASS npz file. Is there any example code that I have missed?
BTW, do I need a GPU for this conversion?
Thanks for the support. Mohammad