ChenFengYe / motion-latent-diffusion

[CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model
https://chenxin.tech/mld/
MIT License
586 stars 55 forks

How to use the a2m models/config? #55

Open ou524u opened 1 year ago

ou524u commented 1 year ago

Hello, I've been trying your project, and it ran successfully on the t2m task. But I was wondering how to use the a2m models and configs that you've released, so that motions can be generated for the a2m task just as demo.py generates them for the t2m task. If I simply run demo.py with the config changed, I get an error like the one below:

```
motion-latent-diffusion-main/mld/models/architectures/mld_denoiser.py", line 253, in forward
    uncond, output = output.chunk(2)
ValueError: not enough values to unpack (expected 2, got 1)
```

and after that there were many errors like:

```
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [220,0,0], thread: [96,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
```
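For context on the first error: `output.chunk(2)` at that line assumes the denoiser received a batch doubled for classifier-free guidance (unconditional plus conditional stacked along dim 0). If the a2m config path does not double the batch, `chunk(2)` on a size-1 dim returns only one chunk and the two-value unpack fails. A minimal sketch reproducing this, with a hypothetical `forward_output` stand-in for the denoiser (not the repo's actual code):

```python
import torch

def forward_output(batch: torch.Tensor, guidance: bool):
    # With classifier-free guidance, the caller stacks
    # [unconditional, conditional] along dim 0, doubling the batch.
    if guidance:
        batch = torch.cat([batch, batch], dim=0)
    output = batch  # stand-in for the denoiser output
    # This mirrors the failing line in mld_denoiser.py:
    uncond, cond = output.chunk(2)
    return uncond, cond

x = torch.zeros(1, 4)

# Works when the batch was doubled (as in the t2m demo path):
u, c = forward_output(x, guidance=True)
assert u.shape[0] == 1 and c.shape[0] == 1

# Fails when it was not (e.g. a config without guidance):
try:
    forward_output(x, guidance=False)
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)
```

`torch.chunk(2)` returns fewer than two chunks when dim 0 has size 1, which is exactly what the traceback reports; the later CUDA index-out-of-bounds asserts are likely downstream of the same config mismatch.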