Hello, I've been trying your project, and it ran successfully on the t2m task. However, I was wondering how to use the a2m models and configs you've released, so that motions can be generated for the a2m task the same way demo.py generates them for t2m.
If I simply run demo.py with the config changed, I get the error below:
```
motion-latent-diffusion-main/mld/models/architectures/mld_denoiser.py", line 253, in forward
    uncond, output = output.chunk(2)
ValueError: not enough values to unpack (expected 2, got 1)
```
followed by a stream of CUDA-side assertion failures like:
```
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [220,0,0], thread: [96,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
```
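For what it's worth, here is a minimal sketch of what I think triggers the first error. This is an assumption on my part: the `.chunk(2)` at line 253 looks like the classifier-free-guidance split, which expects the batch to contain an unconditional half and a conditional half; if the a2m path feeds only a single (conditional) batch, `chunk(2)` returns one tensor and the unpack fails exactly as in my traceback:

```python
import torch

# When the batch is doubled for classifier-free guidance,
# chunk(2) splits it into the two expected halves:
doubled = torch.randn(2, 4)
uncond, cond = doubled.chunk(2)  # OK: two tensors of shape (1, 4)

# But with a single (non-doubled) batch, chunk(2) can only
# produce one chunk, so the two-way unpack raises:
single = torch.randn(1, 4)
try:
    uncond, cond = single.chunk(2)
except ValueError as e:
    print(e)  # -> not enough values to unpack (expected 2, got 1)
```

So my guess is that the a2m config doesn't enable (or doubles differently for) the guidance batch that the t2m denoiser forward assumes, but I may be reading the code wrong.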