Closed — borishanzju closed this issue 1 year ago
Hello, both MDM and our method are trained and tested under the same setting (following Guo's method: https://ericguo5513.github.io/text-to-motion/). MDM presents motions generated from shorter textual prompts, while we provide motions generated from longer textual prompts (not continuous texts; longer text descriptions are more difficult to generate from). All textual prompts are from the test set.
So how do you obtain the longer textual prompts from the shorter ones? Do you release your texts?
So how do you obtain the longer textual prompts from the shorter ones?
Hello, we didn't convert shorter textual prompts into longer ones. For both training and testing, we directly use the original text descriptions of both datasets. You can find the datasets in this repo: https://github.com/EricGuo5513/HumanML3D
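For readers unfamiliar with the linked repo: each motion in HumanML3D comes with a plain-text annotation file containing one or more captions. The sketch below parses one such line; the field layout (`caption#tokenized caption#start#end`) is my assumption based on the HumanML3D repo, and the sample line is invented for illustration — this is not the authors' loading code.

```python
# Minimal sketch of parsing a HumanML3D-style text annotation line.
# Assumed format (based on the linked HumanML3D repo): each line is
#   caption#token/POS token/POS ...#start_time#end_time
# where start/end times of 0.0 indicate the caption covers the whole motion.

def parse_annotation_line(line):
    """Split one annotation line into its four assumed fields."""
    caption, tokens, start, end = line.strip().split("#")
    return {
        "caption": caption,
        "tokens": tokens.split(" "),  # each token carries a POS tag
        "start": float(start),
        "end": float(end),
    }

# Invented example line in the assumed format (not from the real dataset):
sample = ("a person walks forward slowly#"
          "a/DET person/NOUN walks/VERB forward/ADV slowly/ADV#0.0#0.0")
ann = parse_annotation_line(sample)
print(ann["caption"])      # the raw caption text
print(len(ann["tokens"]))  # number of POS-tagged tokens
```

Since both methods read these original captions unchanged, prompt length is simply whatever the annotators wrote, not something constructed after the fact.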
I see that the MDM method generates motions from a single textual prompt.
Regarding this question: MDM uses the same datasets for its text-to-motion experiments, so it can also generate motions from longer texts. Their demos just happen to use shorter texts, which may be a little confusing.
Hi, I see that the MDM method generates motions from a single textual prompt, but your method can generate motions from continuous texts. How do you split the HumanML3D and KIT datasets?