Mael-zys / T2M-GPT

(CVPR 2023) Pytorch implementation of “T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations”
https://mael-zys.github.io/T2M-GPT/
Apache License 2.0

continuous text-to-motion generation #18

Closed borishanzju closed 1 year ago

borishanzju commented 1 year ago

Hi, I see that the MDM method generates motion from a single textual prompt, but your method can generate motions from continuous texts. How do you split the HumanML3D and KIT datasets?

Jiro-zhang commented 1 year ago

Hello, both MDM and our method are trained and tested under the same setting (following Guo's method: https://ericguo5513.github.io/text-to-motion/). MDM presents motions generated from shorter textual prompts, while we provide motions generated from longer textual prompts (not continuous texts; longer text descriptions are harder to generate). All textual prompts are from the test set.

borishanzju commented 1 year ago

So how do you combine the shorter textual prompts into longer ones? Do you release your text?

Mael-zys commented 1 year ago

> So how do you combine the shorter textual prompts into longer ones?

Hello, we didn't combine shorter textual prompts into longer ones. For both training and testing, we directly use the original text descriptions of both datasets. You can find the datasets in this repo: https://github.com/EricGuo5513/HumanML3D

> I see that the MDM method generates motion from a single textual prompt.

As for this question, MDM uses the same datasets for its text-to-motion experiments, so it can also generate motions from longer texts. Their demos just happen to be generated from shorter texts, which may be a little confusing.
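
For anyone trying to reproduce this, below is a minimal sketch of how the original per-motion captions shipped with the HumanML3D repo could be read. It assumes the standard layout of that dataset (a split file such as test.txt listing motion IDs, and one annotation file per motion under texts/ with lines of the form caption#tokens#start#end); the paths and helper names are illustrative, not taken from this repo's code.

```python
# Minimal sketch (assumed HumanML3D layout, not code from this repo):
# - a split file (e.g. test.txt) lists one motion ID per line
# - texts/<motion_id>.txt holds one annotation per line:
#   caption#POS-tagged tokens#start_time#end_time
from pathlib import Path

def load_split_ids(split_file):
    """Read the motion IDs belonging to one split (train/val/test)."""
    return [line.strip() for line in open(split_file, encoding="utf-8") if line.strip()]

def load_captions(text_dir, motion_id):
    """Return the raw captions (short or long) annotated for one motion clip."""
    captions = []
    for line in open(Path(text_dir) / f"{motion_id}.txt", encoding="utf-8"):
        line = line.strip()
        if line:
            captions.append(line.split("#")[0])  # keep only the natural-language caption
    return captions

if __name__ == "__main__":
    root = Path("HumanML3D")  # illustrative dataset root
    for mid in load_split_ids(root / "test.txt")[:3]:
        print(mid, load_captions(root / "texts", mid))
```

Both short and long prompts come directly from these annotation files; no merging of captions is involved.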