Mathux / TEMOS

Official PyTorch implementation of the paper "TEMOS: Generating diverse human motions from textual descriptions", ECCV 2022 (Oral)
https://mathis.petrovich.fr/temos/
MIT License

Evaluation with past research #3

Closed wooheum-xin closed 2 years ago

wooheum-xin commented 2 years ago

Hi dear authors, I would like to start by saying thank you for your amazing work. Did you re-implement past research (Lin et al. / JL2P / Ghosh et al.)? How can I evaluate them with your code?

Mathux commented 2 years ago

Hello @L190201301,

I will put more information in the README in the next few weeks. (I may also distribute their motions.)

To tell you what I use for comparison with previous works (while waiting for me to update the README):

To get the motions as npy files, I follow each method's README.md for installation, and then I do:

I will also update the eval.py script and upload a script to create a table with all the results.

dwro0121 commented 2 years ago

Hello, I also have a question about the evaluation.

As far as I know, previous studies have not reported results with variable sequence lengths.

Were all the results presented in the paper computed using a fixed length?

Mathux commented 2 years ago

Hello,

That's a very good question. Actually, what I am doing is not ideal; we can discuss it a bit if you think of something better.

After generating motions (from any method), for each sequence in the test set I load the GT motion and the generated one. Then I take the number of frames they have in common (the minimum of the two lengths) and compute the metrics (APE on the root joint, etc.) on those frames. (The final score is an average, so some sequences contribute metrics computed on fewer frames.)
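A minimal sketch of this protocol, assuming the motions are stored as npy arrays of shape (num_frames, num_joints, 3); the helper names, file layout and the pairs list are illustrative assumptions, not the actual TEMOS evaluation code:

import numpy as np

def ape_root(gt, gen):
    # Average Positional Error of the root joint (assumed to be joint 0)
    return np.linalg.norm(gt[:, 0] - gen[:, 0], axis=-1).mean()

def evaluate_pair(gt_path, gen_path):
    gt = np.load(gt_path)    # (num_frames_gt, num_joints, 3)
    gen = np.load(gen_path)  # (num_frames_gen, num_joints, 3)
    # Keep only the frames both sequences have in common
    n_common = min(len(gt), len(gen))
    return ape_root(gt[:n_common], gen[:n_common])

# pairs: hypothetical list of (gt_path, gen_path) tuples for the test set.
# Shorter sequences simply contribute metrics computed on fewer frames.
scores = [evaluate_pair(gt, gen) for gt, gen in pairs]
print("APE root:", np.mean(scores))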

TEMOS always generates motions of the appropriate length, as the length is one of the inputs to the model (all poses are generated in one pass). Previous works are generally auto-regressive and trained to generate a fixed number of poses at a time (which requires several passes through the model). When I evaluate, I take what they generate.
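As a toy numeric illustration of the difference (the chunk size and lengths are made up, not taken from any of these methods):

import math

target_len = 100    # frames in the GT sequence
chunk = 32          # hypothetical chunk size of an auto-regressive baseline
baseline_len = math.ceil(target_len / chunk) * chunk  # 128 generated frames
temos_len = target_len                                # 100 frames, one pass
n_common = min(target_len, baseline_len)              # metrics use 100 frames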

If you are interested in the code, you can check it out:

Mathux commented 2 years ago

Hi,

I updated the README.md. You can use the command line bash prepare/download_previous_works.sh to download the motions generated by previous works. Then run python evaluate.py folder=previous_work/ghosh to evaluate Ghosh et al.
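The other baselines should presumably work the same way by pointing folder= at the corresponding subfolder of previous_work/ (for example something like python evaluate.py folder=previous_work/lin); the exact folder names depend on what download_previous_works.sh creates, so check the downloaded directory.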