Mael-zys / T2M-GPT

(CVPR 2023) PyTorch implementation of “T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations”
https://mael-zys.github.io/T2M-GPT/
Apache License 2.0

Cannot reproduce the reported results. #61

Closed: blue-blue272 closed this issue 8 months ago

blue-blue272 commented 1 year ago

I load "net_last.pth" for VQ-VAE and "net_best_fid.pth" for the Transformer in 'VQTransformer_corruption05', and run the GPT_eval_multi.py code, and I can only achieve about 0.22 FID, which is higher than the reported 0.116. Can you reproduce the results with the provided weights?

czc567 commented 11 months ago

I encountered a similar problem. I used VQ_eval.py to evaluate the 'net_best_fid.pth' checkpoint provided by the author and found that the FID was 0.278, not 0.070 as reported in the paper.
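In case it helps anyone sanity-check their numbers: FID here is just the Fréchet distance between two Gaussians fitted to the motion features of the real and generated sets. A minimal NumPy/SciPy sketch (the function names are mine for illustration, not the repo's actual evaluation code):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, cov1, mu2, cov2):
    """FID = ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 * sqrt(cov1 @ cov2))."""
    diff = mu1 - mu2
    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(cov1 @ cov2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean)

def fid_from_features(feats_real, feats_gen):
    # feats_*: (N, D) arrays of motion features from the pretrained evaluator.
    mu_r, cov_r = feats_real.mean(axis=0), np.cov(feats_real, rowvar=False)
    mu_g, cov_g = feats_gen.mean(axis=0), np.cov(feats_gen, rowvar=False)
    return frechet_distance(mu_r, cov_r, mu_g, cov_g)
```

Since the generated set is re-sampled on every run, some variance across evaluations is expected, though probably not enough to explain 0.278 vs. 0.070.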

Chatonz commented 9 months ago

I encountered an issue during HumanML3D data conversion. May I ask how you resolved it?

OrigamiStationery commented 6 months ago

I load "net_last.pth" for VQ-VAE and "net_best_fid.pth" for the Transformer in 'VQTransformer_corruption05', and run the GPT_eval_multi.py code, and I can only achieve about 0.22 FID, which is higher than the reported 0.116. Can you reproduce the results with the provided weights?

I can't reproduce the reported metric either and also get an FID of about 0.22. I'm wondering what's wrong with my testing command. Please let me know if you have solved this problem; any help would be appreciated.