Mael-zys / T2M-GPT

(CVPR 2023) Pytorch implementation of “T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations”
https://mael-zys.github.io/T2M-GPT/
Apache License 2.0

training from scratch #29

Closed. deeptimhe closed this issue 1 year ago.

deeptimhe commented 1 year ago

For the VQ-VAE, I have reproduced the test results from the released checkpoints, but I cannot train a model with similar performance myself.

I use the following command:

```bash
python3 train_vq.py \
  --batch-size 256 \
  --lr 2e-4 \
  --total-iter 300000 \
  --lr-scheduler 200000 \
  --nb-code 512 \
  --down-t 2 \
  --depth 3 \
  --dilation-growth-rate 3 \
  --out-dir output \
  --dataname t2m \
  --vq-act relu \
  --quantizer ema_reset \
  --loss-vel 0.5 \
  --recons-loss l1_smooth \
  --exp-name VQVAE
```

After training, I only get FID ≈ 0.11 for both net_last and net_best_fid.

Ssstirm commented 1 year ago

Did you test on the test set? I got the same results as the paper.

deeptimhe commented 1 year ago

> Did you test on the test set? I got the same results as the paper.

Thank you. It seems my number came from the validation set.
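For reference, the paper's test-set numbers come from running the repo's evaluation script on the test split, not from the validation metrics logged during training. Below is a minimal sketch; it assumes the evaluation entry point is `VQ_eval.py` (as in the repo's README), that it accepts the same flags as `train_vq.py`, and that the checkpoint path follows the `--out-dir`/`--exp-name` used in the training command above. Adjust the path to your actual checkpoint.

```bash
# Sketch: evaluate the trained VQ-VAE on the test split.
# Checkpoint path is assumed from --out-dir output --exp-name VQVAE above.
python3 VQ_eval.py \
  --batch-size 256 \
  --nb-code 512 \
  --down-t 2 \
  --depth 3 \
  --dilation-growth-rate 3 \
  --out-dir output \
  --dataname t2m \
  --vq-act relu \
  --quantizer ema_reset \
  --exp-name TEST_VQVAE \
  --resume-pth output/VQVAE/net_last.pth
```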