EricGuo5513 / momask-codes

Official implementation of "MoMask: Generative Masked Modeling of 3D Human Motions (CVPR2024)"
https://ericguo5513.github.io/momask/
MIT License
690 stars · 56 forks

Replicating the results of the pre-trained models. #27

Open weihaosky opened 4 months ago

weihaosky commented 4 months ago

Thanks for releasing this amazing work! However, I cannot replicate the results of the pre-trained models using the provided code. The results after training with the provided code are: [image: evaluation results]

Murrol commented 4 months ago

Hi, thanks for your interest!

We are happy to figure out the problem together with you. Could you please provide the configurations for training (the opt file) and testing (scripts) for each stage?

weihaosky commented 4 months ago

I just used the provided code and commands without any change.

- vq training: opt.txt
- mtrans training: opt.txt
- rtrans training: opt.txt
- vq test: rvq_nq6.log
- mtrans test: evaluation.log

Murrol commented 4 months ago

> I just use the provided code and commands without any change. vq training: opt.txt mtrans training: opt.txt rtrans training: opt.txt vq test: rvq_nq6.log mtrans test: evaluation.log

Hi wenhao, thanks for your info.

I've checked the configurations and corrected the scripts in our README. We used --gamma 0.05 to train the RVQ. I just re-trained the RVQ and got the following results:

[image: evaluation results of the re-trained RVQ]

Hope you will find it useful.
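For readers wondering what the --gamma flag controls: in standard (R)VQ-VAE training, gamma weights the commitment loss against the reconstruction loss. Below is a minimal illustrative sketch of that weighting, not the momask-codes implementation; rvq_loss and its arguments are hypothetical names.

```python
def rvq_loss(recon_loss, commit_losses, gamma=0.05):
    """Total RVQ training objective: reconstruction loss plus a
    gamma-weighted sum of per-quantizer commitment losses.
    Illustrative only -- not the repository's actual code."""
    return recon_loss + gamma * sum(commit_losses)

# With 6 residual quantizers, a small gamma keeps the commitment
# terms from dominating the reconstruction objective:
total = rvq_loss(1.0, [0.5] * 6, gamma=0.05)
print(total)  # → 1.15
```

A larger gamma pushes encoder outputs harder toward the codebook entries, which can change reconstruction quality and hence the downstream FID, consistent with the sensitivity reported in this thread.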

weihaosky commented 4 months ago

> I've checked the configures and corrected the scripts in our README. We used --gamma 0.05 to train the rvq. I just re-trained the rvq and got the following results: […]

Hi, after setting --gamma 0.05 I got a worse result: [image: evaluation results]

net_best_fid.tar final result, epoch 36
        FID: 0.040, conf. 0.000
        Diversity: 9.598, conf. 0.097
        TOP1: 0.506, conf. 0.002, TOP2. 0.696, conf. 0.002, TOP3. 0.793, conf. 0.002
        Matching: 3.027, conf. 0.008
        MAE:0.039, conf.0.000
aszxnm commented 3 months ago

I met the same problem and got the same results as @weihaosky. Have you solved it? Thanks.

weihaosky commented 3 months ago

> I meet the same problem and get the same results with @weihaosky. Have you solved the problem? Thanks.

No. I still cannot replicate the results.

Seoneun commented 2 months ago

Have you solved the problem now? @weihaosky @aszxnm Thanks.

JHang2020 commented 2 months ago

The same for me...

> > I meet the same problem and get the same results with @weihaosky. Have you solved the problem? Thanks.
>
> No. I still cannot replicate the results.

wang-zm18 commented 1 week ago

The same for me. Another confusing problem is the MPJPE under the reconstruction setting on the KIT dataset: [image: KIT MPJPE evaluation results]

The command I run is python eval_t2m_vq.py --gpu_id 0 --name rvq_nq6_dc512_nc512_noshare_qdp0.2_k --dataset_name kit --ext rvq_nq6, and I used the default pretrained checkpoint.

Similarly, when I test it on the HumanML3D dataset, the results are: [image: HumanML3D MPJPE evaluation results]

It is obvious that the MPJPE results on these two datasets differ by several orders of magnitude. Does anyone else have the same problem?

HitBadTrap commented 1 week ago

> Hi, after setting --gamma 0.05 I got a worse result […]

Have you solved the problem now? @weihaosky @aszxnm

Thanks.

aszxnm commented 1 week ago

> Have you solved the problem now? @weihaosky @aszxnm Thanks.

No. I still cannot replicate the results.

Murrol commented 1 week ago

Thank you all for your attempts to replicate the results. I just got some time to re-train the masked-transformer and res-transformer using our released code. Here are some results for your reference:

[image: replicated m-trans and r-trans evaluation results]

I used the rvq checkpoint I obtained here https://github.com/EricGuo5513/momask-codes/issues/27#issuecomment-1956264373. I used the following scripts to train the m-trans and r-trans:

python train_t2m_transformer.py --name mtrans_replicate --gpu_id 1 --dataset_name t2m --batch_size 64 --vq_name rvq_replicate
python train_res_transformer.py --name rtrans_replicate --gpu_id 2 --dataset_name t2m --batch_size 64 --vq_name rvq_replicate --cond_drop_prob 0.2 --share_weight

evaluation script:

python eval_t2m_trans_res.py --res_name rtrans_replicate --dataset_name t2m --name mtrans_replicate --gpu_id 1 --cond_scale 4 --time_steps 10 --ext evaluation_replicate

The above results were obtained from these scripts without any modification to this code base. The replication experiments were done on a single RTX 2080 Ti GPU with torch==1.7.1. For the processed dataset cloned from the original HumanML3D project, please send an inquiry to cguo2@ualberta.ca or ymu3@ualberta.ca.
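For anyone replicating from this thread, the commands quoted above can be collected into a single run script. This is a convenience sketch: only the transformer training and evaluation commands are taken verbatim from this thread, and the RVQ training step is left as a comment since its exact command lives in the README.

```shell
#!/usr/bin/env bash
set -e  # abort on the first failing stage

# Stage 0 (see the corrected README): train the RVQ with --gamma 0.05,
# naming the checkpoint rvq_replicate.

# Stage 1: masked transformer (command quoted from this thread)
python train_t2m_transformer.py --name mtrans_replicate --gpu_id 1 \
    --dataset_name t2m --batch_size 64 --vq_name rvq_replicate

# Stage 2: residual transformer (command quoted from this thread)
python train_res_transformer.py --name rtrans_replicate --gpu_id 2 \
    --dataset_name t2m --batch_size 64 --vq_name rvq_replicate \
    --cond_drop_prob 0.2 --share_weight

# Evaluation (command quoted from this thread)
python eval_t2m_trans_res.py --res_name rtrans_replicate --dataset_name t2m \
    --name mtrans_replicate --gpu_id 1 --cond_scale 4 --time_steps 10 \
    --ext evaluation_replicate
```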

wang-zm18 commented 3 days ago

https://github.com/EricGuo5513/momask-codes/issues/27#issuecomment-2185009113 Does anybody know the cause of this problem?

Murrol commented 3 days ago

> The same for me, and another confusing problem is the MPJPE of the reconstruction setting on the kit dataset […] It is obvious that the MPJPE results on these two datasets differ by several orders of magnitude.

The scale is different. Check the Mean.npy files.
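Regarding the orders-of-magnitude MPJPE gap: one quick way to confirm a unit/scale mismatch is to compare the magnitudes of the normalization statistics shipped with each dataset. A hedged sketch follows, assuming each dataset stores its feature means as Mean.npy; the file paths in the comment are illustrative.

```python
import numpy as np

def scale_ratio(mean_a, mean_b):
    """Rough ratio between two datasets' coordinate scales,
    estimated from the norms of their feature-mean vectors."""
    return np.linalg.norm(mean_a) / np.linalg.norm(mean_b)

# Hypothetical usage with each dataset's normalization file:
# ratio = scale_ratio(np.load("dataset/KIT-ML/Mean.npy"),
#                     np.load("dataset/HumanML3D/Mean.npy"))

# Synthetic check: vectors differing by 100x give a ratio of 100.
print(scale_ratio(np.array([100.0, 0.0]), np.array([1.0, 0.0])))  # → 100.0
```

If the ratio is far from 1, raw MPJPE values from the two datasets are not directly comparable until converted to a common unit.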

wang-zm18 commented 3 days ago

I re-downloaded the KIT-ML dataset from the link directly, but the problem remains. Maybe it is not a mean/std problem? Thank you in advance! @Murrol