JeremyCJM / DiffSHEG

[CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation
https://jeremycjm.github.io/proj/DiffSHEG/
BSD 3-Clause "New" or "Revised" License

Could you share the pretrained EmbeddingNet in "utils/cal_metrics.py"? Thanks. #14

Closed: bob35buaa closed this issue 2 months ago

bob35buaa commented 2 months ago

In your paper, you use FMD, FGD, and FED to evaluate the Fréchet distance between generated and real data in the gesture feature space. I want to follow your work and compute these metrics to evaluate my own method. Could you share the pretrained EmbeddingNet used in "utils/cal_metrics.py"? Thanks.

JeremyCJM commented 2 months ago

Hi bob, the pretrained autoencoder weights can be downloaded here: https://drive.google.com/file/d/1Wm2WMlacwStFaciCh7UlhQeyA3E2yEnj/view?usp=sharing. The evaluation code can be found directly in *trainer.py; cal_metrics.py is not used in our repo.
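For reference, FMD, FGD, and FED are all Fréchet distances between Gaussian fits of real and generated features in the autoencoder's latent space. Below is a minimal sketch of that computation; the helper name `frechet_distance` is illustrative, and the repo's actual implementation should be taken from *trainer.py:

```python
import numpy as np
from scipy import linalg

def frechet_distance(real_feats, gen_feats):
    """Fréchet distance between two sets of embedding vectors.

    `real_feats` and `gen_feats` are (N, D) arrays of features
    extracted by the pretrained autoencoder's encoder.
    """
    mu1, mu2 = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    sigma1 = np.cov(real_feats, rowvar=False)
    sigma2 = np.cov(gen_feats, rowvar=False)

    diff = mu1 - mu2
    # Matrix square root of the product of the two covariances.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerics

    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```

The same formula serves all three metrics; only the feature space (motion, gesture, or expression) changes.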

bob35buaa commented 1 month ago

Thank you very much!

bob35buaa commented 1 month ago

Hi Jeremy. I also want to ask about the training process for FGD. I plan to train the autoencoder on my own dataset (in SMPL-X format) for another task. Could you share the training scripts or the training hyperparameters, such as the optimizer and learning rate?
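In case it helps others with the same question: below is a minimal, hypothetical sketch of how such an FGD feature autoencoder is commonly trained. The `MotionAutoencoder` architecture, the Adam optimizer, and the learning rate are assumptions for illustration, not the authors' actual configuration:

```python
import torch
import torch.nn as nn

class MotionAutoencoder(nn.Module):
    """Hypothetical stand-in for the EmbeddingNet; not the DiffSHEG architecture."""

    def __init__(self, pose_dim, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(pose_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, pose_dim),
        )

    def forward(self, x):
        z = self.encoder(x)          # features later used for the Fréchet distance
        return self.decoder(z), z

def train(model, loader, epochs=100, lr=1e-4, device="cuda"):
    # Assumed defaults: Adam with lr=1e-4 and a plain MSE reconstruction
    # loss are common choices for this kind of embedding network.
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for batch in loader:         # batch: (B, pose_dim) flattened SMPL-X poses
            batch = batch.to(device)
            recon, _ = model(batch)
            loss = loss_fn(recon, batch)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Once trained, the encoder half is frozen and used only to embed real and generated motions before computing the Fréchet distance sketched above.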