Svito-zar / speech-driven-hand-gesture-generation-demo

This repository contains the gesture generation model from the paper "Moving Fast and Slow" (https://www.tandfonline.com/doi/full/10.1080/10447318.2021.1883883) trained on the English dataset
https://svito-zar.github.io/audio2gestures/
Apache License 2.0

Training on English data #6

Closed pjyazdian closed 3 years ago

pjyazdian commented 3 years ago

Hi, thank you for providing your implementation. I want to train the network on the Trinity dataset. However, the official implementation is based on the Japanese dataset, and many things in the preprocessing part of your code are hard-coded for that dataset. Could you please provide the training implementation for the English dataset too?

Thanks

Payam

Svito-zar commented 3 years ago

Hi @pjyazdian , We have recently applied this model to the Trinity dataset for the GENEA Challenge. The implementation is available here: https://github.com/GestureGeneration/Speech_driven_gesture_generation_with_autoencoder/tree/GENEA_2020
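For anyone following along, checking out that implementation amounts to fetching the `GENEA_2020` branch of the linked repository (branch name and URL taken from the comment above; the `--single-branch` flag is just an optional optimization):

```shell
# Clone only the GENEA_2020 branch, which holds the Trinity/English-data version
git clone --branch GENEA_2020 --single-branch \
  https://github.com/GestureGeneration/Speech_driven_gesture_generation_with_autoencoder.git
cd Speech_driven_gesture_generation_with_autoencoder
# Confirm we are on the expected branch
git branch --show-current
```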

pjyazdian commented 3 years ago

> Hi @pjyazdian , We have recently applied this model to the Trinity dataset for the GENEA Challenge. The implementation is available here: https://github.com/GestureGeneration/Speech_driven_gesture_generation_with_autoencoder/tree/GENEA_2020

Hi Taras,

Thank you for the quick response.