
Freeform Body Motion Generation from Speech

Freeform Co-Speech Gesture Generation

This is the repository for the work "Free-form Co-Speech Gesture Generation".

Video Demo

Data & Pretrained model

Available through

Unzip everything into pose_dataset, then set Data.data_root in src/config/*.json accordingly. You should see a directory structure like this:

pose_dataset
|-videos
|   |-Speaker_A
|   |-Speaker_B
|   |-...
|   |-test_audios
|-ckpt
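Updating Data.data_root across all the configs can be scripted. A minimal sketch, assuming each config JSON has a top-level "Data" object with a "data_root" key as described above (the helper name is ours, not part of the repo):

```python
import glob
import json

def set_data_root(config_dir, data_root):
    # Point Data.data_root in every config JSON at the unzipped
    # pose_dataset directory. Assumes a top-level "Data" object.
    for path in glob.glob(f"{config_dir}/*.json"):
        with open(path) as f:
            cfg = json.load(f)
        cfg.setdefault("Data", {})["data_root"] = data_root
        with open(path, "w") as f:
            json.dump(cfg, f, indent=2)

set_data_root("src/config", "/absolute/path/to/pose_dataset")
```

Using an absolute path avoids surprises when the scripts are launched from different working directories.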

The rest of the data will be updated after I finish checking the annotations.

Inference

Generate gestures for an example audio clip:

bash demo.sh ../sample_audio/clip000040_ozfGHONpdTA.wav ../sample_audio/clip000040_ozfGHONpdTA.TextGrid
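demo.sh takes a wav/TextGrid pair. To run it over every clip in a folder, the pairs can be enumerated first; this is a sketch assuming the layout implied by the command above (each .wav has a matching .TextGrid beside it):

```python
import glob
import os

def demo_pairs(audio_dir):
    # Pair each .wav with its .TextGrid; skip clips missing a transcript.
    pairs = []
    for wav in sorted(glob.glob(os.path.join(audio_dir, "*.wav"))):
        tg = os.path.splitext(wav)[0] + ".TextGrid"
        if os.path.exists(tg):
            pairs.append((wav, tg))
    return pairs

# Hypothetical batch driver over the demo script:
# for wav, tg in demo_pairs("../sample_audio"):
#     subprocess.run(["bash", "demo.sh", wav, tg], check=True)
```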

Visualise the generated motions:

bash visualise.sh

Generate gestures for a speaker in test_audios:

cd src
bash infer.sh  \
        pose_dataset/ckpt/ckpt-99.pth \
        pose_dataset/ckpt/freeMo.json \
        <post_fix> \
        <speaker_name>

The results will be saved to pose_dataset/videos/test_audios/&lt;speaker_name&gt;/*_&lt;post_fix&gt;.json, one JSON file per audio clip containing 64 randomly generated gesture sequences.
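A quick sanity check is to count the sequences in each result file. This sketch assumes each output JSON holds its generated sequences as a top-level list or dict (64 per clip, per the note above); the exact schema may differ:

```python
import glob
import json

def count_sequences(result_glob):
    # Map each result file to the number of gesture sequences it holds.
    counts = {}
    for path in glob.glob(result_glob):
        with open(path) as f:
            data = json.load(f)
        counts[path] = len(data)
    return counts

for path, n in count_sequences("pose_dataset/videos/test_audios/*/*.json").items():
    print(f"{path}: {n} sequences")
```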

To visualise the results, run

bash visualise/visualise_all.sh <speaker_name> <post_fix>

Remember to update the file paths in these scripts to match your setup.

Training

bash train.sh

If you run into any problems, please open an issue.