DEPRECATION WARNING
This network has known stability issues during training convergence. We highly recommend using our newer and more stable implementation, available at UttaranB127/speech2affective_gestures.
This is the readme for the official code of the paper Text2Gestures: A Transformer Network for Generating Emotive Body Gestures for Virtual Agents. Please use the following citation if you find our work useful:
@inproceedings{bhattacharya2021text2gestures,
author = {Uttaran Bhattacharya and Nicholas Rewkowski and Abhishek Banerjee and Pooja Guhan and Aniket Bera and Dinesh Manocha},
title = {Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents},
booktitle = {2021 {IEEE} Conference on Virtual Reality and 3D User Interfaces (IEEE VR)},
publisher = {{IEEE}},
year = {2021}
}
Our scripts have been tested on Ubuntu 18.04 LTS with the dependencies listed in requirements.txt.
We use $BASE to refer to the base directory for this project (the directory containing main.py). Change the present working directory to $BASE.
conda create -n t2g-env python=3.7
conda activate t2g-env
pip install -r requirements.txt
Note: You might need to manually uninstall and reinstall matplotlib and kiwisolver for them to work. Similarly, you might need to manually uninstall and reinstall numpy for torch to work.
We have scraped the full dataset and made it available at this link.
If downloading from our anonymous link, unzip the downloaded file to a directory named "data", located at the same level as the project root (i.e., the project root and the data directory are siblings).
We also use the NRC-VAD lexicon to obtain the VAD representations of the words in the text. It can be downloaded from the original web page, or directly using this link. Unzip the downloaded zip file in the same "data" directory.
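As a sketch of how the lexicon can be consumed: the published NRC-VAD file is tab-separated with word, valence, arousal, and dominance columns; the function name, in-memory structure, and sample values below are our own illustration, not part of the repository.

```python
from io import StringIO

def load_vad_lexicon(fileobj):
    """Parse NRC-VAD rows of the form: word<TAB>valence<TAB>arousal<TAB>dominance."""
    vad = {}
    for line in fileobj:
        parts = line.rstrip("\n").split("\t")
        if len(parts) != 4:
            continue  # skip malformed rows
        word, v, a, d = parts
        try:
            vad[word] = (float(v), float(a), float(d))
        except ValueError:
            continue  # skip the header row
    return vad

# Tiny inline sample in the NRC-VAD layout (these numbers are made up for illustration)
sample = "word\tvalence\tarousal\tdominance\nhappy\t0.960\t0.732\t0.850\n"
lexicon = load_vad_lexicon(StringIO(sample))
```

In the actual pipeline you would pass an open handle to the unzipped lexicon file in the "data" directory instead of the inline sample.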
Run the main.py file with the appropriate command-line arguments.
python main.py <args list>
The full list of arguments is available inside main.py. For any argument not specified on the command line, the code uses its default value.
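This default-fallback behavior is the standard argparse pattern. A minimal sketch follows; apart from --train, which this readme mentions below, the argument names and default values here are hypothetical, and main.py defines the authoritative list.

```python
import argparse

# Hypothetical subset of the arguments; see main.py for the real list.
parser = argparse.ArgumentParser(description="Text2Gestures training/evaluation")
parser.add_argument('--train', type=str, default='True',
                    help='set to False to skip training and only evaluate')
parser.add_argument('--num-epochs', type=int, default=500,
                    help='hypothetical numeric argument with a default')

# Any argument omitted from the command line keeps its default value.
args = parser.parse_args(['--train', 'False'])
```

Here --num-epochs was not supplied, so args.num_epochs falls back to its default while args.train takes the value given on the command line.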
On running main.py, the code trains the network and generates sample gestures post-training.
We also provide a pretrained model for download at this link. If using this model, save it inside the directory $BASE/models/mpi (create the directory if it does not exist). Set the command-line argument --train to False to skip training and use this model directly for evaluation. The generated samples are stored in the automatically created render directory. We generate all 145 test samples by default and also store the corresponding ground-truth samples for comparison. We have verified that the samples, stored in .bvh files, are compatible with Blender.
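For a quick sanity check on generated files: a .bvh file contains a HIERARCHY section followed by a MOTION section that declares the frame count and frame time. The small helper below (our own illustration, not part of the repository) reads those two fields.

```python
def bvh_motion_info(text):
    """Return (frames, frame_time) parsed from the MOTION section of a BVH string."""
    frames = frame_time = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Frames:"):
            frames = int(line.split(":")[1])
        elif line.startswith("Frame Time:"):
            frame_time = float(line.split(":")[1])
    return frames, frame_time

# Minimal BVH content for illustration (a real render file has a full skeleton)
sample_bvh = """HIERARCHY
ROOT Hips
{
  OFFSET 0.0 0.0 0.0
  CHANNELS 3 Xposition Yposition Zposition
  End Site
  {
    OFFSET 0.0 1.0 0.0
  }
}
MOTION
Frames: 2
Frame Time: 0.033333
0.0 0.0 0.0
0.0 0.1 0.0
"""
frames, dt = bvh_motion_info(sample_bvh)
```

Running this on a file from the render directory gives a fast check that the motion data was written before importing it into Blender.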