This is the README for the official code of our AAAI 2024 paper "DanceAnyWay: Synthesizing Beat-Guided 3D Dances With Randomized Temporal Contrastive Learning". You can find the paper draft here. Camera-ready version coming soon!
RPS code is being released soon!
Our scripts have been tested on Ubuntu 20.04 LTS with Python 3.7.
Clone this repository.
[Optional but recommended] Create a conda environment for the project and activate it:
conda create -n daw-env python=3.7
conda activate daw-env
Install PyTorch following the official instructions.
Install all other package requirements:
pip install -r requirements.txt
Note: You might need to manually uninstall and reinstall `numpy` for `torch` to work. Similarly, you might need to manually uninstall and reinstall `matplotlib` and `kiwisolver` for them to work.
Download the AIST++ dataset here and store it in `data_preprocessing/Data`.
Run the following command in the data_preprocessing directory:
python process_aist_plusplus_final.py
Note: Data extraction may take a few hours depending on your hardware.
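Since the model is beat-guided, preprocessing conceptually pairs each motion clip with the beat structure of its music. As a rough illustration only, here is a dependency-free sketch of naive beat picking from a frame-wise energy envelope; the actual pipeline would typically rely on a dedicated beat tracker (e.g. from `librosa`), and all names below are illustrative:

```python
import numpy as np

def naive_beat_times(audio, sr, frame_len=1024, threshold=0.5):
    """Pick beat candidates as peaks of a frame-wise RMS energy envelope.

    Illustrative only; a real pipeline would use a proper beat tracker.
    """
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sqrt((frames ** 2).mean(axis=1))  # RMS energy per frame
    if energy.max() == 0:
        return np.array([])
    energy = energy / energy.max()
    padded = np.concatenate(([0.0], energy, [0.0]))  # zero-pad both ends
    # A frame is a beat candidate if it is a local maximum above threshold.
    peaks = [i for i in range(n_frames)
             if energy[i] > threshold
             and padded[i] <= energy[i]      # >= previous frame
             and energy[i] > padded[i + 2]]  # > next frame
    return np.array(peaks) * frame_len / sr  # frame index -> seconds

# Synthesize a 4-second click track at 120 BPM (one click every 0.5 s).
sr = 16000
click = np.zeros(4 * sr)
for k in range(8):
    start = int(k * 0.5 * sr)
    click[start : start + 400] = 1.0  # short rectangular click
beats = naive_beat_times(click, sr)
print(np.round(beats, 2))
```

The detected times land within one analysis frame (~64 ms at this setting) of the true beats at 0.0, 0.5, ..., 3.5 s.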
Download the presequences file [here](Google drive link).
We provide the pretrained models [here](Google drive link). Save each model inside its respective `train_results` directory as `checkpoint.pt`.
To train the models, run the following command in each of the respective directories:
python train.py
Note: To change the model or training parameters, modify the config file provided with each model.
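For illustration, a parameter override might look like the following. This is a hypothetical sketch assuming a JSON-style config; the actual config format and key names in this repo may differ:

```python
import json
import os
import tempfile

# Hypothetical training configuration; the real config files in this
# repo may use a different format and different key names.
config = {"batch_size": 32, "learning_rate": 1e-4, "num_epochs": 500}

# Write it out, then override one parameter before launching training.
path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)

with open(path) as f:
    cfg = json.load(f)
cfg["learning_rate"] = 5e-5  # e.g. lower the learning rate
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)

with open(path) as f:
    print(json.load(f)["learning_rate"])
```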
To test the models on the test dataset, run the following command in each of the respective directories:
python test.py
Note: Results will be generated in a new directory named `test_results` by default. This can be changed via a command-line argument.
To evaluate the model on in-the-wild music, run the following command in the `dance_generator` directory:
python evaluate.py --music_file music.wav
Note: By default, the model generates a dance for the first 7 seconds of the music. For continuous generation, run the following command instead:
python evaluate.py --music_file music.wav --infinite_gen True
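Continuous generation can be pictured as stitching fixed-length segments together, seeding each new segment with the tail of the previous one. The sketch below is purely illustrative: `generate_segment` is a stand-in for the trained generator, the 7-second window comes from the default above, and the frame rate, joint count, and seeding scheme are assumptions that may differ from the actual code:

```python
import numpy as np

SEGMENT_SECONDS = 7   # default generation window (from the README)
FPS = 30              # assumed motion frame rate
NUM_JOINTS = 24       # assumed joint count (e.g. an SMPL-style skeleton)
SEED_FRAMES = 30      # frames carried over to seed the next segment (assumption)

def generate_segment(seed_poses, n_frames):
    """Stand-in for the trained dance generator.

    Returns an (n_frames, NUM_JOINTS, 3) pose sequence; here it just
    continues from the last seed pose with small random motion.
    """
    rng = np.random.default_rng(0)
    start = seed_poses[-1] if seed_poses is not None else np.zeros((NUM_JOINTS, 3))
    steps = rng.normal(scale=0.01, size=(n_frames, NUM_JOINTS, 3))
    return start + np.cumsum(steps, axis=0)

def generate_continuous(total_seconds):
    """Stitch 7-second segments until total_seconds of motion is produced."""
    frames_per_segment = SEGMENT_SECONDS * FPS
    segments, seed, generated = [], None, 0
    while generated < total_seconds * FPS:
        seg = generate_segment(seed, frames_per_segment)
        segments.append(seg)
        seed = seg[-SEED_FRAMES:]  # carry the tail over as the next seed
        generated += frames_per_segment
    return np.concatenate(segments, axis=0)[: total_seconds * FPS]

motion = generate_continuous(20)  # 20 seconds of motion from 7-second segments
print(motion.shape)
```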
Please use the following citation if you find our work useful:
@inproceedings{bhattacharya2024danceanyway,
author = {Bhattacharya, Aneesh and Paranjape, Manas and Bhattacharya, Uttaran and Bera, Aniket},
title = {DanceAnyWay: Synthesizing Beat-Guided 3D Dances With Randomized Temporal Contrastive Learning},
year = {2024},
publisher = {Association for the Advancement of Artificial Intelligence},
address = {New York, NY, USA},
booktitle = {Proceedings of the 38th Annual AAAI Conference on Artificial Intelligence},
series = {AAAI '24}
}