This repository is a refactored / updated version of stable-audio-tools,
an open-source codebase for audio/music generative models
originally released by Stability AI.
This repository contains the following additional features:
- Refactoring of the original stable-audio-tools for improved readability and usability
- Instructions for training Stable Audio 2.0
- Instructions for using Stable Audio Open 1.0

Stability AI has now open-sourced the pre-trained model for Stable Audio.
For detailed instructions on how to use Stable Audio Open, please refer to this document if you are interested.
To run the training scripts or inference code, you'll need to clone this repository, navigate to the root directory, and then execute the pip
command as follows:
$ git clone https://github.com/yukara-ikemiya/friendly-stable-audio-tools.git
$ cd friendly-stable-audio-tools
$ pip install .
$ # you may need to execute this to avoid Accelerate import error
$ pip uninstall -y transformer-engine
To simplify setting up the training environment, I recommend using container systems such as Docker
or Singularity
instead of installing dependencies on each GPU machine. Below are the steps for creating Docker and Singularity containers.
All example scripts are stored in the container folder.
Please make sure that Docker and Singularity are installed in advance.
$ # create a Docker image
$ NAME=friendly-stable-audio-tools
$ docker build -t ${NAME} -f ./container/${NAME}.Dockerfile .
$ # convert a Docker image to a Singularity container
$ singularity build friendly-stable-audio-tools.sif docker-daemon://friendly-stable-audio-tools
By running the above script, friendly-stable-audio-tools.sif
should be created in the working directory.
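To quickly check that the container and GPU pass-through work, a minimal sanity check like this should suffice (assuming an NVIDIA GPU and the --nv flag):
$ singularity exec --nv friendly-stable-audio-tools.sif \
    python3 -c "import torch; print(torch.cuda.is_available())"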
A basic Gradio interface is provided to test out trained models.
For example, to create an interface for the stable-audio-open-1.0
model, once you've accepted the terms for the model on Hugging Face, you can run:
$ python3 ./run_gradio.py --pretrained-name stabilityai/stable-audio-open-1.0
If you need more detailed instructions on Stable Audio Open, I recommend referring to the Gradio interface part of the Stable Audio Open documentation.
The run_gradio.py
script accepts the following command line arguments:
- --pretrained-name PRETRAINED_NAME (optional): Hugging Face repository name of a pre-trained model (e.g. stabilityai/stable-audio-open-1.0). model.safetensors is prioritized over model.ckpt in the repo. When this is specified, model-config and ckpt-path are ignored.
- --model-config MODEL_CONFIG (optional): path to the model config file for a local model
- --ckpt-path CKPT_PATH (optional): path to an unwrapped model checkpoint file for a local model
- --pretransform-ckpt-path PRETRANSFORM_CKPT_PATH (optional): path to an unwrapped pretransform (e.g. VAE) checkpoint
- --username USERNAME / --password PASSWORD (optional): set a login for the Gradio demo
- --model-half (optional): load the model weights in half precision
- --tmp-dir TMP_DIR (optional): directory used for temporary files
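For example, to serve a locally trained model with an unwrapped checkpoint instead of a Hugging Face repository, a command along these lines should work (paths are placeholders):
$ python3 ./run_gradio.py \
    --model-config /path/to/model/config.json \
    --ckpt-path /path/to/unwrapped/model.ckpt \
    --model-half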
The training code also requires a Weights & Biases account to log the training outputs and demos. Create an account and log in with:
$ wandb login
Alternatively, you can pass an API key via the environment variable WANDB_API_KEY.
(You can obtain the API key from https://wandb.ai/authorize after logging in to your account.)
$ WANDB_API_KEY="12345x6789y..."
This method is convenient when you want to execute the code using containers such as Docker or Singularity.
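For example, when using the Singularity container built above, the key can be passed through with the --env option (a sketch; the train.py arguments are placeholders):
$ WANDB_API_KEY="12345x6789y..."
$ singularity exec --nv --env WANDB_API_KEY=$WANDB_API_KEY friendly-stable-audio-tools.sif \
    python3 train.py --dataset-config /path/to/dataset/config --model-config /path/to/model/config --name my_experiment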
Before starting your training run, you have to prepare the following two configuration files: a model config file and a dataset config file.
For more information about these, refer to the Configuration section below.
To start a training run, run the train.py
script in the repo root with:
$ python3 train.py --dataset-config /path/to/dataset/config --model-config /path/to/model/config --name my_experiment
The --name
parameter will set the project name for your Weights and Biases run.
Fine-tuning involves resuming a training run from a pre-trained checkpoint.
- To resume training from a wrapped checkpoint, pass the checkpoint path to train.py with the --ckpt-path flag.
- To start a fresh training run from a pre-trained unwrapped model, pass the unwrapped checkpoint to train.py with the --pretrained-ckpt-path flag (see the example below).
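For example (paths are placeholders), the two cases look like this:
$ # resume training from a wrapped checkpoint
$ python3 train.py --dataset-config /path/to/dataset/config --model-config /path/to/model/config \
    --name my_experiment --ckpt-path /path/to/wrapped/last.ckpt
$ # fine-tune from a pre-trained, unwrapped checkpoint
$ python3 train.py --dataset-config /path/to/dataset/config --model-config /path/to/model/config \
    --name my_finetune --pretrained-ckpt-path /path/to/unwrapped/model.ckpt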
stable-audio-tools
uses PyTorch Lightning to facilitate multi-GPU and multi-node training.
When a model is being trained, it is wrapped in a "training wrapper", which is a pl.LightningModule
that contains all of the relevant objects needed only for training. That includes things like discriminators for autoencoders, EMA copies of models, and all of the optimizer states.
The checkpoint files created during training include this training wrapper, which greatly increases the size of the checkpoint file.
The unwrap_model.py script takes in a wrapped model checkpoint and saves a new checkpoint file containing only the model itself.
It can be run from the repo root with:
$ python3 unwrap_model.py --model-config /path/to/model/config --ckpt-path /path/to/wrapped/ckpt.ckpt --name /new/path/to/new_ckpt_name
Unwrapped model checkpoints are required for:
- inference scripts (e.g. the Gradio interface)
- using a model as a pretransform for another model (e.g. using a trained VAE for latent diffusion training)
- fine-tuning a pre-trained model via the --pretrained-ckpt-path flag
Training and inference code for stable-audio-tools
is based around JSON configuration files that define model hyperparameters, training settings, and information about your training dataset.
The model config file defines all of the information needed to load a model for training or inference. It also contains the training configuration needed to fine-tune a model or train from scratch.
The following properties are defined in the top level of the model configuration:
- model_type : the type of model being defined; one of "autoencoder", "diffusion_uncond", "diffusion_cond", "diffusion_cond_inpaint", "diffusion_autoencoder", "lm"
- sample_size : the length, in samples, of the audio provided to the model during training
- sample_rate : the sample rate, in Hz, of the training and generated audio
- audio_channels : the number of audio channels (e.g. 2 for stereo)
- model : the specific configuration for the model being defined; varies based on model_type
- training : the training configuration for the model; varies based on model_type. Provides parameters for training as well as demos.
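For illustration, here is a minimal sketch of the top-level structure (the values are placeholders, not recommended settings):
{
  "model_type": "diffusion_cond",
  "sample_size": 2097152,
  "sample_rate": 44100,
  "audio_channels": 2,
  "model": { ... },
  "training": { ... }
}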
stable-audio-tools currently supports two kinds of data sources: local directories of audio files, and WebDataset datasets stored in Amazon S3. More information can be found in the dataset config documentation.
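As an illustration, a dataset config for a local directory of audio files might look like this sketch (field values are placeholders; local_training_example.json in the configs directory and the dataset config documentation are the authoritative references):
{
  "dataset_type": "audio_dir",
  "datasets": [
    {
      "id": "my_dataset",
      "path": "/path/to/dataset/"
    }
  ],
  "random_crop": true
}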
Additional optional flags for train.py
include:
- --config-file : path to the repository's default config file; required if running train.py from a directory other than the repo root
- --pretransform-ckpt-path : path to an unwrapped pretransform (e.g. autoencoder) checkpoint to load before training
- --save-dir : directory in which to save model checkpoints
- --checkpoint-every : number of training steps between saved checkpoints
- --batch-size : batch size per GPU
- --num-gpus : number of GPUs per node
- --num-nodes : number of nodes for multi-node training
- --accum-batches : number of batches for gradient accumulation
- --strategy : multi-GPU training strategy; setting this to deepspeed will enable DeepSpeed ZeRO Stage 2. Defaults to ddp if --num-gpus > 1, else None
- --precision : floating-point precision to use during training
- --num-workers : number of data-loader worker processes
- --seed : random seed for training
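For example (paths and values are placeholders), a single-node multi-GPU run with gradient accumulation could be launched like this:
$ python3 train.py \
    --dataset-config /path/to/dataset/config \
    --model-config /path/to/model/config \
    --name my_experiment \
    --save-dir /path/to/checkpoints \
    --num-gpus 4 \
    --batch-size 8 \
    --accum-batches 2 \
    --checkpoint-every 10000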
Stable Audio 2.0
To use the CLAP encoder for conditioning music generation, you have to prepare a pretrained CLAP checkpoint file:
1. Download the pretrained checkpoint (music_audioset_epoch_15_esc_90.14.pt) from the LAION CLAP repository.
2. Set the checkpoint path in the model config file of Stable Audio 2.0 as follows:

= stable_audio_2_0.json =
...
"model": {
...
"conditioning": {
"configs": [
{
...
"config": {
...
"clap_ckpt_path": "ckpt/clap/music_audioset_epoch_15_esc_90.14.pt",
...
Since Stable Audio uses text prompts as conditioning for music generation, you have to prepare them as metadata in addition to the audio data.
When using a dataset in a local environment, I support the use of metadata in JSON format as follows.
- prompt : a text prompt describing the audio file; required for training of Stable Audio. Each metadata file should be placed next to its corresponding audio file with the same base name (see the directory layout below).

= music_2.json =
{
"prompt": "This is an electronic song sending positive vibes."
}
.
└── dataset/
├── music_1.wav
├── music_1.json
├── music_2.wav
├── music_2.json
└── ...
As the 1st stage of Stable Audio 2.0, you'll train a VAE-GAN, which is a compression model for audio signals.
The model config file for the VAE-GAN is placed in the configs directory. As for the dataset configuration, please prepare a dataset config file corresponding to your own dataset.
Once you prepare configuration files, you can execute a training job like this:
CONTAINER_PATH="/path/to/sif/friendly-stable-audio-tools.sif"
ROOT_DIR="/path/to/friendly-stable-audio-tools/"
DATASET_DIR="/path/to/your/dataset/"
OUTPUT_DIR="/path/to/output/directory/"
MODEL_CONFIG="stable_audio_tools/configs/model_configs/autoencoders/stable_audio_2_0_vae.json"
DATASET_CONFIG="stable_audio_tools/configs/dataset_configs/local_training_example.json"
BATCH_SIZE=10 # WARNING : This is batch size per GPU
WANDB_API_KEY="12345x6789y..."
PORT=12345
# Singularity container case
# NOTE: Please change each configuration as you like
singularity exec --nv --pwd $ROOT_DIR -B $ROOT_DIR -B $DATASET_DIR \
--env WANDB_API_KEY=$WANDB_API_KEY \
${CONTAINER_PATH} \
torchrun --nproc_per_node gpu --master_port ${PORT} \
${ROOT_DIR}/train.py \
--dataset-config ${DATASET_CONFIG} \
--model-config ${MODEL_CONFIG} \
--name "vae_training" \
--num-gpus 8 \
--batch-size ${BATCH_SIZE} \
--num-workers 8 \
--save-dir ${OUTPUT_DIR}
As described in the unwrapping-a-model section, after completing the VAE training, you need to unwrap the model checkpoint before using it in the next training stage.
CKPT_PATH="/path/to/wrapped_ckpt/last.ckpt"
# NOTE: the file extension ".ckpt" will be automatically appended to OUTPUT_PATH
OUTPUT_PATH="/path/to/output_name/unwrapped_last"
singularity exec --nv --pwd $ROOT_DIR -B $ROOT_DIR \
--env WANDB_API_KEY=$WANDB_API_KEY \
${CONTAINER_PATH} \
torchrun --nproc_per_node gpu --master_port ${PORT} \
${ROOT_DIR}/unwrap_model.py \
--model-config ${MODEL_CONFIG} \
--ckpt-path ${CKPT_PATH} \
--name ${OUTPUT_PATH}
Once you have finished the VAE training, you might want to test and evaluate the reconstruction quality of the trained model.
I support reconstruction of audio files in a directory with reconstruct_audios.py,
and you can use the reconstructed audio for your evaluation.
# Unwrapped VAE checkpoint from the previous (unwrapping) step
UNWRAP_CKPT_PATH="/path/to/output_name/unwrapped_last.ckpt"
AUDIO_DIR="/path/to/original_audio/"
OUTPUT_DIR="/path/to/output_audio/"
FRAME_DURATION=1.0 # [sec]
OVERLAP_RATE=0.01
BATCH_SIZE=50
singularity exec --nv --pwd $ROOT_DIR -B $ROOT_DIR -B $DATASET_DIR \
--env WANDB_API_KEY=$WANDB_API_KEY \
${CONTAINER_PATH} \
torchrun --nproc_per_node gpu --master_port ${PORT} \
${ROOT_DIR}/reconstruct_audios.py \
--model-config ${MODEL_CONFIG} \
--ckpt-path ${UNWRAP_CKPT_PATH} \
--audio-dir ${AUDIO_DIR} \
--output-dir ${OUTPUT_DIR} \
--frame-duration ${FRAME_DURATION} \
--overlap-rate ${OVERLAP_RATE} \
--batch-size ${BATCH_SIZE}
As the 2nd stage of Stable Audio 2.0, you'll train a DiT (Diffusion Transformer), which is a generative model in the latent domain.
Before this part, please make sure that you have finished the VAE (Stage-1) training and prepared an unwrapped checkpoint of it.
Now, you can train a DiT model as follows:
CONTAINER_PATH="/path/to/sif/friendly-stable-audio-tools.sif"
ROOT_DIR="/path/to/friendly-stable-audio-tools/"
DATASET_DIR="/path/to/your/dataset/"
OUTPUT_DIR="/path/to/output/directory/"
MODEL_CONFIG="stable_audio_tools/configs/model_configs/txt2audio/stable_audio_2_0.json"
DATASET_CONFIG="stable_audio_tools/configs/dataset_configs/local_training_example.json"
# Pretrained checkpoint of VAE (Stage-1) model
PRETRANSFORM_CKPT="/path/to/vae_ckpt/unwrapped_last.ckpt"
BATCH_SIZE=10 # WARNING : This is batch size per GPU
NUM_GPUS=8 # Number of GPUs per node
WANDB_API_KEY="12345x6789y..."
PORT=12345
singularity exec --nv --pwd $ROOT_DIR -B $ROOT_DIR -B $DATASET_DIR \
--env WANDB_API_KEY=$WANDB_API_KEY \
${CONTAINER_PATH} \
torchrun --nproc_per_node gpu --master_port ${PORT} \
${ROOT_DIR}/train.py \
--dataset-config ${DATASET_CONFIG} \
--model-config ${MODEL_CONFIG} \
--pretransform-ckpt-path ${PRETRANSFORM_CKPT} \
--name "dit_training" \
--num-gpus ${NUM_GPUS} \
--batch-size ${BATCH_SIZE} \
--save-dir ${OUTPUT_DIR}