
VAE Tacotron

An implementation of VAE Tacotron speech synthesis in TensorFlow, based on "Learning latent representations for style control and transfer in end-to-end speech synthesis" (https://arxiv.org/abs/1812.04342).

Quick Start

Installing dependencies

  1. Install Python 3.

  2. Install TensorFlow for your platform. For better performance, install a build with GPU support if a GPU is available. This code works with TensorFlow 1.3 and later 1.x releases.
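
    For example, on a machine with a CUDA-capable GPU (the exact version pin below is only a suggestion; any 1.x release from 1.3 onward should work):

    pip install "tensorflow-gpu>=1.3,<2.0"
    • Use pip install "tensorflow>=1.3,<2.0" for a CPU-only install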

  3. Install requirements:

    pip install -r requirements.txt
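
    If you prefer to keep the dependencies isolated, a virtual environment works as well (optional; not required by the repository):

    python3 -m venv venv
    source venv/bin/activate
    pip install -r requirements.txt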
  4. Run the demo server, pointing --checkpoint at a trained model:

    python3 demo_server.py --checkpoint /tmp/tacotron-20180906/model.ckpt
  5. Point your browser at localhost:9000

    • Type what you want to synthesize
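
    If the demo server exposes a /synthesize endpoint (an assumption about this fork's demo_server.py; the web page above is the supported interface), you can also fetch audio directly:

    curl -G http://localhost:9000/synthesize --data-urlencode "text=Hello world." > hello.wav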

Training

  1. Download a speech dataset.

    The following are supported out of the box:

    • LJ Speech
    • Blizzard 2012

    You can use other datasets if you convert them to the right format. See TRAINING_DATA.md for more info.

  2. Unpack the dataset into ~/tacotron

    After unpacking, your tree should look like this for LJ Speech:

    tacotron
     |- LJSpeech-1.1
         |- metadata.csv
         |- wavs

    or like this for Blizzard 2012:

    tacotron
     |- Blizzard2012
         |- ATrampAbroad
         |   |- sentence_index.txt
         |   |- lab
         |   |- wav
         |- TheManThatCorruptedHadleyburg
             |- sentence_index.txt
             |- lab
             |- wav
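
    For example, for LJ Speech the unpack step might look like this (assuming the standard LJSpeech-1.1.tar.bz2 archive):

    mkdir -p ~/tacotron
    tar xjf LJSpeech-1.1.tar.bz2 -C ~/tacotron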
  3. Preprocess the data

    python3 preprocess.py --dataset ljspeech
    • Use --dataset blizzard for Blizzard data
  4. Train a model

    python3 train.py

    Tunable hyperparameters are found in hparams.py. You can adjust these at the command line using the --hparams flag, for example --hparams="batch_size=16,outputs_per_step=2". Hyperparameters should generally be set to the same values at both training and eval time. The default hyperparameters are recommended for LJ Speech and other English-language data. See TRAINING_DATA.md for other languages.
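
    For example, to train with a smaller batch size and two outputs per step:

    python3 train.py --hparams="batch_size=16,outputs_per_step=2"
    • Pass the same --hparams value at synthesis time (see step 6 below) so the model is rebuilt with matching settings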

  5. Monitor with Tensorboard (optional)

    tensorboard --logdir ~/tacotron/logs-tacotron

    The trainer dumps audio and alignments every 1000 steps. You can find these in ~/tacotron/logs-tacotron.

  6. Synthesize from a checkpoint

    python3 demo_server.py --checkpoint ~/tacotron/logs-tacotron/model.ckpt-185000

    Replace "185000" with the checkpoint number that you want to use, then open a browser to localhost:9000 and type what you want it to speak. Alternatively, you can run eval.py at the command line:

    python3 eval.py --checkpoint ~/tacotron/logs-tacotron/model.ckpt-185000 --reference_audio='test.wav'
    

    If you set the --hparams flag when training, set the same value here.
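
    For example, if you trained with --hparams="outputs_per_step=2", a matching synthesis call might look like this (assuming eval.py accepts the same --hparams flag as train.py):

    python3 eval.py --checkpoint ~/tacotron/logs-tacotron/model.ckpt-185000 --reference_audio='test.wav' --hparams="outputs_per_step=2"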