VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. This repository provides a VITS model for Japanese, customized from the original VITS implementation and running on PyTorch 2.0.0.
We also provide the pretrained models.
Figure: VITS at training | VITS at inference.
Download the basic5000 dataset and move it to the `jp_dataset` folder.

```sh
# Preprocessing (g2p) for your own datasets. Preprocessed phonemes for the Japanese dataset are already provided.
python preprocess.py --text_index 1 --filelists filelists/jp_audio_text_train_filelist.txt filelists/jp_audio_text_val_filelist.txt filelists/jp_audio_text_test_filelist.txt
```
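If you need to generate phonemes for your own Japanese text, a common approach is grapheme-to-phoneme conversion with `pyopenjtalk`. The cleaner used by this repository may rely on a different backend, so treat the following as a minimal sketch; the filelist entry format (audio path and text separated by `|`) follows the upstream VITS convention and is an assumption here.

```python
# Minimal Japanese g2p sketch (assumes pyopenjtalk; this repo's text cleaner may differ).
import pyopenjtalk


def japanese_to_phonemes(text: str) -> str:
    # pyopenjtalk.g2p returns space-separated phonemes, e.g. "k o N n i ch i w a".
    return pyopenjtalk.g2p(text, kana=False)


if __name__ == "__main__":
    # Hypothetical filelist entry: "<audio path>|<raw text>".
    line = "jp_dataset/wavs/BASIC5000_0001.wav|水をマレーシアから買わなくてはならないのです。"
    path, text = line.split("|")
    print(path, japanese_to_phonemes(text))
```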
Training example:

```sh
# JP Speech
python train.py -c configs/jp_base.json -m jp_base
```
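The training hyperparameters live in `configs/jp_base.json`. Below is a small sketch for inspecting the fields that most often need adjusting before a run; it assumes the config follows the upstream VITS layout (`data.sampling_rate`, `train.batch_size`, etc.), which may differ in this fork.

```python
# Sketch: inspect a VITS-style config before training (field names assume the upstream VITS layout).
import json

with open("configs/jp_base.json", encoding="utf-8") as f:
    cfg = json.load(f)

print("sampling rate:", cfg["data"]["sampling_rate"])   # must match the wav files in jp_dataset
print("batch size:   ", cfg["train"]["batch_size"])     # lower this if you run out of GPU memory
print("epochs:       ", cfg["train"]["epochs"])
print("text cleaners:", cfg["data"]["text_cleaners"])   # should point at the Japanese cleaner
```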
To get the pretrained model for Japanese:

```sh
sh startup.sh
```
See `vits_apply.ipynb` for an inference example, or run `streamlit run app.py` to view the demo on Streamlit sharing.
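For scripted inference outside the notebook, the sketch below follows the upstream VITS API (`SynthesizerTrn`, `utils.load_checkpoint`, `text_to_sequence`); the module names and the checkpoint path are assumptions based on the original VITS repository and may differ in this fork.

```python
# Minimal inference sketch, assuming this fork keeps the upstream VITS API;
# the checkpoint path below is a placeholder.
import torch

import commons
import utils
from models import SynthesizerTrn
from text import text_to_sequence
from text.symbols import symbols

hps = utils.get_hparams_from_file("configs/jp_base.json")

net_g = SynthesizerTrn(
    len(symbols),
    hps.data.filter_length // 2 + 1,
    hps.train.segment_size // hps.data.hop_length,
    **hps.model,
)
net_g.eval()
utils.load_checkpoint("logs/jp_base/G_latest.pth", net_g, None)  # hypothetical checkpoint path


def get_text(text, hps):
    # Convert raw text to a symbol-id sequence, as in the upstream inference notebook.
    text_norm = text_to_sequence(text, hps.data.text_cleaners)
    if hps.data.add_blank:
        text_norm = commons.intersperse(text_norm, 0)
    return torch.LongTensor(text_norm)


stn_tst = get_text("こんにちは。", hps)
with torch.no_grad():
    x_tst = stn_tst.unsqueeze(0)
    x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
    audio = net_g.infer(
        x_tst, x_tst_lengths, noise_scale=0.667, noise_scale_w=0.8, length_scale=1.0
    )[0][0, 0].cpu().numpy()
# `audio` is a float waveform at hps.data.sampling_rate; write it out with soundfile/scipy as needed.
```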