SayaSS / vits-finetuning

Fine-Tuning your VITS model using a pre-trained model
MIT License

Text cleaner from https://github.com/CjangCjengh/vits

Original repo: https://github.com/jaywalnut310/vits

Online training and inference

Colab: see vits-finetuning

How to use

Python 3.7 is recommended.

Only Japanese datasets can be used for fine-tuning in this repo.

Clone this repository

git clone https://github.com/SayaSS/vits-finetuning.git

Install requirements

pip install -r requirements.txt

Download pre-trained model

If you need to customize "n_speakers", please replace the pre-trained model with these two.

Create datasets

Each line of a filelist has the format `path/to/audio.wav|speaker_id|text`, for example:

dataset/001.wav|10|こんにちは。

For complete examples, please see filelists/miyu_train.txt and filelists/miyu_val.txt.
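As a quick sanity check before training, the three-field filelist lines can be parsed like this (a minimal sketch, not part of the repo; `parse_filelist_line` is a hypothetical helper):

```python
def parse_filelist_line(line):
    """Split one 'wav_path|speaker_id|text' filelist line.

    Uses maxsplit=2 so that any '|' characters inside the text
    itself are left untouched. Raises ValueError on malformed lines.
    """
    wav_path, speaker_id, text = line.rstrip("\n").split("|", 2)
    return wav_path, int(speaker_id), text

# Example line in the format used by filelists/miyu_train.txt:
wav, sid, text = parse_filelist_line("dataset/001.wav|10|こんにちは。")
print(wav, sid, text)
```

Running this over your whole filelist before preprocessing catches missing fields or non-numeric speaker IDs early.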

Preprocess

python preprocess.py --filelists path/to/filelist_train.txt path/to/filelist_val.txt

Edit "training_files" and "validation_files" in configs/config.json

Train

# Multiple speakers
python train_ms.py -c configs/config.json -m checkpoints