Non-parallel voice conversion based on vector-quantized variational autoencoder with adversarial learning
$ git clone https://github.com/k2kobayashi/crank.git
$ cd crank/tools
$ make
$ sudo apt install ffmpeg # mosnet dependency
You can listen to several converted audio samples of the VCC 2018 dataset at the URL.
crank provides a recipe for the Voice Conversion Challenge 2020 (VCC 2020). The recipe consists of 7 stages that implement non-parallel voice conversion.
Note that the dataset is released only to the participants (as of 2020/05/26).
$ cd egs/vaevc/vcc2020v1
$ mkdir downloads && cd downloads
$ mv <path_to_zip>/vcc2020_{training,evaluation}.zip .
$ unzip vcc2020_training.zip
$ unzip vcc2020_evaluation.zip
Because the challenge defines the training and evaluation sets, the recipe already includes the necessary configuration files. You therefore start from the 2nd stage:
$ ./run.sh --n_jobs 10 --stage 2 --stop_stage 5
where n_jobs indicates the number of CPU cores used in the training.
Configurations are defined in conf/mlfb_vqvae.yml. The following are explanations of representative parameters.
When you create your own recipe, be careful to set feature-extraction parameters such as fs, fftl, hop_size, framems, shiftms, and mcep_alpha. These parameters depend on the sampling frequency.
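As an illustrative sketch only (the values below are assumptions for 16 kHz audio, not crank's shipped defaults), these parameters might be set in conf/mlfb_vqvae.yml along these lines:

```yaml
# Hypothetical feature settings for 16 kHz audio -- example values only;
# check conf/mlfb_vqvae.yml for the real defaults.
feature:
  fs: 16000         # sampling frequency [Hz]
  fftl: 1024        # FFT length [samples]
  framems: 25       # analysis frame length [ms] (400 samples at 16 kHz)
  shiftms: 5        # frame shift [ms]
  hop_size: 80      # frame shift [samples] = fs * shiftms / 1000
  mcep_alpha: 0.41  # all-pass constant for mel-cepstrum; ~0.41-0.42 is typical for 16 kHz
```

If you change fs, the sample-based values (fftl, hop_size) and mcep_alpha must be updated consistently with it.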
You can set feat_type to either mlfb or mcep.
If you choose mlfb, the converted waveforms are generated by either the Griffin-Lim vocoder or the ParallelWaveGAN vocoder.
If you choose mcep, the converted waveforms are generated by the WORLD vocoder (i.e., excitation generation and MLSA filtering).
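For instance, a hypothetical excerpt selecting mel-filterbank features (the surrounding key layout here is an assumption; consult conf/mlfb_vqvae.yml for the actual structure):

```yaml
# Hypothetical excerpt -- switching between the two feature types.
feature:
  feat_type: mlfb   # "mlfb": Griffin-Lim or ParallelWaveGAN vocoder
                    # "mcep": WORLD vocoder (excitation generation + MLSA filtering)
```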
We support training with vqvae, lsgan, cyclegan, and stargan using the same generator network.
vqvae: default vqvae setting
lsgan: vqvae with adversarial learning
cyclegan: vqvae with adversarial learning and cyclic constraints
stargan: vqvae with adversarial learning similar to cyclegan

Please copy the template directory to start creating your own recipe.
$ cp -r egs/vaevc/template egs/vaevc/<new_recipe>
$ cd egs/vaevc/<new_recipe>
You need to put wav files in an appropriate directory.
You can either modify download.sh or put the wav files in place manually. In either case, the wav files should be located per speaker as follows: <new_recipe>/downloads/wav/{spkr1, spkr2, ..., spkr3}/*.wav.
If you modify download.sh,
$ vim local/download.sh
If you put wav files,
$ mkdir downloads
$ mv <path_to_your_wav_directory> downloads/wav
$ touch downloads/.done
The initialization process generates kaldi-like scp files.
$ ./run.sh --stage 0 --stop_stage 1
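In a Kaldi-style scp file, each line maps an utterance ID to a value such as a wav path. The generated files should look roughly like the following (the IDs and paths are illustrative only):

```
spkr1_utt001 downloads/wav/spkr1/utt001.wav
spkr1_utt002 downloads/wav/spkr1/utt002.wav
spkr2_utt001 downloads/wav/spkr2/utt001.wav
```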
Then you modify speaker-dependent parameters in conf/spkr.yml using the generated figures.
Pages 20-22 of the slides explain how to set these parameters.
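As a rough, hypothetical example (the speaker names, keys, and values below are assumptions; read the actual values off the generated figures and the slides), per-speaker entries in conf/spkr.yml typically bound the F0 search range and a power threshold:

```yaml
# Hypothetical conf/spkr.yml -- example values only.
SF1:
  minf0: 120   # F0 search lower bound [Hz], from the F0 histogram
  maxf0: 300   # F0 search upper bound [Hz]
  npow: -20    # power threshold [dB] for trimming silent frames
SM1:
  minf0: 70
  maxf0: 240
  npow: -20
```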
After preparing the configuration, run the remaining stages:
$ ./run.sh --stage 2 --stop_stage 7
Please cite this paper when you use crank.
K. Kobayashi, W-C. Huang, Y-C. Wu, P.L. Tobing, T. Hayashi, T. Toda,
"crank: an open-source software for nonparallel voice conversion based on vector-quantized variational autoencoder",
Proc. ICASSP, 2021. (accepted)
Thanks to @kan-bayashi for many contributions and much encouragement.
Kazuhiro Kobayashi @k2kobayashi [maintainer, design and development]
Wen-Chin Huang @unilight [maintainer, design and development]
Tomoki Toda [advisor]