Maybe the fastest voice style transfer with reasonable results?
Inspired by the paper A Neural Algorithm of Artistic Style, the idea of Neural Voice Transfer aims at "using Obama's voice to sing songs of Beyoncé" or something related.
We aim to transfer the style of one speaker's voice while keeping the speech content.

Install the audio dependency first:

```
sudo apt-get install libav-tools
```
Some other projects with audio results are listed below.
We used the same `boy.wav` and `girl.wav` to generate audio, and the result faces the same problem. You can hear the comparison at Stairway2Nightcall; the audio used for comparison was downloaded from Dmitry Ulyanov's website.

Another line of work trains a Net1 classifier and a Net2 synthesizer and combines them together.

To sum up, our results are far better than the original random-CNN results, which use the same dataset (only two audio files) as ours. Compared with pre-trained deep neural networks built on huge datasets, our results are comparable, and can be trained in 5 minutes without using any external dataset. (Still, all these conclusions are based on human taste.)
You can listen to my current results now! They are on SoundCloud: link1, link2.
The generated spectrogram compared with the content and the style.

Comparing the spectrogram of gen with content and style (the X axis represents the time domain, the Y axis the frequency domain), we can find that the generated spectrogram stays close to the content, while the gap along the frequency axis, which determines the voice texture to a great extent, is more like the style.

Install the Python dependencies:

```
pip install -r requirements.txt
```
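To make the axes concrete, a spectrogram like the ones compared above can be computed with a short-time Fourier transform. Below is a minimal numpy sketch; the project's actual preprocessing may use a library such as librosa with different parameters, so the window size and hop length here are illustrative assumptions:

```python
import numpy as np

def spectrogram(wave, n_fft=512, hop=128):
    """Magnitude spectrogram via a manual STFT.
    Rows = frequency bins (Y axis), columns = time frames (X axis)."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(wave) - n_fft + 1, hop):
        frame = wave[start:start + n_fft] * window
        # keep only the non-negative frequencies
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.stack(frames, axis=1)  # shape: (n_fft // 2 + 1, n_frames)

# toy signal: a 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # 257 frequency bins, one column per frame
```

The energy concentrates around bin 14 (440 Hz / 31.25 Hz per bin), which is the horizontal stripe you would see in the plotted spectrogram.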
```
# remove `CUDA_VISIBLE_DEVICES` when using CPU, though it will be slow
CUDA_VISIBLE_DEVICES=0 python train.py -content input/boy18.wav -style input/girl52.wav
```
Tip: changing the 3x1 CONV to a 3x3 CONV gives a smoother generated spectrogram.
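For illustration, here is a minimal numpy sketch of a 1-layer random CNN (untrained random kernels plus ReLU) applied to a spectrogram; the kernel-shape arguments show the 3x1 versus 3x3 choice. This is a toy re-implementation under stated assumptions, not the project's actual model code:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_conv(spec, kh, kw, n_filters=8):
    """One layer of an untrained CNN: fixed random kernels + ReLU,
    'valid' 2-D convolution over a (freq_bins, time_frames) spectrogram."""
    kernels = rng.standard_normal((n_filters, kh, kw))
    H = spec.shape[0] - kh + 1
    W = spec.shape[1] - kw + 1
    out = np.empty((n_filters, H, W))
    for f in range(n_filters):
        for i in range(H):
            for j in range(W):
                out[f, i, j] = np.sum(spec[i:i + kh, j:j + kw] * kernels[f])
    return np.maximum(out, 0.0)  # ReLU

spec = rng.random((64, 100))        # stand-in for a real spectrogram
feat_3x1 = random_conv(spec, 3, 1)  # kernel spanning frequency only
feat_3x3 = random_conv(spec, 3, 3)  # also spans time -> smoother maps
print(feat_3x1.shape, feat_3x3.shape)  # (8, 62, 100) (8, 62, 98)
```

A 3x3 kernel averages over neighboring time frames as well as frequency bins, which is one intuition for why it yields smoother generated spectrograms.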
Does the texture gram of random-CNN output really work?

Below are my experimental results from using the texture gram after a 1-layer RandomCNN to capture speaker identity, with the gram as the only feature in a simple nearest-neighbor speaker identification system. The table shows this system's speaker identification accuracy over the first 15 utterances of the first 30 speakers of the VCTK dataset, along with 100 utterances of the first 4 speakers.
Speakers | Train/Test | Accuracy |
---|---|---|
30 | 270/180 | 45.6% |
4 | 240/160 | 92.5% |
It seems the texture gram along the time axis really captures something. You can check it by running:

```
python vctk_identify
```
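The identification scheme described above (texture gram as the only feature, nearest-neighbor classifier) can be sketched on toy data. Everything below, including the shapes, the synthetic "speakers", and the helper names, is hypothetical and not the repo's actual `vctk_identify` code:

```python
import numpy as np

rng = np.random.default_rng(1)

def gram_feature(feat):
    """Texture gram along the time axis: channel-by-channel correlations
    of a (n_channels, n_frames) feature map, flattened into a vector."""
    return (feat @ feat.T / feat.shape[1]).ravel()

def predict_1nn(train_x, train_y, x):
    """Label of the nearest training vector by Euclidean distance."""
    return train_y[np.argmin(np.linalg.norm(train_x - x, axis=1))]

# Toy "utterances": each speaker has a fixed bias, so grams cluster by speaker.
n_speakers, n_channels, n_frames = 4, 4, 20
speaker_bias = {s: rng.standard_normal((n_channels, n_frames)) for s in range(n_speakers)}

def utterance(s):
    return rng.standard_normal((n_channels, n_frames)) + 5.0 * speaker_bias[s]

train_x = np.stack([gram_feature(utterance(s)) for s in range(n_speakers) for _ in range(10)])
train_y = np.repeat(np.arange(n_speakers), 10)
correct = sum(predict_1nn(train_x, train_y, gram_feature(utterance(s))) == s
              for s in range(n_speakers) for _ in range(5))
print(f"{correct} / 20 correct")
```

Even this crude setup recovers speaker identity from gram features alone, which mirrors the accuracy pattern in the table: fewer speakers with more utterances each makes the nearest-neighbor problem much easier.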