descriptinc / melgan-neurips

GAN-based Mel-Spectrogram Inversion Network for Text-to-Speech Synthesis
MIT License

How can I synthesize my own text to speech? #11

Open ghost opened 4 years ago

binarythinktank commented 4 years ago

I was wondering this too. I've successfully trained it and got good samples, but how do I do TTS using this output?

ViktorIgeland commented 4 years ago

Neural text-to-speech is usually done in two steps: feature prediction and waveform synthesis. First, a feature predictor transforms the text into intermediate features, e.g. a mel-spectrogram. These features are then used by the synthesizer to generate audio.

MelGAN is a synthesizer, so to go from text to speech you need to combine it with a model that converts text into mel-spectrograms. One such model is Tacotron2; have a look at: https://github.com/NVIDIA/tacotron2
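To make the two-stage split concrete, here is a shapes-only sketch of that pipeline with stand-in functions (nothing here is the repo's actual API; the constants mirror common MelGAN settings of 80 mel bands and a hop of 256 samples, which you should verify against the repo's config):

```python
import numpy as np

N_MELS = 80        # number of mel bands the vocoder expects (assumed default)
HOP_LENGTH = 256   # audio samples per spectrogram frame (assumed default)

def text_to_mel(text):
    """Stand-in for a feature predictor such as Tacotron2:
    text -> mel-spectrogram of shape (n_mels, n_frames)."""
    n_frames = 10 * len(text)              # arbitrary length for the sketch
    return np.random.randn(N_MELS, n_frames)

def mel_to_audio(mel):
    """Stand-in for a vocoder such as MelGAN: each spectrogram frame
    is upsampled to HOP_LENGTH audio samples."""
    return np.random.randn(mel.shape[1] * HOP_LENGTH)

mel = text_to_mel("hello world")
audio = mel_to_audio(mel)
print(mel.shape, audio.shape)
```

The key contract is the middle representation: the feature predictor and the vocoder must agree on the mel-spectrogram's band count, hop length, and scaling, which is exactly what the rest of this thread is about.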

Wenqikry commented 4 years ago

Hi @ViktorIgeland. In a Tacotron2 + MelGAN pipeline, the way Tacotron2 extracts the mel spectrum is different from the way MelGAN does. Will that affect the results?

ViktorIgeland commented 4 years ago

Hi @Wenqikry, yes: if the two models are trained on different types of spectrograms, it will have an impact on the results. If you don't need the speed of MelGAN, you can try NVIDIA's WaveGlow, since it is trained on the same spectrograms as their Tacotron2.
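The mismatch is easy to see numerically. Below is a toy numpy comparison of two common log-compression conventions, a natural log with a small floor (the style used by e.g. NVIDIA's Tacotron2) versus a decibel-style scaling; the magnitudes are made up for illustration:

```python
import numpy as np

# The same magnitudes land in very different numeric ranges under two
# common log-compression schemes, so a vocoder trained on one convention
# will misread spectrograms produced under the other.
mag = np.array([1e-7, 1e-3, 0.1, 1.0])

# natural log with a 1e-5 floor (Tacotron2-style dynamic range compression)
log_nat = np.log(np.clip(mag, 1e-5, None))

# decibel-style scaling, another convention seen in TTS front ends
log_db = 20.0 * np.log10(np.clip(mag, 1e-5, None))

print(log_nat)   # roughly [-11.51, -6.91, -2.30, 0.00]
print(log_db)    # roughly [-100.0, -60.0, -20.0, 0.00]
```

Filterbank construction (number of bands, normalization) and STFT parameters (FFT size, hop, window) have to match as well, not just the log scaling.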

Wenqikry commented 4 years ago

@ViktorIgeland Okay, thanks, I will try it.

casper-hansen commented 4 years ago

> Hi @Wenqikry, Yes, if your models are trained on different types of spectrograms it will have an impact on the results. If you don't need the speed of MelGAN you can try using Nvidia's WaveGlow, as it's trained on the same spectrogram as their Tacotron2.

So how can we use MelGAN with the same performance, i.e. how can we reproduce the results of the paper? Do you know if this is possible? And then extend it to custom audio files?

Do we have any information on how these mel-scale spectrograms are generated? Something we can reproduce and use with MelGAN.
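For exact matching, the authoritative definition is the repo's own extraction code (the Audio2Mel module under mel2wav/, if I recall the layout correctly), so reuse that when pairing models. As a rough illustration of what such an extraction does, here is a numpy-only sketch with the typically cited parameters (22050 Hz, FFT size 1024, hop 256, 80 mel bands, natural log with a 1e-5 floor); note it uses a simple triangular filterbank rather than librosa's Slaney-normalized one, so the values will differ slightly from the repo's:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels, fmin=0.0, fmax=None):
    """Triangular mel filterbank, shape (n_mels, n_fft // 2 + 1)."""
    fmax = fmax or sr / 2.0
    mel_pts = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):                    # rising slope
            fb[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                    # falling slope
            fb[i, k] = (r - k) / max(r - c, 1)
    return fb

def log_mel(audio, sr=22050, n_fft=1024, hop=256, n_mels=80):
    """Log-mel spectrogram: Hann-windowed magnitude STFT, mel projection,
    then natural log with a 1e-5 floor (assumed parameter choices)."""
    win = np.hanning(n_fft)
    frames = [audio[s:s + n_fft] * win
              for s in range(0, len(audio) - n_fft + 1, hop)]
    mag = np.abs(np.fft.rfft(np.array(frames), axis=1)).T   # (bins, frames)
    mel = mel_filterbank(sr, n_fft, n_mels) @ mag
    return np.log(np.clip(mel, 1e-5, None))

sr = 22050
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440.0 * t)    # one second of a 440 Hz tone
M = log_mel(audio)
print(M.shape)                           # (n_mels, n_frames)
```

Again, treat this as a sketch of the concept; for training or inference against a pretrained MelGAN checkpoint, extract features with the repo's own code so every parameter matches.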

casper-hansen commented 4 years ago

@Wenqikry did you figure out a good way to produce mel spectrograms?

Wenqikry commented 4 years ago

@casperbh96 Sorry, I haven't found one yet.

Liujingxiu23 commented 4 years ago

@Wenqikry Have you tried https://github.com/Rayhane-mamah/Tacotron-2 or https://github.com/NVIDIA/tacotron2 to train on log-mels and combine them with MelGAN? Do you have any experience with that? I used https://github.com/Rayhane-mamah/Tacotron-2, changed the features to log-mels as in this repo, without clip_out, but the result is very bad and I cannot find anything wrong...

Wenqikry commented 4 years ago

@Liujingxiu23 Sorry, I haven't tried...

Mariaa98 commented 4 years ago

I trained the model on a dataset, and now I want to give it a mel-spectrogram as input to synthesize audio. I looked at the log folder and found many .pt files.

Anyone can help?

binarythinktank commented 4 years ago

@Mariaa98 If you figure it out, let me know. I have tried with 3 different data scientists and none of them could get a functional TTS script from this. We ended up going with a different model.

BuaaAlban commented 4 years ago

> @Wenqikry Have you tried https://github.com/Rayhane-mamah/Tacotron-2 or https://github.com/NVIDIA/tacotron2 to train log-mels? Combine with the Melgan? Do you have any experiences? I used https://github.com/Rayhane-mamah/Tacotron-2 , change feat to log-mel as this repo, not do clip_out, but the result is very bad, I cat not find any wrong...

I have gotten some results with Tacotron2 and MelGAN; I can make out what the wav says, but it's not as good as the demos.