Open ghost opened 4 years ago
Neural text-to-speech is usually done in two steps: feature prediction and voice synthesis. First, a feature predictor transforms the text into intermediate features, e.g. a mel-spectrogram. These features are then used by the synthesizer (vocoder) to generate audio.
MelGAN is a synthesizer so to go from text to speech you would need to combine it with a model that converts text into mel-spectrograms. One such model is e.g. Tacotron2, have a look at: https://github.com/NVIDIA/tacotron2
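The two-stage pipeline described above can be sketched as below. The two functions are dummy stand-ins (names and the hop size of 256 are assumptions, not this repo's API): in practice `text_to_mel` would be a trained Tacotron2 and `mel_to_audio` a trained MelGAN generator.

```python
import numpy as np

N_MELS = 80       # typical mel-channel count for these models
HOP_LENGTH = 256  # assumed hop size; check your vocoder's config

def text_to_mel(text: str) -> np.ndarray:
    """Stage 1 (feature prediction): text -> mel-spectrogram [n_mels, T]."""
    n_frames = max(1, 10 * len(text))          # dummy length heuristic
    return np.random.randn(N_MELS, n_frames)   # placeholder features

def mel_to_audio(mel: np.ndarray) -> np.ndarray:
    """Stage 2 (vocoder): mel-spectrogram -> waveform, one hop per frame."""
    n_frames = mel.shape[1]
    return np.zeros(n_frames * HOP_LENGTH)     # placeholder waveform

mel = text_to_mel("hello world")
audio = mel_to_audio(mel)
print(mel.shape, audio.shape)
```

The key contract is the middle arrow: the vocoder only works if the mel-spectrogram it receives matches the one it was trained on (same sample rate, FFT size, hop, mel scale, and log compression).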
Hi @ViktorIgeland. In a Tacotron2 + MelGAN pipeline, the way Tacotron2 extracts the mel spectrogram is different from the way MelGAN does. Will that affect the results?
Hi @Wenqikry. Yes, if your models are trained on different types of spectrograms, it will affect the results. If you don't need MelGAN's speed, you can try NVIDIA's WaveGlow, as it's trained on the same spectrograms as their Tacotron2.
@ViktorIgeland, okay, thanks, I will try it.
So how can we use MelGAN with the same performance, i.e. how can we reproduce the results of the paper? Do you know if this is possible? And can it then be extended to custom audio files?
Do we have any information on how these mel-scale spectrograms are generated? Something we can reproduce and use with MelGAN.
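For reference, a log-mel spectrogram is just a windowed magnitude STFT projected through a triangular mel filterbank, then log-compressed. The sketch below is a minimal numpy reimplementation, assuming the common 22050 Hz / 1024-point FFT / 256 hop / 80-mel setup; the exact parameters (and the mel formula and log floor) are exactly what must match between the acoustic model and the vocoder.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)   # HTK mel formula

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels, fmin, fmax):
    """Triangular filters evenly spaced on the mel scale."""
    hz = mel_to_hz(np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, c, hi = bins[i], bins[i + 1], bins[i + 2]
        fb[i, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)  # rising edge
        fb[i, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)  # falling edge
    return fb

def log_mel_spectrogram(y, sr=22050, n_fft=1024, hop=256, n_mels=80,
                        fmin=0.0, fmax=8000.0):
    # Frame the signal, apply a Hann window, take the magnitude STFT.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack([y[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, n=n_fft, axis=1))   # [T, n_fft//2+1]
    mel = mag @ mel_filterbank(sr, n_fft, n_mels, fmin, fmax).T
    return np.log(np.clip(mel, 1e-5, None)).T            # [n_mels, T]

# One second of a 440 Hz tone as a stand-in for real speech.
t = np.arange(22050) / 22050.0
mel = log_mel_spectrogram(np.sin(2 * np.pi * 440.0 * t))
print(mel.shape)  # (80, 83)
```

Real implementations (librosa, the preprocessing in the Tacotron2 repos) differ in details such as padding, window type, filterbank normalization, and log base, which is precisely why spectrograms from one repo often don't drop straight into another's vocoder.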
@Wenqikry did you figure out a good way to produce mel spectrograms?
@casperbh96 Sorry, I haven't found it yet
@Wenqikry Have you tried https://github.com/Rayhane-mamah/Tacotron-2 or https://github.com/NVIDIA/tacotron2 to train log-mels and combined them with MelGAN? Do you have any experience with that? I used https://github.com/Rayhane-mamah/Tacotron-2, changed the features to log-mels as in this repo (without clip_out), but the result is very bad and I cannot find anything wrong...
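One place this kind of mismatch shows up is the log compression step. A small sketch of the difference (the 1e-5 floor matches the style of clipped log-mels; the exact constant in any given repo is an assumption to verify):

```python
import numpy as np

# Hypothetical mel magnitudes, including near-silent bins.
mel = np.array([0.0, 1e-7, 1e-3, 0.5, 2.0])

# Log compression with a floor: clipping avoids log(0) = -inf and
# bounds the dynamic range of the features the vocoder sees.
clipped = np.log(np.clip(mel, 1e-5, None))

# Without the floor, silence maps to huge negative values that a
# vocoder trained on clipped features has never encountered.
unclipped = np.log(np.maximum(mel, np.finfo(float).tiny))

print(clipped.min(), unclipped.min())
```

If the acoustic model emits one range and the vocoder expects the other, the output is intelligible-but-noisy audio, which matches the symptom described above.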
@Liujingxiu23 Sorry, I haven't tried...
I trained the model on a dataset, and now I want to give it a mel spectrogram as input to synthesize audio. I looked in the log folder and found many .pt files.
Can anyone help?
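Those .pt files are PyTorch checkpoints saved during training; for synthesis you load one, pull out the generator's weights, and feed it a mel-spectrogram. The sketch below saves and reloads a dummy checkpoint just to show the pattern; the key names ("model_g", "step") are assumptions, so print the keys of your own checkpoint first to see how it is actually structured.

```python
import torch

# Create a dummy checkpoint shaped like a typical training snapshot
# (hypothetical key names, not this repo's exact format).
ckpt_path = "dummy_checkpoint.pt"
torch.save({"model_g": {"conv.weight": torch.zeros(1, 1, 3)},
            "step": 1000}, ckpt_path)

# Inspect what the file actually contains before assuming a layout.
ckpt = torch.load(ckpt_path, map_location="cpu")
print(sorted(ckpt.keys()))

# Then, with the repo's generator class (name assumed):
#   generator.load_state_dict(ckpt["model_g"])
#   generator.eval()
#   with torch.no_grad():
#       audio = generator(mel)   # mel: [1, n_mels, T] tensor
```

Usually you want the latest checkpoint (highest step number in the filename), and the mel input must be computed with the same parameters the model was trained on.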
@Mariaa98 If you figure it out, let me know. I have tried with 3 different data scientists and none of them could get a functional TTS script out of this. We ended up going with a different model.
I have got some results with Tacotron2 and MelGAN. I can make out what the wav says, but it's not as good as the demos.
I was wondering this too. I've successfully trained it and got good samples, but how do I do TTS using this output?