seungwonpark closed this issue 5 years ago
Running another experiment with mel_fmax=11025.0. I had to run preprocess.py again, since all mel-spectrograms need to be recalculated.
@seungwonpark Actually, this is how vocoders work efficiently: from WaveNet to WaveRNN, vocoder models generally consider frequencies between 0 and 8000 Hz. This range focuses the model on vocal frequencies (the bandwidth allocated to a single voice-frequency transmission channel is usually 4 kHz) rather than other frequencies.
Per the Nyquist–Shannon sampling theorem, the sampling frequency (8 kHz) must be at least twice the highest component of the voice frequency (4 kHz), with appropriate filtering applied before sampling at discrete times, to allow effective reconstruction of the voice signal.
So 8 kHz is enough to model any voice.
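To make the arithmetic above concrete, here is a toy sketch of the two rates in play: the 4 kHz voice band from telephony, and the 11025 Hz Nyquist limit of this repo's 22050 Hz audio (which is what mel_fmax=11025.0 targets).

```python
# Nyquist sketch: a band-limited signal with highest component f_max
# needs a sampling rate of at least 2 * f_max.
voice_band = 4000.0            # Hz, typical single-voice channel bandwidth
min_voice_sr = 2 * voice_band  # 8000.0 Hz: telephone-rate sampling suffices

sr = 22050                     # sample rate used in this repo's preprocessing
nyquist = sr / 2               # 11025.0 Hz: highest representable frequency
```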
Meanwhile, we do lose some environmental crispness this way, but you only notice a minute difference when listening with good noise-cancelling headphones.
@rishikksh20 Thanks for sharing your insight! I will be doing an ablation study on this, but I think we can close this issue for now since it's not really critical, as you've explained.
Looks like WaveGlow's default configuration doesn't allow the mel-spectrogram to represent the full frequency range (0~11025Hz): https://github.com/NVIDIA/waveglow/blob/master/config.json
This is a plot of
librosa.filters.mel(22050, 1024, 80, fmin=0.0, fmax=8000.0)
. I think this is the reason why WaveGlow and our MelGAN implementation don't seem to generate high-frequency audio.
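A quick way to see how much of the spectrum fmax=8000 leaves uncovered, using only the FFT bin frequencies (sr=22050 and n_fft=1024 are assumed from WaveGlow's config above):

```python
import numpy as np

sr, n_fft = 22050, 1024
freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)  # 513 bin centres, 0 .. 11025 Hz
covered = freqs <= 8000.0
# Bins above fmax=8000 get zero weight in the mel filterbank, so the
# vocoder never sees the top ~3 kHz of the spectrum.
print(f"{covered.sum()} of {freqs.size} FFT bins lie at or below fmax=8000")
```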