shawnbzhang opened this issue 3 years ago
I bring up this issue because, with HiFi-GAN's real-time capabilities, this mismatch may pose a problem when streaming the input.
This mismatch is caused by the padding and the transposed convolutions; you should set `segment_size % hop_size == 0`. With the STFT padding of `n_fft - hop_size` samples, a segment of length `segment_size` yields `segment_size // hop_size` mel-spectrogram frames. In other words, one frame represents `hop_size` sampling points.
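A minimal sketch of that arithmetic, assuming the 22.05 kHz config values `n_fft = 1024` and `hop_size = 256` and HiFi-GAN-style framing (reflect-pad `(n_fft - hop_size) / 2` samples on each side, STFT with `center=False`); the numbers are only illustrative:

```python
import torch
import torch.nn.functional as F

n_fft, hop_size = 1024, 256        # assumed 22.05 kHz config values
segment_size = 71334               # input length from the issue

# HiFi-GAN-style framing: reflect-pad (n_fft - hop_size) / 2 on each side,
# then STFT with center=False.
audio = torch.randn(1, segment_size)
pad = (n_fft - hop_size) // 2
padded = F.pad(audio.unsqueeze(1), (pad, pad), mode="reflect").squeeze(1)
spec = torch.stft(padded, n_fft, hop_length=hop_size, win_length=n_fft,
                  window=torch.hann_window(n_fft), center=False,
                  return_complex=True)

frames = spec.shape[-1]            # == segment_size // hop_size == 278
print(frames, frames * hop_size)   # 278 frames -> 278 * 256 = 71168 output samples
```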
@Miralan Thank you for the response, but I'm a bit confused. I understand that `segment_size % hop_size == 0` will make the input and generated output waveforms match in length. Is there a way to do this in the general `inference.py`, or should I just zero-pad the input so that `segment_size % hop_size == 0`?
When you run inference, whether from a wav or from a mel-spectrogram, you do not need to set `segment_size % hop_size == 0`; it does not matter there. Zero-padding so that `segment_size` is a multiple of `hop_size` is also fine during training. But I think 71168 is too big; it may take a lot of GPU memory, which will force a smaller batch size.
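If you do want to pad, here is a minimal sketch of the zero-padding option in plain PyTorch; `pad_to_hop_multiple` is just an illustrative helper, not code from the repo:

```python
import torch
import torch.nn.functional as F

def pad_to_hop_multiple(audio: torch.Tensor, hop_size: int = 256) -> torch.Tensor:
    """Zero-pad a waveform on the right so its length is a multiple of hop_size."""
    remainder = audio.shape[-1] % hop_size
    if remainder == 0:
        return audio
    return F.pad(audio, (0, hop_size - remainder))

wav = torch.randn(71334)
print(pad_to_hop_multiple(wav).shape[-1])   # 71424 == 279 * 256
```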
If the streaming you mentioned means immediately feeding a portion of the output from the first-stage model to HiFi-GAN, you could add padding to match the length of the output audio, but the synthesized audio will then contain a break at the padded part. I would recommend cutting the output mel-spectrogram from the first-stage model to match the length of the output audio before feeding it to HiFi-GAN.
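A hedged sketch of that trimming step, assuming a first-stage mel of shape `(batch, n_mels, frames)` and `hop_size = 256`; `trim_mel_to_audio` and the shapes are illustrative, not code from the repo:

```python
import torch

hop_size = 256

def trim_mel_to_audio(mel: torch.Tensor, target_samples: int) -> torch.Tensor:
    """Keep only the frames that correspond to whole hops of the target audio length."""
    valid_frames = target_samples // hop_size
    return mel[..., :valid_frames]

mel = torch.randn(1, 80, 300)           # hypothetical first-stage output, padded to 300 frames
mel = trim_mel_to_audio(mel, 71334)     # keep 278 frames
print(mel.shape)                        # torch.Size([1, 80, 278])
# audio = generator(mel)                # would produce 278 * 256 = 71168 samples
```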
@jik876 For some more context, I am doing research on neural voice conversion, which is why I was really impressed with your non-autoregressive vocoder. In the streaming context, ideally I would like a 10 ms chunk of my source speaker's audio to translate to a 10 ms chunk of the generated speaker's audio. Therefore, I guess it makes sense for me to stream in source inputs with `chunk_size % hop_size == 0` to get the respective outputs. What do you think? Is that right? And again, thank you for your work and insight.
Thank you. It is correct to adjust `chunk_size` so that it is divisible by `hop_size`. I don't know what sample rate you're using, but a 10 ms chunk seems too short to generate high-quality audio considering the receptive field of the generator.
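A rough sketch of that constraint (pure arithmetic, assuming `hop_size = 256` and a 22.05 kHz sample rate; adjust to your config):

```python
import math

hop_size = 256       # total upsampling factor of the generator (assumed)
sample_rate = 22050  # assumed; use your own config value

def chunk_samples(target_ms: float) -> int:
    """Smallest chunk length in samples that covers target_ms and is a multiple of hop_size."""
    target = sample_rate * target_ms / 1000
    return math.ceil(target / hop_size) * hop_size

print(chunk_samples(10))   # 256 samples  (~11.6 ms) -- a single hop already exceeds 10 ms
print(chunk_samples(50))   # 1280 samples (~58.0 ms)
```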
@shawnbzhang have you solved it? If so, how did you do it?
It seems impossible to use a 10 ms chunk. To my knowledge, if we use sr = 22050, then 10 ms contains only about 220 samples, which is even smaller than the window length, isn't it?
Running through your pre-trained models, I found that the generated audio does not exactly match the input in duration. For example, one input waveform has 71334 samples while the generated output has 71168. What is happening, and why is this the case? Is there a way I can change it so that the input and output shapes match? Thank you.
Edit: I was looking at the training code, and if the target `segment_size` is a multiple of 256 (`hop_size`), then `y_g_hat = generator(x)` will also have exactly that number of samples.
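A small closed-form check of that observation, assuming the mel front end produces `segment_size // hop_size` frames and the generator upsamples by exactly `hop_size`; illustrative only:

```python
hop_size = 256

def expected_output_len(segment_size: int) -> int:
    # frames from the mel front end, times the generator's total upsampling factor
    return (segment_size // hop_size) * hop_size

for seg in (71168, 71334):
    print(seg, expected_output_len(seg), seg == expected_output_len(seg))
# 71168 71168 True   -> multiple of 256, output matches exactly
# 71334 71168 False  -> 166 trailing samples are lost
```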