I do this because saving the audio with torchaudio can truncate it. You can also use scipy to write the estimated audio.
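For context, here is a minimal sketch of the kind of torchaudio.save call under discussion; the tensor shape, file name, and sample rate are assumptions for illustration, not the project's actual values:

import torch
import torchaudio

# Hypothetical example: est stands in for a separated source as a float tensor.
# torchaudio.save expects a (channels, time) tensor.
est = torch.randn(1, 8000 * 4)           # 4 seconds of audio at 8 kHz (assumed)
torchaudio.save('est.wav', est, 8000)    # the call the thread says may truncate the output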
I don't understand why truncation occurs when saving audio with torchaudio (torchaudio.save).
Can you give an example?
So if I use scipy or some other library (e.g., soundfile), can I avoid the truncation?
I think you can use the following code without any problems.
import numpy as np
from scipy.io import wavfile

# Write the estimate as 16-bit PCM at 8 kHz; scipy writes the full array.
wavfile.write('a.wav', 8000, np.asarray(data, dtype=np.int16))
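As a quick sanity check (my addition, assuming data holds the estimated waveform from the snippet above), you can read the file back and confirm no samples were dropped:

rate, audio = wavfile.read('a.wav')      # read the file we just wrote
assert rate == 8000                      # sample rate round-trips intact
assert len(audio) == len(data)           # same number of samples, i.e. no truncation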
Thank you so much!
Thank you for your contribution!
I have a question about line 44 of dualrnn_test_wav.py.
You normalize the DPRNN prediction like below:
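The snippet itself is not quoted in the thread; a hypothetical reconstruction of that kind of normalization might look like this (the variable name and the exact formula are assumptions, and the real line 44 may differ):

import torch

est = torch.randn(8000)                  # stand-in for the DPRNN prediction (assumed)
est = est / torch.max(torch.abs(est))    # hypothetical peak normalization of the output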
I'm a little confused here: you don't do any input normalization in the training pipeline, so why do you do it here?