Closed: LXP-Never closed this issue 5 months ago
I'm not sure I fully understand what you meant by "The TimeStretch method changed the tone". Could you please post an audio example with the input (unprocessed) and two outputs: one processed with TimeStretch in audiomentations and one with audio.a_speed? A zip file attached to the comment should work.
speed_aug.zip — The audio generated by the librosa method looks blurred in the spectrogram and sounds noisy, while ffmpeg just changed the speed.
Thanks, now I have more insight! Yes, it's no secret that librosa's time stretch implementation doesn't give a high-quality result for speech recordings, especially when the time stretch factor is extreme (in your example it was a 2x speedup). Under the hood it uses phase vocoding. Phase vocoding can degrade audio quality by "smearing" transient sounds, altering the timbre of harmonic sounds, and distorting pitch modulations. This can result in a loss of sharpness, clarity, or naturalness in the transformed audio.
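To make the "phase vocoding" part concrete, here is a minimal, illustrative phase vocoder time stretch in plain NumPy. This is a simplified sketch, not librosa's actual implementation (librosa uses `librosa.effects.time_stretch`, which also relies on phase vocoding); the function name and parameters below are made up for the example. It shows the core idea: interpolate STFT magnitudes at a new frame rate while accumulating phase so that instantaneous frequencies stay consistent. Transients get smeared because their magnitude is spread across interpolated frames.

```python
import numpy as np

def phase_vocoder_stretch(x, rate, n_fft=1024, hop=256):
    """Illustrative phase-vocoder time stretch (hypothetical helper).

    rate > 1 speeds the signal up (shorter output) while keeping pitch.
    """
    window = np.hanning(n_fft)
    # Analysis: windowed STFT frames spaced `hop` samples apart
    n_frames = 1 + (len(x) - n_fft) // hop
    stft = np.stack([
        np.fft.rfft(window * x[i * hop:i * hop + n_fft])
        for i in range(n_frames)
    ])
    # Read positions into the analysis frames, spaced by `rate`
    time_steps = np.arange(0, n_frames - 1, rate)
    # Expected phase advance per hop for each frequency bin
    omega = 2 * np.pi * hop * np.arange(n_fft // 2 + 1) / n_fft
    phase = np.angle(stft[0])
    out = np.zeros(len(time_steps) * hop + n_fft)
    for k, t in enumerate(time_steps):
        i = int(t)
        frac = t - i
        # Interpolate magnitude between neighbouring frames
        # (this is where transient "smearing" comes from)
        mag = (1 - frac) * np.abs(stft[i]) + frac * np.abs(stft[i + 1])
        frame = np.fft.irfft(mag * np.exp(1j * phase))
        out[k * hop:k * hop + n_fft] += window * frame  # overlap-add
        # Accumulate phase so each bin keeps its instantaneous frequency
        dphi = np.angle(stft[i + 1]) - np.angle(stft[i]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        phase += omega + dphi
    return out
```

For a pure sine this preserves the pitch while halving the duration at `rate=2.0`; for speech, the magnitude interpolation and phase accumulation are what blur transients and alter timbre, as heard in your example.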
Rubber Band (librubberband) and ffmpeg do indeed give better-sounding time stretching outputs. I believe ffmpeg's atempo filter (exposed as a_speed in the Python API you showcased here) is based on WSOLA under the hood.
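If you want to drive ffmpeg's atempo filter from Python, a small helper like the sketch below can build the command line. Note the hedge: older ffmpeg versions limit a single atempo stage to the range [0.5, 2.0], so the common workaround is to chain stages for larger factors; the function name here is made up for illustration.

```python
def build_atempo_command(src, dst, rate):
    """Build an ffmpeg command that speeds audio up by `rate` via atempo.

    Older ffmpeg builds cap one atempo stage at 2.0, so rates above
    that are split into a chain of <=2.0 stages (a common workaround).
    """
    stages = []
    r = rate
    while r > 2.0:
        stages.append(2.0)
        r /= 2.0
    stages.append(r)
    audio_filter = ",".join(f"atempo={s:g}" for s in stages)
    return ["ffmpeg", "-i", src, "-filter:a", audio_filter, dst]
```

For example, `build_atempo_command("in.wav", "out.wav", 2.0)` yields `ffmpeg -i in.wav -filter:a atempo=2 out.wav`, which is the kind of processing that produced the clean-sounding output in your zip.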
There's a GitHub issue about adding support for more time stretching methods, like WSOLA and ESOLA, to audiomentations:
I have updated the documentation regarding this: https://iver56.github.io/audiomentations/waveform_transforms/time_stretch/
The TimeStretch method changed the tone; so far, ffmpeg is the only method I have found that does not change the tone.