iver56 / audiomentations

A Python library for audio data augmentation. Inspired by albumentations. Useful for machine learning.
https://iver56.github.io/audiomentations/
MIT License
1.76k stars 183 forks

Speed augmentation #310

Closed LXP-Never closed 5 months ago

LXP-Never commented 5 months ago

The TimeStretch method changed the tone. So far, ffmpeg is the only method I have found that changes the speed without changing the tone:

from ffmpeg import audio  # the "ffmpeg" PyPI wrapper; a_speed wraps ffmpeg's atempo filter

# Change playback speed without changing pitch
audio.a_speed(input_wav_path, speed=speed_rate, out_file=output_wav_path)
iver56 commented 5 months ago

I'm not sure I fully understand what you meant with "The TimeStretch method changed the tone". Could you please post an audio example with input (unprocessed) and outputs: one processed with TimeStretch in audiomentations and one with audio.a_speed? A zip file as attachment to the comment should work

ankerAITD commented 5 months ago

> I'm not sure I fully understand what you meant with "The TimeStretch method changed the tone". Could you please post an audio example with input (unprocessed) and outputs: one processed with TimeStretch in audiomentations and one with audio.a_speed? A zip file as attachment to the comment should work

speed_aug.zip

The audio generated by the librosa method sounds blurred, like loud noise. ffmpeg, on the other hand, only changed the speed.

iver56 commented 5 months ago

Thanks, now I have more insight! Yes, it's no secret that librosa's time stretch implementation doesn't give a high quality sounding result for speech recordings, especially when the time stretch factor is extreme (in your example it was a 2x speedup). Under the hood it uses phase vocoding. Phase vocoding can degrade audio quality by "smearing" transient sounds, altering the timbre of harmonic sounds, and distorting pitch modulations. This may result in a loss of sharpness, clarity, or naturalness in the transformed audio.
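To illustrate the mechanism (this is not librosa's actual implementation, just a minimal numpy sketch of the phase-vocoding idea): magnitudes are interpolated between analysis frames while each bin's phase is accumulated independently, and that loss of cross-bin phase coherence is exactly where the smearing of transients comes from. The function name and parameters here are illustrative.

```python
import numpy as np

def phase_vocoder_stretch(x, rate, n_fft=1024, hop=256):
    """Time-stretch x by `rate` (>1 = speed up) with a basic phase vocoder."""
    window = np.hanning(n_fft)
    # Analysis STFT: windowed frames, hop samples apart
    frames = [
        np.fft.rfft(window * x[start:start + n_fft])
        for start in range(0, len(x) - n_fft, hop)
    ]
    stft = np.array(frames)  # shape: (n_frames, n_bins)

    # Fractional frame positions to read from, spaced by `rate`
    time_steps = np.arange(0, stft.shape[0] - 1, rate)
    # Phase advance per hop expected for each bin's center frequency
    expected = 2 * np.pi * hop * np.arange(stft.shape[1]) / n_fft

    phase = np.angle(stft[0])
    out = np.zeros((len(time_steps), stft.shape[1]), dtype=complex)
    for i, t in enumerate(time_steps):
        t0 = int(t)
        frac = t - t0
        # Interpolate magnitudes across frames; accumulate per-bin phase.
        # Blended magnitudes + independently rotating bin phases are what
        # smear transients and alter timbre.
        mag = (1 - frac) * np.abs(stft[t0]) + frac * np.abs(stft[t0 + 1])
        out[i] = mag * np.exp(1j * phase)
        dphi = np.angle(stft[t0 + 1]) - np.angle(stft[t0]) - expected
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))  # wrap to [-pi, pi]
        phase += expected + dphi

    # Inverse STFT via windowed overlap-add
    y = np.zeros(len(time_steps) * hop + n_fft)
    for i, f in enumerate(out):
        y[i * hop:i * hop + n_fft] += window * np.fft.irfft(f, n=n_fft)
    return y
```

Run on a pure sine, this halves the duration at rate=2.0 while keeping the pitch, but on speech the phase incoherence between bins is clearly audible.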

librubberband or ffmpeg do indeed give better-sounding time stretching outputs. I believe ffmpeg's atempo (exposed as a_speed in the python API you showcased here) is based on WSOLA under the hood.
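For contrast, here is a simplified WSOLA sketch (not ffmpeg's atempo code; names and parameters are illustrative). It copies raw waveform segments and aligns each candidate segment, by cross-correlation, to the natural continuation of the previously copied one, which keeps the local waveform shape intact instead of resynthesizing it from spectral magnitudes:

```python
import numpy as np

def wsola_stretch(x, rate, frame=1024, hop=256, search=256):
    """Time-stretch x by `rate` (>1 = speed up) with a simplified WSOLA."""
    window = np.hanning(frame)
    n_out = int((len(x) - frame - hop - search) / (hop * rate))
    y = np.zeros(n_out * hop + frame)
    wsum = np.zeros_like(y)
    pos = 0  # analysis position chosen for the previous output frame
    for i in range(n_out):
        nominal = int(i * hop * rate)
        # The waveform that would naturally follow the last copied frame
        target = x[pos + hop:pos + hop + frame]
        lo = max(0, nominal - search)
        hi = min(len(x) - frame - hop, nominal + search)
        # Pick the candidate segment most similar to that continuation
        # (a coarse 4-sample search grid keeps the sketch fast)
        candidates = list(range(lo, hi + 1, 4))
        scores = [np.dot(x[c:c + frame], target) for c in candidates]
        pos = candidates[int(np.argmax(scores))]
        y[i * hop:i * hop + frame] += window * x[pos:pos + frame]
        wsum[i * hop:i * hop + frame] += window
    return y / np.maximum(wsum, 1e-8)
```

Because whole time-domain segments are reused, transients stay sharp; the trade-off is that badly aligned segments can cause audible repetition or skipping artifacts, which the similarity search is there to minimize.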

There's a github issue about adding support for more time stretching methods, like WSOLA and ESOLA, to audiomentations:

https://github.com/iver56/audiomentations/issues/62

iver56 commented 5 months ago

I have updated the documentation regarding this: https://iver56.github.io/audiomentations/waveform_transforms/time_stretch/