Real Time Speech Enhancement in the Waveform Domain (Interspeech 2020)

We provide a PyTorch implementation of the paper Real Time Speech Enhancement in the Waveform Domain, in which we present a causal speech enhancement model that operates on the raw waveform and runs in real time on a laptop CPU. The proposed model is based on an encoder-decoder architecture with skip connections. It is optimized in both the time and frequency domains using multiple loss functions. Empirical evidence shows that it is capable of removing various kinds of background noise, including stationary and non-stationary noise as well as room reverb. Additionally, we suggest a set of data augmentation techniques applied directly to the raw waveform, which further improve the model's performance and generalization abilities.
Question about the implementation of SpectralConvergengeLoss #150
Is it reasonable for the norm calculation to include the batch dimension, or should the loss be computed per utterance and then summed/averaged across the batch? https://github.com/facebookresearch/denoiser/blob/f98f16ce55fbf23e60cfd12e0cc3f5964f5b8dba/denoiser/stft_loss.py#L51
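To make the question concrete, here is a minimal sketch contrasting the two variants, assuming magnitude spectrograms of shape `(batch, frames, freq_bins)`. The first function mirrors the pattern at the referenced line (a single Frobenius norm over the whole batched tensor); `sc_loss_per_utterance` is a hypothetical alternative written for this illustration, not code from the repo.

```python
import torch


def sc_loss_batch_norm(x_mag, y_mag):
    # Frobenius norm taken over the entire (B, T, F) tensor at once,
    # so the batch dimension is folded into both numerator and denominator
    # (the pattern used at stft_loss.py#L51).
    return torch.norm(y_mag - x_mag, p="fro") / torch.norm(y_mag, p="fro")


def sc_loss_per_utterance(x_mag, y_mag):
    # Hypothetical alternative: compute the ratio per utterance,
    # then average the losses across the batch.
    num = torch.linalg.norm(y_mag - x_mag, dim=(1, 2))  # (B,)
    den = torch.linalg.norm(y_mag, dim=(1, 2))          # (B,)
    return (num / den).mean()


torch.manual_seed(0)
x = torch.rand(4, 100, 257)  # enhanced magnitudes (batch, frames, bins)
y = torch.rand(4, 100, 257)  # clean magnitudes
# The two generally differ: the batch-level norm implicitly weights
# each utterance by its spectral energy, so quiet utterances contribute less.
print(sc_loss_batch_norm(x, y).item(), sc_loss_per_utterance(x, y).item())
```

For a batch of size 1 the two definitions coincide; the difference only appears when utterances in a batch have unequal spectral energy.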