facebookresearch / denoiser

Real Time Speech Enhancement in the Waveform Domain (Interspeech 2020)

We provide a PyTorch implementation of the paper Real Time Speech Enhancement in the Waveform Domain, which presents a causal speech enhancement model that operates on the raw waveform and runs in real time on a laptop CPU. The model is based on an encoder-decoder architecture with skip connections. It is optimized in both the time and frequency domains using multiple loss functions. Empirical evidence shows that it can remove various kinds of background noise, including stationary and non-stationary noise as well as room reverb. Additionally, we propose a set of data augmentation techniques applied directly to the raw waveform that further improve the model's performance and generalization.

DeepStream/GStreamer Pipeline #111

Open · rbgreenway opened this issue 2 years ago

rbgreenway commented 2 years ago

I would love to be able to incorporate your denoiser into a DeepStream/GStreamer pipeline. To do this, I'd need to know how to get from raw audio data to the pre-processed network input tensor(s), and then how to post-process the output tensors. Can you point me to any resources/code that might help me figure this out? Also, if you think this is an unworkable effort, please let me know. I'm quite fluent in DeepStream, GStreamer, CUDA, and TensorRT, so I'm hoping I'll be able to put together a shareable solution.

BTW, I've tested your networks extensively, and they are very impressive. Thanks for all your hard work!

adefossez commented 2 years ago

You should have a look at the live denoising implementation to get a sense of how to work with the model in a streaming setting: https://github.com/facebookresearch/denoiser/blob/main/denoiser/live.py#L132

You can get a pretrained model with the API in pretrained.py, then wrap it in a DemucsStreamer (https://github.com/facebookresearch/denoiser/blob/main/denoiser/live.py#L87). You then feed it arbitrary chunks of audio, and it returns whatever audio can be processed up to that point. Once you are done streaming, call the flush() method to get back any remaining audio.
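
For reference, a minimal sketch of that flow. The import paths (`pretrained` in `denoiser`, `DemucsStreamer` in `denoiser.demucs`) and the exact `feed()`/`flush()` behavior are assumptions based on the repo layout and the description above, so double-check them against live.py:

```python
import torch

from denoiser import pretrained             # assumed import path for pretrained.py
from denoiser.demucs import DemucsStreamer  # assumed location of DemucsStreamer

model = pretrained.dns64()                  # one of the pretrained Demucs models
model.eval()
streamer = DemucsStreamer(model)            # wraps the model for chunked streaming

# Stand-in for live audio: a (channels, samples) float tensor at model.sample_rate.
chunk = torch.zeros(model.chin, 1024)

with torch.no_grad():
    out = streamer.feed(chunk)              # returns whatever audio is ready so far
    tail = streamer.flush()                 # drain any remaining buffered audio
denoised = torch.cat([out, tail], dim=-1)
```

In a GStreamer element you would call `feed()` once per incoming buffer and `flush()` on end-of-stream.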

rbgreenway commented 2 years ago

@adefossez that sounds perfect. Thank you for your response, and for sharing this implementation of your Demucs network. Assuming I can get this to work, I'd really like to get two outputs from the network (similar to your music source separation work): (1) a stream that's just voice, and (2) a stream that's everything else (i.e. no voice, just background). I'm guessing I might need to fine-tune your network to do that... but first things first: I'll try to get the denoiser working on its own.
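
For anyone trying this: one crude way to approximate the second stream without any fine-tuning is to subtract the enhanced voice from the input, since the model outputs only the voice. A sketch of that idea (not from the thread; `noisy.wav` is a placeholder, and `convert_audio` follows the usage shown in the repo's README):

```python
import torch
import torchaudio

from denoiser import pretrained
from denoiser.dsp import convert_audio  # helper used in the repo's README usage

model = pretrained.dns64().eval()
wav, sr = torchaudio.load("noisy.wav")  # placeholder input file
# Resample/remix the input to the model's expected rate and channel count.
wav = convert_audio(wav, sr, model.sample_rate, model.chin)

with torch.no_grad():
    voice = model(wav[None])[0]          # stream 1: enhanced voice
background = wav - voice                 # stream 2 (rough): everything removed
```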

qalabeabbas49 commented 2 years ago

@rbgreenway were you able to do this? I am planning to do something similar, but I'm totally new at this.