Open rbgreenway opened 2 years ago
You should have a look at the live denoising implementation to get a sense of how to work with the model in a streaming setting: https://github.com/facebookresearch/denoiser/blob/main/denoiser/live.py#L132
You can get a pretrained model with the API in pretrained.py, then wrap it in a DemucsStreamer (https://github.com/facebookresearch/denoiser/blob/main/denoiser/live.py#L87). You then feed it arbitrary chunks of audio, and it returns whatever audio can be processed up to that point. Once you are done streaming, just call the flush() method to get back any remaining audio.
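The feed/flush pattern described above can be sketched without the library. The `ToyStreamer` below is a hypothetical stand-in for DemucsStreamer: the real one denoises each frame through the model, while this sketch only buffers samples and emits them in whole frames, to show the shape of the streaming API.

```python
import numpy as np

class ToyStreamer:
    """Hypothetical stand-in for DemucsStreamer: buffers incoming audio
    and returns it in fixed-size frames via feed()/flush()."""
    def __init__(self, frame_length=256):
        self.frame_length = frame_length
        self.pending = np.zeros(0, dtype=np.float32)

    def feed(self, chunk):
        # Append the new chunk, then return as many whole frames as are
        # available; the remainder stays buffered for the next call.
        self.pending = np.concatenate([self.pending, chunk])
        n = (len(self.pending) // self.frame_length) * self.frame_length
        out, self.pending = self.pending[:n], self.pending[n:]
        return out  # the real streamer would return denoised audio here

    def flush(self):
        # Once the stream ends, return whatever is still buffered.
        out, self.pending = self.pending, np.zeros(0, dtype=np.float32)
        return out

# Feed arbitrarily sized chunks, then flush the tail.
streamer = ToyStreamer(frame_length=256)
signal = np.random.randn(10_000).astype(np.float32)
outputs = [streamer.feed(chunk) for chunk in np.array_split(signal, 13)]
outputs.append(streamer.flush())
total = np.concatenate(outputs)
```

With the real model you would instead build the streamer around a pretrained network and feed it audio at the model's sample rate; only the feed/flush call pattern is shown here.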
@adefossez that sounds perfect. Thank you for your response, and for sharing this implementation of your Demucs network. Assuming I can get this to work, I'd really like to be able to get two outputs from the network (similar to your musical track separation work): 1 - a stream that's just voice and 2 - a stream that's everything else (i.e. no voice, just background). I'm guessing I might need to fine-tune your network in order to do that...but first things first...I'll try to get the denoiser alone working first.
@rbgreenway were you able to do this? I am planning to do something similar but am totally new at this.
I would love to be able to incorporate your denoiser into a Deepstream/Gstreamer pipeline. In order to do this, I'd need to know how to get from raw audio data -> pre-processed network input tensor(s), and then how to post-process the output tensors. Can you point me to any resources/code that might help me figure this out? Also, if you think this is an unworkable effort, please let me know. I'm quite fluent in Deepstream, Gstreamer, Cuda, and TensorRT, so I'm hoping I'll be able to put together a shareable solution. BTW, I've tested your networks extensively, and they are very impressive. Thanks for all your hard work!
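For the raw-audio-to-tensor part of that question, a common starting point is converting the S16LE PCM that a GStreamer audio pad typically carries into a float waveform in [-1, 1], and back again after inference. This sketch covers only that conversion; it does not reproduce denoiser's internal normalization (the streamer handles its own), and the assumption of 16 kHz mono input is mine, not confirmed in the thread.

```python
import numpy as np

def preprocess(pcm_bytes: bytes) -> np.ndarray:
    """Raw S16LE mono PCM -> float32 waveform in [-1, 1].
    A model typically expects a (batch, channels, time) tensor on top of
    this; reshaping/resampling would happen after this step."""
    pcm = np.frombuffer(pcm_bytes, dtype=np.int16)
    return pcm.astype(np.float32) / 32768.0

def postprocess(wav: np.ndarray) -> bytes:
    """float32 waveform -> raw S16LE PCM, clipping to avoid integer
    wrap-around on out-of-range samples."""
    scaled = np.clip(wav * 32768.0, -32768.0, 32767.0)
    return scaled.astype(np.int16).tobytes()

# Round-trip a few representative sample values.
orig = np.array([0, 16384, -16384, 32767, -32768], dtype=np.int16)
restored = np.frombuffer(postprocess(preprocess(orig.tobytes())), dtype=np.int16)
```

In a Deepstream/GStreamer element you would apply `preprocess` to each mapped buffer before inference and `postprocess` to the model output before pushing the buffer downstream.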