juanmc2005 / diart

A python package to build AI-powered real-time audio applications
https://diart.readthedocs.io
MIT License
903 stars · 76 forks

Implement voicefixer for audio enhancement #221

Open thieugiactu opened 7 months ago

thieugiactu commented 7 months ago

Is there any way to add voicefixer to the speaker diarization pipeline? The package takes a wav file as input and outputs an upsampled 44.1 kHz wav file, but it could easily be modified to take and return audio numpy arrays. Since speaker embeddings depend greatly on the quality of the input audio, and in real-world environments many factors can affect that quality (the recording device, the speaker's voice changing over time, etc.), I think some form of audio quality enhancement is a must.
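The file-in/file-out adaptation mentioned above could look roughly like this. This is only a sketch: `enhance_file` is a hypothetical stand-in for a file-based enhancer (voicefixer's actual API is not reproduced here), implemented as an identity copy so the round-trip logic can be shown with just numpy and the stdlib `wave` module.

```python
# Sketch: adapting a file-based enhancer to numpy arrays by round-tripping
# each chunk through temporary wav files. `enhance_file` is a placeholder
# identity copy, NOT voicefixer's real API.
import tempfile
import wave
from pathlib import Path

import numpy as np


def enhance_file(in_path: str, out_path: str) -> None:
    """Stand-in for a file-based enhancer; here it just copies the file."""
    Path(out_path).write_bytes(Path(in_path).read_bytes())


def write_wav(path: str, samples: np.ndarray, rate: int) -> None:
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit PCM
        f.setframerate(rate)
        f.writeframes((samples * 32767).astype(np.int16).tobytes())


def read_wav(path: str) -> tuple:
    with wave.open(path, "rb") as f:
        rate = f.getframerate()
        data = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)
    return data.astype(np.float32) / 32767, rate


def enhance_array(samples: np.ndarray, rate: int) -> np.ndarray:
    """Round-trip a numpy chunk through the file-based enhancer."""
    with tempfile.TemporaryDirectory() as tmp:
        src, dst = f"{tmp}/in.wav", f"{tmp}/out.wav"
        write_wav(src, samples, rate)
        enhance_file(src, dst)
        enhanced, _ = read_wav(dst)
    return enhanced
```

With the placeholder swapped for a real enhancer call, `enhance_array` would let a streaming pipeline pass numpy chunks to a tool that only understands files.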

juanmc2005 commented 7 months ago

Hi @thieugiactu, that's an interesting idea.

To do this in a streaming way we would need access to a pre-trained model for the enhancement task, then implement a SpeechEnhancementModel and SpeechEnhancement block. This would allow you to build a pipeline where you call SpeechEnhancement before sending it to SpeakerSegmentation and SpeakerEmbedding.
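For the record, a skeleton of those two pieces might look as follows. Note that neither `SpeechEnhancementModel` nor `SpeechEnhancement` exists in diart today; the "model" here is a dummy peak normalizer standing in for a real pre-trained enhancement network, just to show the shape of the block.

```python
# Hypothetical sketch only: these classes are NOT part of diart.
import numpy as np


class SpeechEnhancementModel:
    """Wraps a pre-trained enhancement model (dummy peak normalizer here)."""

    def __call__(self, waveform: np.ndarray) -> np.ndarray:
        peak = np.max(np.abs(waveform))
        return waveform / peak if peak > 0 else waveform


class SpeechEnhancement:
    """Pipeline block: applies the enhancement model to each audio chunk."""

    def __init__(self, model: SpeechEnhancementModel):
        self.model = model

    def __call__(self, waveform: np.ndarray) -> np.ndarray:
        # A real block would also handle batching and devices
        return self.model(waveform)
```

The enhanced chunk would then be fed to SpeakerSegmentation and SpeakerEmbedding exactly like a raw chunk is today.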

In order to make this compatible with SpeakerDiarization (or any pipeline for that matter), we could implement a method like add_audio_preprocessors() to prepend any audio transformations (e.g. enhancement, resampling, volume change, etc.)
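To illustrate the idea (this method does not exist in diart yet, and the class below is a made-up stand-in for a real pipeline), the hook would just keep an ordered list of audio transforms and apply them before the rest of the pipeline runs:

```python
# Hypothetical sketch of an add_audio_preprocessors()-style hook.
from typing import Callable, List

import numpy as np

Preprocessor = Callable[[np.ndarray], np.ndarray]


class PipelineWithPreprocessors:
    """Stand-in for a diart pipeline, showing only the preprocessing hook."""

    def __init__(self):
        self._preprocessors: List[Preprocessor] = []

    def add_audio_preprocessors(self, *fns: Preprocessor) -> None:
        self._preprocessors.extend(fns)

    def __call__(self, waveform: np.ndarray) -> np.ndarray:
        # Apply enhancement, resampling, gain change, etc. in order
        for fn in self._preprocessors:
            waveform = fn(waveform)
        return waveform  # a real pipeline would continue with diarization
```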

thieugiactu commented 7 months ago

I will give it a try. If I have any questions regarding diart, can I ask them directly under this issue?

juanmc2005 commented 7 months ago

@thieugiactu sure! Feel free to open a PR too, I'd be glad to discuss possible solutions to this

thieugiactu commented 7 months ago

This is what I've been doing so far. I re-used your code but replaced the whisper model with a wav2vec2 model for speech recognition, since my PC couldn't handle whisper. (attached diagram: Untitled Diagram) The code works, but there are some adjustments that could be made:
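Since the diagram itself isn't legible in the thread: a diarization + ASR combination like the one described typically ends by assigning each transcribed word to the speaker whose diarized segment overlaps it most. A minimal sketch of that last step (all timings, labels, and the function name are made up for illustration):

```python
# Assign each ASR word to the speaker segment with maximal time overlap.
def assign_speakers(words, segments):
    """words: [(text, start, end)]; segments: [(speaker, start, end)]."""
    labeled = []
    for text, w0, w1 in words:
        best, best_overlap = None, 0.0
        for speaker, s0, s1 in segments:
            # Length of the intersection of [w0, w1] and [s0, s1]
            overlap = max(0.0, min(w1, s1) - max(w0, s0))
            if overlap > best_overlap:
                best, best_overlap = speaker, overlap
        labeled.append((best, text))
    return labeled
```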

juanmc2005 commented 6 months ago

@thieugiactu something you could also do to reduce inference time is to record audio at 44.1 kHz directly. That way you avoid having to upsample in the first place.

thaokimctu commented 6 months ago

@juanmc2005 thank you for your reply. Unfortunately, voicefixer is quite unstable and I couldn't make it work properly. More often than not it degraded the audio quality even further.