pyannote / pyannote-audio

Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
http://pyannote.github.io
MIT License

Possible to use reference speaker embeddings in Pyannote diarization pipeline? #1750

Open Arche151 opened 3 months ago

Arche151 commented 3 months ago

Hey everyone,

I am trying to use Pyannote with Whisper to transcribe meetings between my business partner and me, but the results haven't been great: about 50% of the time the wrong speaker is assigned.

So I thought about ways to improve the diarization accuracy and found the Pyannote API docs for creating Voiceprints from reference audio and then using them in the diarization pipeline.

But since I want to do everything locally, I searched for the open-source Pyannote equivalent of the Voiceprint feature, which seems to be https://huggingface.co/pyannote/embedding.

The problem: While I was able to extract embeddings from reference audios of my business partner and me, I have no idea how to use them in the diarization pipeline.
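For context, extracting a reference embedding and comparing two of them typically looks like the sketch below. The file paths are placeholders, and the pyannote calls assume the `pyannote/embedding` model from the Hugging Face hub (with a valid access token); only the cosine-similarity helper is plain NumPy.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def extract_reference_embedding(wav_path: str) -> np.ndarray:
    """Embed one whole reference clip with pyannote/embedding.

    Requires pyannote.audio installed and access to the gated model;
    imported lazily so the rest of the module works without it.
    """
    from pyannote.audio import Inference, Model
    model = Model.from_pretrained("pyannote/embedding")
    inference = Inference(model, window="whole")  # one embedding for the whole file
    return inference(wav_path)  # 1-D numpy array

# Usage (needs local audio files and model weights):
# me = extract_reference_embedding("me.wav")
# partner = extract_reference_embedding("partner.wav")
# cosine_similarity(me, partner)  # expected to be low for different speakers
```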

I didn't find any docs about this approach and was wondering if it's even possible locally, or only available via the Pyannote API.

I would greatly appreciate any kind of help/clarification :)

hbredin commented 3 months ago

The diarization pipeline has a return_embeddings option that might help you in this endeavour:

https://github.com/pyannote/pyannote-audio/blob/0ea4c025ee048c36d74ccdb8b3f4939a27ad729b/pyannote/audio/pipelines/speaker_diarization.py#L103-L106
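One way this option could be used: run the pipeline with `return_embeddings=True`, then map each diarized speaker to the closest reference voiceprint by cosine similarity and rename the labels. This is a hedged sketch, not an official recipe: it assumes the `pyannote/speaker-diarization-3.1` pipeline, that the returned embedding matrix has one row per speaker in `diarization.labels()` order, and that `me_emb` / `partner_emb` were extracted beforehand. The `match_speakers` helper itself is plain NumPy.

```python
import numpy as np

def match_speakers(embeddings: np.ndarray, references: dict[str, np.ndarray]) -> list[str]:
    """Map each diarized speaker (one row of `embeddings`) to the reference
    speaker whose embedding is most cosine-similar."""
    names = list(references)
    ref = np.stack([references[n] for n in names])                # (n_refs, dim)
    # L2-normalize both sides so the dot product equals cosine similarity
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    ref = ref / np.linalg.norm(ref, axis=1, keepdims=True)
    sims = emb @ ref.T                                            # (n_speakers, n_refs)
    return [names[i] for i in sims.argmax(axis=1)]

# Sketch of how this could plug into the pipeline (untested; needs
# pyannote.audio, a Hugging Face token, and precomputed reference embeddings):
#
# from pyannote.audio import Pipeline
# pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1",
#                                     use_auth_token="HF_TOKEN")
# diarization, embeddings = pipeline("meeting.wav", return_embeddings=True)
# labels = match_speakers(embeddings, {"me": me_emb, "partner": partner_emb})
# mapping = dict(zip(diarization.labels(), labels))
# diarization = diarization.rename_labels(mapping)
```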

Arche151 commented 3 months ago

@hbredin Omg, can't believe I got an answer from Mr. Pyannote himself.

I will try out your suggested approach and report back. Thanks a lot! :)