Arche151 opened this issue 3 months ago
The diarization pipeline has a return_embeddings option that might help you in this endeavour: run it with return_embeddings=True on your reference audio files to get the corresponding speaker embeddings.
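A minimal sketch of how that could fit together with the pyannote/speaker-diarization-3.1 pipeline; the Hugging Face token, file names, and speaker names below are placeholders, and the label/embedding ordering assumption is noted in the comments:

```python
# Hedged sketch, not official pyannote documentation: model name, token and
# file paths are placeholders chosen for illustration.
import numpy as np
from pyannote.audio import Pipeline
from scipy.spatial.distance import cdist

pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1", use_auth_token="YOUR_HF_TOKEN"
)

# One reference recording per known speaker; with a single speaker per file,
# return_embeddings=True yields one embedding row for that speaker.
ref_names = ["me", "partner"]
ref_embeddings = np.vstack([
    pipeline("me_reference.wav", return_embeddings=True)[1][0],
    pipeline("partner_reference.wav", return_embeddings=True)[1][0],
])

# Diarize the meeting; embeddings come back one row per detected speaker,
# in the same order as diarization.labels().
diarization, embeddings = pipeline("meeting.wav", return_embeddings=True)

# Map each anonymous label (SPEAKER_00, ...) to the closest reference voice.
distances = cdist(embeddings, ref_embeddings, metric="cosine")
mapping = {
    label: ref_names[distances[i].argmin()]
    for i, label in enumerate(diarization.labels())
}

for turn, _, label in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s-{turn.end:.1f}s: {mapping[label]}")
```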
@hbredin Omg, can't believe I got an answer from Mr. Pyannote himself.
I will try out your suggested approach and report back. Thanks a lot! :)
Hey everyone,
I am trying to use Pyannote together with Whisper to transcribe meetings between my business partner and me, but the results haven't been great: about 50% of the time the wrong speaker is assigned.
So I thought about ways to improve the diarization accuracy and found the Pyannote API docs for creating Voiceprints from reference audio and then using them in the diarization pipeline.
But since I want to do everything locally, I searched for the open-source Pyannote equivalent of the Voiceprint feature, which seems to be https://huggingface.co/pyannote/embedding
The problem: while I was able to extract embeddings from reference audio of my business partner and me, I have no idea how to use them in the diarization pipeline.
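For reference, extracting one embedding per file locally can look roughly like this; it follows the usage shown on the pyannote/embedding model card, with placeholder file names and token:

```python
# Rough sketch following the pyannote/embedding model card; file names and
# the auth token are placeholders.
import numpy as np
from pyannote.audio import Inference, Model
from scipy.spatial.distance import cdist

model = Model.from_pretrained("pyannote/embedding", use_auth_token="YOUR_HF_TOKEN")
inference = Inference(model, window="whole")  # one embedding for the whole file

emb_me = inference("me_reference.wav")
emb_partner = inference("partner_reference.wav")

# Cosine distance between the two reference voices (smaller = more similar).
distance = cdist(np.atleast_2d(emb_me), np.atleast_2d(emb_partner),
                 metric="cosine")[0, 0]
print(distance)
```

These per-speaker vectors could then be compared against whatever the diarization pipeline returns, along the lines of the return_embeddings suggestion above.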
I didn't find any docs about this approach and was wondering if it's even possible, or whether it's only available in the Pyannote API.
I would greatly appreciate any kind of help/clarification :)