pyannote / pyannote-audio

Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
http://pyannote.github.io
MIT License

How to finetune clustering and embedding models in speaker diarization pipeline? #1578

Open rkapur102 opened 7 months ago

rkapur102 commented 7 months ago

Hi @hbredin, how can I fine-tune the clustering and embedding models in the SpeakerDiarization pipeline? The tutorials only cover fine-tuning the segmentation model. Any help would be appreciated.

github-actions[bot] commented 7 months ago

Thank you for your issue. We found an entry in the FAQ which you may find helpful.

Feel free to close this issue if you found an answer in the FAQ.

If your issue is a feature request, please read this first and update your request accordingly, if needed.

If your issue is a bug report, please provide a minimum reproducible example as a link to a self-contained Google Colab notebook containing everything needed to reproduce the bug.

Providing an MRE will increase your chance of getting an answer from the community (either maintainers or other power users).

Companies relying on pyannote.audio in production may contact me via email.

This is an automated reply, generated by FAQtory

hbredin commented 7 months ago

Fine-tuning speaker embedding is currently not implemented as pyannote relies on external libraries for that part.

You can, however, tune the clustering threshold for your use case. This tutorial may help.
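
For reference, a minimal sketch of what tuning that threshold could look like (assuming the 3.x `SpeakerDiarization` pipeline; the parameter names below follow its published config and may differ in other versions):

```python
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="HF_TOKEN",  # your Hugging Face access token
)

# Re-instantiate the pipeline with a custom clustering threshold:
# lower values split speakers more aggressively, higher values merge them.
pipeline.instantiate({
    "segmentation": {"min_duration_off": 0.0},
    "clustering": {
        "method": "centroid",
        "min_cluster_size": 12,
        "threshold": 0.70,  # sweep this value on a held-out development set
    },
})

diarization = pipeline("audio.wav")
```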

rkapur102 commented 7 months ago

@hbredin is there a way to fine-tune the speaker embedding model separately and then pass it into the pyannote pipeline? I saw it is the ECAPA-TDNN model. It seems it can be trained from scratch, but I'm looking to fine-tune it instead. Could I fine-tune it on my own and pass the new model to the "embedding" parameter of SpeakerDiarization()? Do you know of any tutorials on this?

hbredin commented 7 months ago

I think this is a question for the speechbrain project.
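
That said, once you have a fine-tuned speechbrain ECAPA-TDNN checkpoint, the pipeline's `embedding` argument accepts a model identifier. A rough, unverified sketch (model names and hyperparameter values below are illustrative, not a recommendation):

```python
from pyannote.audio import Model
from pyannote.audio.pipelines import SpeakerDiarization

pipeline = SpeakerDiarization(
    segmentation=Model.from_pretrained("pyannote/segmentation-3.0"),
    # a Hugging Face identifier, or (assumption) a local directory holding
    # your fine-tuned speechbrain checkpoint, since speechbrain sources are
    # loaded through EncoderClassifier.from_hparams under the hood
    embedding="speechbrain/spkrec-ecapa-voxceleb",
    clustering="AgglomerativeClustering",
)

# Hyperparameters must be instantiated before the pipeline can run;
# a different embedding model usually calls for a re-tuned threshold.
pipeline.instantiate({
    "segmentation": {"min_duration_off": 0.0},
    "clustering": {
        "method": "centroid",
        "min_cluster_size": 12,
        "threshold": 0.70,  # illustrative; re-tune on your data
    },
})

diarization = pipeline("audio.wav")
```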

picheny-nyu commented 6 months ago

If I understand this correctly (and I may not), the 3.0 diarization pipeline seems to use the WeSpeaker embeddings, while older versions of the pipeline seem to use the SpeechBrain version. I am a bit confused, as the Plaquet paper, which otherwise seems to be a good description of the pipeline, still uses the SpeechBrain embeddings; maybe things changed when the pipeline became available on Hugging Face.

hbredin commented 6 months ago

Plaquet's paper comes with a companion repository (https://github.com/FrenchKrab/IS2023-powerset-diarization) that does include a pipeline based on speechbrain ECAPA-TDNN.
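
One quick way to check which embedding a given pretrained pipeline actually uses is to load and inspect it (a sketch; the `embedding` attribute reflects how the 3.x SpeakerDiarization pipeline stores its embedding argument and may change between versions):

```python
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="HF_TOKEN",
)
print(type(pipeline).__name__)  # -> SpeakerDiarization
print(pipeline.embedding)       # embedding identifier passed at construction
```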

stale[bot] commented 4 weeks ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.