hbredin opened this issue 9 months ago
Sorry for the late response.
My experiment code is built upon an unwrapped, CPU-only version of the 3.1 pipeline, which includes the two aforementioned separate ONNX models (the ResNet backbone and the final FC layer). The mask pooling is implemented in NumPy, so the code can't be embedded into the pipeline directly. I plan to test a pipeline-compatible version soon.
I think the key point is line 322 in pipelines/speaker_diarization.py, where the same waveform data is yielded three times. It would be more efficient to yield used_mask
with a shape like (spk, 1, num_frames). This adjustment would allow the model to infer all of them together without major modifications to the control flow.
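For illustration, the idea above can be sketched in NumPy: run the backbone once per chunk and pool its frame-level features with all speaker masks at once, instead of yielding the waveform once per speaker. Function names, shapes, and the mean-pooling choice here are hypothetical stand-ins, not the pipeline's actual API.

```python
import numpy as np

def masked_mean_pool(frame_features, masks):
    """Pool frame-level backbone features into one embedding per speaker.

    frame_features: (num_frames, dim), output of a single backbone pass
    masks: (num_speakers, num_frames), per-speaker activity weights
    """
    # (num_speakers, num_frames) @ (num_frames, dim) -> (num_speakers, dim)
    weighted_sum = masks @ frame_features
    totals = masks.sum(axis=1, keepdims=True)
    # guard against all-zero masks to avoid division by zero
    return weighted_sum / np.maximum(totals, 1e-8)

rng = np.random.default_rng(0)
frame_features = rng.standard_normal((293, 256))  # one backbone pass per chunk
masks = rng.random((3, 293))                      # 3 local speakers
embeddings = masked_mean_pool(frame_features, masks)
print(embeddings.shape)  # (3, 256)
```

With this shape convention, one backbone pass serves all local speakers of a chunk, and only the cheap pooling is repeated per speaker.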
Thanks. Will also look into this myself.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@hbredin @mengjie-du Hello, I recently realized that this could be avoided, and while searching through the issues I found out you had already initiated a discussion. Basically, I slightly changed the for-loop in https://github.com/pyannote/pyannote-audio/blob/a39991b92db46127ad3a3ec87cd0a933b18c8013/pyannote/audio/pipelines/speaker_diarization.py#L342 to skip batch items with all-zero masks (i.e. when at least one of the 3 local speakers is inactive, which turns out to be the case most of the time) and already reduced the latency by 20-25%.
I have two questions:
thanks
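The skip described above might look like the following NumPy sketch: batch items whose mask is entirely zero never reach the costly embedding model. The `embed_fn` callable and the embedding dimension are hypothetical stand-ins for the real model.

```python
import numpy as np

def embed_active_speakers(batch_masks, embed_fn, dim=256):
    """Run the (costly) embedding model only on batch items whose mask
    has at least one active frame; inactive speakers get NaN embeddings."""
    active = batch_masks.sum(axis=-1) > 0             # (batch,) boolean
    out = np.full((len(batch_masks), dim), np.nan)
    if active.any():
        out[active] = embed_fn(batch_masks[active])   # only active items
    return out

# toy embedding function standing in for the real model
fake_embed = lambda masks: np.ones((len(masks), 256))

masks = np.zeros((3, 589))
masks[1, 100:200] = 1.0   # only the second local speaker is active
emb = embed_active_speakers(masks, fake_embed)
print(np.isnan(emb[0]).all(), np.isnan(emb[1]).any())  # True False
```

Since downstream clustering already has to handle missing embeddings for inactive speakers, NaN rows are a natural placeholder in this sketch.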
> @hbredin @mengjie-du Hello, I recently realized that this could be avoided, and while searching through the issues I found out you had already initiated a discussion. Basically, I slightly changed the for-loop in https://github.com/pyannote/pyannote-audio/blob/a39991b92db46127ad3a3ec87cd0a933b18c8013/pyannote/audio/pipelines/speaker_diarization.py#L342 to skip batch items with all-zero masks (i.e. when at least one of the 3 local speakers is inactive, which turns out to be the case most of the time) and already reduced the latency by 20-25%.
How do you skip them and reduce the latency? Could you please share your code so I can study it?
Is there a reason why this isn't active? I'm very interested in making this work (in my current project I want to bring latency down as much as possible). I could step in and start working on this immediately.
Hey @nikosanto13, thanks for your message.
The main reason is a lack of resources (read: time) on my side. Another reason is that pyannote started as an academic research project, in which I tend to focus on improving accuracy rather than efficiency.
Note that the solution suggested by @mengjie-du (splitting the model in two parts) has been partially implemented here already.
I am not sure I'll be able to prioritize reviewing PRs on this particular aspect in the near future, though...
@hbredin I see, thanks for the update.
I'll create a fork where I'll finish the partial implementation for pyannote/wespeaker-voxceleb-resnet34-LM and add a couple of other changes that can significantly improve the latency of speechbrain/spkrec-ecapa-voxceleb, which are:
Anyway, I'll be glad to contribute if you find time in the future to welcome PRs on that front. By the way, props for the project -- it has helped me a ton.
@foreverhell stay tuned for the fork. I'll push the changes there.
It has been noticed that the 3.1 pipeline's efficiency suffers from speaker embedding inference. With the default config, every 10 s chunk has to be passed through the embedding model three times. Separating the embedding model into the ResNet backbone and the mask pooling proves effective: with this modification, every chunk only needs one pass through the backbone, bringing an almost 3x speedup in my experiment. Furthermore, a cached-inference strategy helps a lot as well, given the default overlap ratio of 90%.
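A toy sketch of the split-inference idea described above: a wrapper caches the backbone's frame-level features per chunk, so the three per-speaker calls trigger only a single backbone pass. The class, method names, and cache key are hypothetical, not the actual pyannote API.

```python
import numpy as np

class CachedBackbone:
    """Toy split-inference wrapper: one backbone pass per unique chunk,
    after which only the cheap per-speaker pooling remains."""

    def __init__(self, backbone):
        self.backbone = backbone
        self.cache = {}      # chunk start time -> frame-level features
        self.calls = 0       # count real backbone forward passes

    def features(self, chunk_start, waveform):
        if chunk_start not in self.cache:
            self.calls += 1
            self.cache[chunk_start] = self.backbone(waveform)
        return self.cache[chunk_start]

# stand-in backbone: maps a waveform to (num_frames, dim) features
backbone = lambda wav: wav.reshape(-1, 1) * np.ones((1, 4))

model = CachedBackbone(backbone)
wav = np.arange(8.0)
for speaker in range(3):              # 3 local speakers, same 10 s chunk
    feats = model.features(0.0, wav)  # cache hit after the first call
print(model.calls)  # 1 backbone pass instead of 3
```

The same cache could also serve the 90%-overlap case if features were stored at frame granularity rather than per chunk, which is presumably what the cached-inference strategy above refers to.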
Originally posted by @mengjie-du in https://github.com/pyannote/pyannote-audio/issues/1621#issuecomment-1918314722
Hey @mengjie-du, that's a nice idea. Would you be willing to contribute this to the pyannote.audio codebase? I tried to send you an email at the address mentioned in this paper but received an error message in return, so I am taking my chance here.