Overview
It would be great to have a demo that enables natural, real-time voice conversations with omni-mini-2 using WebRTC. This would let users interact with the model through speech, both locally and in cloud environments.
Existing Implementation Reference
I already have a working implementation for omni-mini-1 on Hugging Face Spaces that demonstrates this capability:
https://github.com/user-attachments/assets/040ec236-b336-4a14-880a-f9cde82e63e1
Key Benefits
Technical Resources
Potential Implementation
Porting the demo from omni-mini-1 to omni-mini-2 should be relatively straightforward given the existing implementation, and it would be a valuable addition to the project's interactive capabilities.
Would this be a feature the maintainers are interested in adding to the project?