
# WhisperFusion

Seamless conversations with AI (with ultra-low latency)

Welcome to WhisperFusion. WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech by integrating Mistral, a Large Language Model (LLM), on top of a real-time speech-to-text pipeline. Both the LLM and Whisper are optimized to run efficiently as TensorRT engines, maximizing performance and real-time processing capabilities, while WhisperSpeech is optimized with torch.compile.
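As a mental model, one conversation turn chains three stages. The Python sketch below is illustrative only: the names `conversation_turn`, `transcribe`, `generate`, and `synthesize` are hypothetical placeholders, not WhisperFusion's actual API.

```python
# Illustrative sketch of the pipeline shape; all method names are
# hypothetical placeholders, not the project's real API.
def conversation_turn(audio_chunk, asr, llm, tts):
    text = asr.transcribe(audio_chunk)   # WhisperLive: streaming speech-to-text
    reply = llm.generate(text)           # Mistral/Phi served as a TensorRT engine
    audio = tts.synthesize(reply)        # WhisperSpeech: reply text back to speech
    return audio

# WhisperSpeech is a PyTorch model, so it is accelerated with torch.compile
# rather than a TensorRT engine, along the lines of:
#   tts.model = torch.compile(tts.model)
```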

## Features

## Hardware Requirements

The demo was run on a single RTX 4090 GPU. WhisperFusion uses the NVIDIA TensorRT-LLM library for CUDA-optimized builds of popular LLM models. TensorRT-LLM supports multiple GPUs, so it should be possible to run WhisperFusion across multiple GPUs for even better performance.
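Before building the engines, it can help to confirm that a CUDA-capable GPU is visible. This is a generic PyTorch check, not part of WhisperFusion itself:

```python
# Generic sanity check (assumes PyTorch built with CUDA support is installed);
# not part of the WhisperFusion codebase.
import torch

assert torch.cuda.is_available(), "An NVIDIA GPU with CUDA is required"
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")
```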

## Getting Started

We provide a Docker Compose setup to streamline the deployment of the pre-built TensorRT-LLM docker container. This setup includes both Whisper and Phi converted to TensorRT engines, with the WhisperSpeech model pre-downloaded so you can start interacting with WhisperFusion right away. Additionally, we include a simple web server for the Web GUI.

```bash
docker compose build
export MODEL=Phi-3-mini-4k-instruct  # or Phi-3-mini-128k-instruct or phi-2; by default WhisperFusion uses phi-2
docker compose up
```
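The `MODEL` variable selects which LLM engine the containers use. A minimal sketch of how such a default could be resolved in Python; the variable name `MODEL` and the `phi-2` default come from the README above, the rest is illustrative:

```python
import os

# Illustrative: resolve the model name from the environment, falling back
# to phi-2, which the README states is the default.
model = os.environ.get("MODEL", "phi-2")
print(f"Using LLM engine: {model}")
```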



- Open the Web GUI at `http://localhost:8000`
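Once the containers are up, a quick way to confirm the web server is responding, assuming the default port 8000 from above (a generic HTTP check, not part of the project):

```python
# Generic smoke test against the Web GUI's default address; not part of
# the WhisperFusion codebase.
import urllib.request

with urllib.request.urlopen("http://localhost:8000", timeout=5) as resp:
    print(resp.status)  # expect 200 once the server is ready
```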


## Contact Us

For questions or issues, please open an issue on the repository, or contact us at marcus.edel@collabora.com, jpc@collabora.com, and vineet.suryan@collabora.com.