
A modular voice assistant application for experimenting with state-of-the-art transcription, response generation, and text-to-speech models. Supports OpenAI, Groq, ElevenLabs, CartesiaAI, and Deepgram APIs, plus local models via Ollama. Ideal for research and development in voice technology.
MIT License

VERBI - Voice Assistant πŸŽ™οΈ


Motivation ✨✨✨

Welcome to the Voice Assistant project! πŸŽ™οΈ Our goal is to create a modular voice assistant application that allows you to experiment with state-of-the-art (SOTA) models for various components. The modular structure provides flexibility, enabling you to pick and choose between different SOTA models for transcription, response generation, and text-to-speech (TTS). This approach facilitates easy testing and comparison of different models, making it an ideal platform for research and development in voice assistant technologies. Whether you're a developer, researcher, or enthusiast, this project is for you!

Features 🧰

Project Structure πŸ“‚

voice_assistant/
β”œβ”€β”€ voice_assistant/
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ audio.py
β”‚   β”œβ”€β”€ api_key_manager.py
β”‚   β”œβ”€β”€ config.py
β”‚   β”œβ”€β”€ transcription.py
β”‚   β”œβ”€β”€ response_generation.py
β”‚   β”œβ”€β”€ text_to_speech.py
β”‚   β”œβ”€β”€ utils.py
β”‚   β”œβ”€β”€ local_tts_api.py
β”‚   β”œβ”€β”€ local_tts_generation.py
β”œβ”€β”€ .env
β”œβ”€β”€ run_voice_assistant.py
β”œβ”€β”€ setup.py
β”œβ”€β”€ requirements.txt
└── README.md

Setup Instructions πŸ“‹

Prerequisites βœ…

Step-by-Step Instructions πŸ”’

  1. πŸ“₯ Clone the repository
   git clone https://github.com/PromtEngineer/Verbi.git
   cd Verbi
  2. 🐍 Set up a virtual environment

    Using venv:

    python -m venv venv
    source venv/bin/activate  # On Windows use `venv\Scripts\activate`

    Using conda:

    conda create --name verbi python=3.10
    conda activate verbi
  3. πŸ“¦ Install the required packages
   pip install -r requirements.txt
  1. πŸ› οΈ Set up the environment variables

Create a .env file in the root directory and add your API keys:

    OPENAI_API_KEY=your_openai_api_key
    GROQ_API_KEY=your_groq_api_key
    DEEPGRAM_API_KEY=your_deepgram_api_key
    LOCAL_MODEL_PATH=path/to/local/model
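
To verify the keys are actually visible before launching the assistant, you can run a quick check. This is a minimal sketch, not part of the repo, and it assumes the python-dotenv package is installed (pip install python-dotenv):

    # check_env.py -- minimal sketch, not part of the repo; assumes python-dotenv.
    import os
    from dotenv import load_dotenv

    load_dotenv()  # reads the .env file from the current working directory

    for key in ("OPENAI_API_KEY", "GROQ_API_KEY", "DEEPGRAM_API_KEY", "LOCAL_MODEL_PATH"):
        # Only report whether each variable is set; never print the secrets themselves.
        print(f"{key}: {'set' if os.getenv(key) else 'MISSING'}")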
  5. 🧩 Configure the models

Edit config.py to select the models you want to use:

    import os

    class Config:
        # Model selection
        TRANSCRIPTION_MODEL = 'groq'  # Options: 'openai', 'groq', 'deepgram', 'fastwhisperapi', 'local'
        RESPONSE_MODEL = 'groq'       # Options: 'openai', 'groq', 'ollama', 'local'
        TTS_MODEL = 'deepgram'        # Options: 'openai', 'deepgram', 'elevenlabs', 'local', 'melotts'

        # API keys and paths
        OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
        GROQ_API_KEY = os.getenv("GROQ_API_KEY")
        DEEPGRAM_API_KEY = os.getenv("DEEPGRAM_API_KEY")
        LOCAL_MODEL_PATH = os.getenv("LOCAL_MODEL_PATH")

If you are running an LLM locally via Ollama, make sure the Ollama server is running before starting Verbi.
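
A quick way to confirm the Ollama server is reachable is to query its local HTTP endpoint (it listens on port 11434 by default). The snippet below is an illustrative check only, not part of Verbi:

    # check_ollama.py -- illustrative only; assumes Ollama's default local address.
    import json
    import urllib.request

    try:
        # /api/tags lists the models that have been pulled locally.
        with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=3) as resp:
            models = json.load(resp).get("models", [])
        print("Ollama is running. Local models:", [m["name"] for m in models])
    except OSError as err:
        print("Ollama server not reachable:", err)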

  6. πŸ”Š Configure ElevenLabs Jarvis' Voice

    • Voice samples here.
    • Follow this link to add the Jarvis voice to your ElevenLabs account.
    • Name the voice 'Paul J.' or, if you prefer a different name, ensure it matches the ELEVENLABS_VOICE_ID variable in the text_to_speech.py file.
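
    For reference, the sketch below shows roughly how a voice ID is used in an ElevenLabs text-to-speech request. Verbi's actual call lives in text_to_speech.py, so treat this as an illustration only; it also assumes an ELEVENLABS_API_KEY entry in your .env, which you should add if you use ElevenLabs:

      # Illustrative ElevenLabs TTS request; Verbi's real implementation is in
      # voice_assistant/text_to_speech.py. ELEVENLABS_API_KEY is assumed to be set.
      import os
      import requests

      voice_id = "your_voice_id_here"  # must correspond to the voice you added above
      url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
      headers = {"xi-api-key": os.getenv("ELEVENLABS_API_KEY")}

      response = requests.post(url, json={"text": "Hello, I am Verbi."}, headers=headers, timeout=30)
      response.raise_for_status()
      with open("output.mp3", "wb") as f:
          f.write(response.content)  # the response body is the generated audio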
  2. πŸƒ Run the voice assistant

   python run_voice_assistant.py
  8. 🎀 Install FastWhisperAPI

    Optional step if you need a local transcription model

    Clone the repository:

      cd ..
      git clone https://github.com/3choff/FastWhisperAPI.git
      cd FastWhisperAPI

    Install the required packages:

      pip install -r requirements.txt

    Run the API:

      fastapi run main.py

    Alternative Setup and Run Methods

    The API can also run in a Docker container or in Google Colab.

    Docker:

    Build the Docker image:

      docker build -t fastwhisperapi .

    Run the container:

      docker run -p 8000:8000 fastwhisperapi

    Refer to the repository documentation for the Google Colab method: https://github.com/3choff/FastWhisperAPI/blob/main/README.md
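
    Once the API is up, you can send it a quick test request before wiring it into Verbi. The snippet below is a hedged sketch: it assumes an OpenAI-style multipart /v1/transcriptions endpoint on port 8000, so check the FastWhisperAPI README for the exact route and field names:

      # Quick local test of the transcription API; the endpoint and field names are
      # assumptions -- verify them against the FastWhisperAPI README.
      import requests

      with open("sample.wav", "rb") as audio_file:
          response = requests.post(
              "http://localhost:8000/v1/transcriptions",
              files={"file": audio_file},
              timeout=60,
          )
      response.raise_for_status()
      print(response.json())  # expected to contain the transcribed text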

  9. 🎀 Install Local TTS - MeloTTS

    Optional step if you need a local Text to Speech model

    Install MeloTTS from GitHub

    Use the following link to install MeloTTS for your operating system.

    Once the package is installed in your local virtual environment, you can start the API server with the following command.

      python voice_assistant/local_tts_api.py

    The local_tts_api.py file implements a FastAPI server that listens for incoming text and generates audio with the MeloTTS model. To use the local TTS model, update the config.py file by setting:

      TTS_MODEL = 'melotts'        # Options: 'openai', 'deepgram', 'elevenlabs', 'local', 'melotts'

    You can then run the main file to start using Verbi with local models.
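
    As with the transcription API, you can smoke-test the local TTS server before pointing Verbi at it. The route, port, and JSON fields below are hypothetical placeholders (the real interface is defined in voice_assistant/local_tts_api.py and consumed by local_tts_generation.py), so adjust them to match that file:

      # Hypothetical smoke test for the local MeloTTS server; the /generate-audio
      # route, port, and payload fields are placeholders -- check local_tts_api.py
      # for the actual values.
      import requests

      payload = {"text": "Hello from Verbi.", "file_name": "test_output.wav"}
      response = requests.post("http://localhost:8000/generate-audio", json=payload, timeout=60)
      response.raise_for_status()
      print("TTS server responded:", response.status_code)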

Model Options βš™οΈ

Transcription Models 🎀

Response Generation Models πŸ’¬

Text-to-Speech (TTS) Models πŸ”Š

Detailed Module Descriptions πŸ“˜

Roadmap πŸ›€οΈπŸ›€οΈπŸ›€οΈ

Here's what's next for the Voice Assistant project:

  1. Add Support for Streaming: Enable real-time streaming of audio input and output.
  2. Add Support for ElevenLabs and Enhanced Deepgram for TTS: Integrate additional TTS options for higher quality and variety.
  3. Add Filler Audios: Include background or filler audios while waiting for model responses to enhance user experience.
  4. Add Support for Local Models Across the Board: Expand support for local models in transcription, response generation, and TTS.

Contributing 🀝

We welcome contributions from the community! If you'd like to help improve this project, please follow these steps:

  1. Fork the repository.
  2. Create a new branch (git checkout -b feature-branch).
  3. Make your changes and commit them (git commit -m 'Add new feature').
  4. Push to the branch (git push origin feature-branch).
  5. Open a pull request detailing your changes.

Star History ✨✨✨

Star History Chart