
Talk to any LLM with fast hands-free voice interaction, a Live2D talking face, and long-term memory, running locally across platforms
MIT License

Open-LLM-VTuber

:warning: This project is in its early stages and is currently under active development. Features are unstable, code is messy, and breaking changes will occur. The main goal of this stage is to build a minimum viable prototype using technologies that are easy to integrate.

:warning: This project currently has a lot of issues on Windows. In theory, everything should work, but many Windows users are running into dependency problems. Those issues will probably be fixed in the future, but Windows support currently requires testing and debugging. If you have a Mac or a Linux machine, use it instead for the time being. Join the Discord server if you are having trouble or just want to talk.

:warning: If you want to run this program on a server and access it remotely from your laptop, the microphone on the front end will only launch in a secure context (i.e., https or localhost). See the MDN Web Docs. Therefore, you should either configure https with a reverse proxy or launch the front end locally and connect to the server via WebSocket (untested): open static/index.html with your browser and set the ws URL on the page.

Open-LLM-VTuber allows you to talk to any LLM by voice (hands-free) locally with a Live2D talking face. The LLM inference backend, speech recognition, and speech synthesizer are all designed to be swappable. This project can be configured to run offline on macOS, Linux, and Windows.

Long-term memory with MemGPT can be configured to achieve perpetual chat, infinite* context length, and external data sources.

This project started as an attempt to recreate the closed-source AI VTuber neuro-sama with open-source alternatives that can run offline on platforms other than Windows.


https://github.com/t41372/Open-LLM-VTuber/assets/36402030/e8931736-fb0b-4cab-a63a-eea5694cbb83

Why this project and not other similar projects on GitHub?

Basic Goals

Target Platform

Recent Feature Updates

Implemented Features

Currently supported LLM backend

Currently supported Speech recognition backend

Currently supported Text to Speech backend

Fast Text Synthesis

Live2D Talking face

Live2D technical details

Install & Usage

New installation instructions are being created here

Install FFmpeg on your computer.

Clone this repository.

You need to have Ollama or any other OpenAI-API-Compatible backend ready and running. If you want to use MemGPT as your backend, scroll down to the MemGPT section.

Prepare the LLM of your choice. Edit the BASE_URL and MODEL in the project directory's conf.yaml.
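For example, if you are running Ollama locally, its OpenAI-compatible API listens on port 11434 by default. A sketch of the relevant conf.yaml lines might look like the following; the model name is only an example, and whether the /v1 suffix is needed depends on how the project builds the request URL, so check the comments in conf.yaml:

# Sketch only -- point the project at a local Ollama instance
BASE_URL: "http://localhost:11434/v1"
# Any model you have pulled locally will do, e.g. with `ollama pull llama3`
MODEL: "llama3"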

This project was developed using Python 3.10.13. I strongly recommend creating a virtual Python environment (with conda, for example) for this project.
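For example, with conda (the environment name below is arbitrary):

conda create -n open-llm-vtuber python=3.10.13 # create an isolated environment with the Python version used for development
conda activate open-llm-vtuber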

Run the following in the terminal to install the dependencies.

pip install -r requirements.txt # Run this in the project directory
# Install Speech recognition dependencies and text-to-speech dependencies according to the instructions below

This project, by default, launches the audio interaction mode, meaning you can talk to the LLM by voice, and the LLM will talk back to you by voice.

Edit the conf.yaml for configurations. You can follow the configuration used in the demo video.

If you want to use Live2D, run server.py to launch the WebSocket communication server, then open the URL you set in conf.yaml (http://HOST:PORT). By default, go to http://localhost:8000.

Run main.py with Python. Some models will be downloaded during your first launch, which may take a while.
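With Live2D enabled, you therefore end up running two processes. One way to do it (in two separate terminals, with your environment activated):

python server.py # terminal 1: the WebSocket / Live2D front-end server
python main.py   # terminal 2: the main program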

Also, the Live2D models are fetched over the internet, so you'll need to stay connected until index.html has fully loaded your desired Live2D model.

Update

Back up the configuration file conf.yaml if you've edited it, and then update the repo. Or just clone the repo again and make sure to transfer your configurations. The configuration file will change from time to time because this project is still in its early stages, so be cautious when updating the program.
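A minimal way to do that from the project directory, assuming you cloned the repo with git:

cp conf.yaml conf.yaml.bak # keep a copy of your edited configuration
git pull                   # update the repository
# then merge your settings from conf.yaml.bak back into the new conf.yaml by hand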

Install Speech Recognition

Edit the ASR_MODEL setting in conf.yaml to change the speech recognition provider.

Here are the options you have for speech recognition:

FunASR (local) (Runs very fast even on CPU. Not sure how they did it)

Faster-Whisper (local)

WhisperCPP (local) (runs super fast on a Mac if configured correctly)

WhisperCPP Core ML configuration:

Whisper (local)

AzureASR (online, API Key required)
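For example, to switch to Faster-Whisper, the relevant line in conf.yaml would look roughly like this; the exact value strings accepted by ASR_MODEL may differ from the display names above, so check the comments in conf.yaml:

# Speech recognition backend to use (value shown is illustrative)
ASR_MODEL: "Faster-Whisper"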

Install Speech Synthesis (text to speech)

Install the respective package and turn it on using the TTS_MODEL option in conf.yaml.

pyttsx3TTS (local, fast)

meloTTS (local, fast)

barkTTS (local, slow)

cosyvoiceTTS (local, slow)

edgeTTS (online, no API key required)

AzureTTS (online, API key required)
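For example, to use edgeTTS, the relevant line in conf.yaml would look roughly like this; again, the exact value strings accepted by TTS_MODEL may differ, so check the comments in conf.yaml:

# Text-to-speech backend to use (value shown is illustrative)
TTS_MODEL: "edgeTTS"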

Azure API for Speech Recognition and Text to Speech (API key needed)

Create a file named api_keys.py in the project directory, paste the following text into the file, and fill in the API keys and region you gathered from your Azure account.

# Azure API key
AZURE_API_Key="YOUR-API-KEY-GOES-HERE"

# Azure region
AZURE_REGION="YOUR-REGION"

# Choose the Text to speech model you want to use
AZURE_VOICE="en-US-AshleyNeural"

If you're using macOS, you need to grant microphone permission to your terminal emulator (you run this program inside your terminal, right? Enable the microphone permission for your terminal). Otherwise, speech recognition will not be able to hear you because it does not have permission to use your microphone.

MemGPT

MemGPT integration is very experimental and requires quite a lot of setup. In addition, MemGPT requires a powerful LLM (larger than 7B, with quantization above Q5) and a large token footprint, which makes it a lot slower. MemGPT does have its own free LLM endpoint, though, which you can use for testing. Check their docs.

This project can use MemGPT as its LLM backend. MemGPT equips the LLM with long-term memory.

To use MemGPT, you need to have the MemGPT server configured and running. You can install it using pip or docker or run it on a different machine. Check their GitHub repo and official documentation.

:warning: I recommend you install MemGPT either in a separate Python virtual environment or in Docker because there is currently a dependency conflict between this project and MemGPT (on FastAPI, it seems). You can check this issue: Can you please upgrade typer version in your dependancies #1382.

Here is a checklist:

Issues

PortAudio Missing

Running in a Container

:warning: This is highly experimental, totally untested (because I use a Mac), and totally unfinished. If you are having trouble with all the dependencies, however, you can try to have trouble with the container instead, which is still a lot of trouble but a different set of trouble, I guess.

Current issues:

Setup guide:

  1. Review conf.yaml before building (currently burned into the image, I'm sorry):

    • Set MIC_IN_BROWSER to true (required because your mic doesn't live inside the container)
  2. Build the image:

    docker build -t open-llm-vtuber .

    (Grab a drink, this may take a while)

  3. Run the container:

    docker run -it --net=host -p 8000:8000 open-llm-vtuber "sh"
  4. Inside the container, run:

    • server.py
    • Open the frontend website in your browser
    • main.py (Use screen, tmux, or similar to run server.py and main.py simultaneously)
  5. Open localhost:8000 to test

Development

(this project is in the active prototyping stage, so many things will change)

Some abbreviations used in this project:

Add support for new TTS providers

  1. Implement TTSInterface defined in tts/tts_interface.py.
  2. Add your new TTS provider into tts_factory: the factory to instantiate and return the TTS instance.
  3. Add configuration to conf.yaml. The dict with the same name will be passed into the constructor of your TTSEngine as kwargs.
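A minimal sketch of what such a provider could look like. The real method names and signatures are whatever tts/tts_interface.py declares; the file name, the generate_audio method, and its arguments below are assumptions for illustration only.

# tts/my_tts.py -- hypothetical provider, assuming it lives in the tts/ package
from .tts_interface import TTSInterface


class TTSEngine(TTSInterface):
    def __init__(self, voice="default", **kwargs):
        # kwargs is the dict with the same name in conf.yaml
        self.voice = voice

    def generate_audio(self, text, file_name_no_ext="temp"):
        # Assumed method: synthesize `text` into an audio file and return its path.
        audio_path = f"{file_name_no_ext}.wav"
        # ... call your actual TTS library here and write the result to audio_path ...
        return audio_path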

Add support for new Speech Recognition provider

  1. Implement ASRInterface defined in asr/asr_interface.py.
  2. Add your new ASR provider into asr_factory: the factory to instantiate and return the ASR instance.
  3. Add configuration to conf.yaml. The dict with the same name will be passed into the constructor of your class as kwargs.
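As with TTS, a minimal sketch; the actual method names are defined in asr/asr_interface.py, and the file name, class name, and transcribe_np method below are assumptions for illustration only.

# asr/my_asr.py -- hypothetical provider, assuming it lives in the asr/ package
import numpy as np

from .asr_interface import ASRInterface


class VoiceRecognition(ASRInterface):
    def __init__(self, model_size="small", **kwargs):
        # kwargs is the dict with the same name in conf.yaml
        self.model_size = model_size

    def transcribe_np(self, audio: np.ndarray) -> str:
        # Assumed method: take raw audio samples and return the recognized text.
        # Replace this body with calls to your actual speech recognition library.
        return ""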

Add support for new LLM provider

  1. Implement LLMInterface defined in llm/llm_interface.py.
  2. Add your new LLM provider into llm_factory: the factory to instantiate and return the LLM instance.
  3. Add configuration to conf.yaml. The dict with the same name will be passed into the constructor of your class as kwargs.
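And a sketch for an LLM provider; the actual interface is defined in llm/llm_interface.py, and the file name, constructor arguments, and chat_iter method below are assumptions for illustration only.

# llm/my_llm.py -- hypothetical provider, assuming it lives in the llm/ package
from .llm_interface import LLMInterface


class LLM(LLMInterface):
    def __init__(self, base_url, model, system="", **kwargs):
        # kwargs is the dict with the same name in conf.yaml
        self.base_url = base_url
        self.model = model
        self.system = system

    def chat_iter(self, prompt: str):
        # Assumed method: yield the response piece by piece so speech synthesis
        # can start before the full reply is finished.
        # Replace this body with calls to your actual LLM backend.
        yield ""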

Acknowledgement

Awesome projects I learned from