Welcome to BabbleBeaver, an open-source conversational AI platform designed with privacy and flexibility in mind. BabbleBeaver leverages the power of Large Language Models (LLM) to provide customizable and isolated conversational agents, encapsulated in Docker containers for easy deployment and management.
BabbleBeaver aims to democratize conversational AI, offering a plug-and-play solution that respects user privacy and data sovereignty. Built on top of FastAPI, BabbleBeaver facilitates rapid development and deployment of AI-powered chatbots with support for multiple, isolated LLM implementations.
Make sure you have Python installed on your machine. You can download and install Python from the official website: https://www.python.org/downloads/
Create a new directory for your project, then clone the repository and move into the project root directory:
git clone https://github.com/YourUsername/BabbleBeaver.git
cd BabbleBeaver
Then run the following commands, which create a virtual environment, activate it, install the necessary dependencies, and start the application locally:
BabbleBeaver % virtualenv venv
BabbleBeaver % source venv/bin/activate
(venv) BabbleBeaver % pip install -r requirements.txt
(venv) BabbleBeaver % uvicorn main:app --reload
To build the Docker image and start the FastAPI server inside a container, run:
docker build -t babble-beaver .
docker run -p 8000:8000 babble-beaver
Alternatively, build and start everything in one step with Docker Compose:
docker-compose up --build
Currently, BabbleBeaver is set up to work with LLMs from several major proprietary providers, including OpenAI, Google, Mistral, Anthropic, and Cohere. It also supports LLM integration via open-source providers such as Ollama, OpenRouter, and HuggingFace. If you would like to integrate a specific model, follow these steps exactly:
From the project root directory, open the model_config.ini configuration file in the model_config directory.
For the new LLM you'd like to use, create a new entry in the configuration file with the following parameters. Keep in mind that none of the lines should be wrapped in quotes; the quotes below only demonstrate how each line must be specified.

- "[name of the model]" - The model name must be enclosed within square brackets and must correspond to the actual model ID specified by the provider. (REQUIRED)
- "name of provider" (REQUIRED)
- "the model's context window" - This needs to be an integer. (REQUIRED)
- "os.getenv("NAME OF API KEY IN .env file")" - If working with Ollama, replace the right side of this expression with any random string. (REQUIRED)
- "maximum number of tokens this LLM should generate per output" - This needs to be an integer. (OPTIONAL)
- "a floating point value dictating what kinds of outputs the LLM provides" - This value needs to be in the range 0 <= temperature <= 1. (OPTIONAL)
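For illustration only, a hypothetical entry might look like the sketch below. The bracketed section header and the os.getenv(...) line follow the rules above, but the key names (provider, context_window, api_key, max_tokens, temperature), the gpt-4o model, and the values shown are assumptions; check an existing entry in model_config.ini for the exact key names your version expects.

```ini
[gpt-4o]
provider = openai
context_window = 128000
api_key = os.getenv("OPENAI_API_KEY")
max_tokens = 1024
temperature = 0.7
```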
Once you've finished adding the model to the configuration file, head over to the main.py file.
Here, if you scroll down to the /chatbot endpoint, you will notice that you need to provide a few pieces of information about the model you want to use to get it up and running:

- llm - The name of the model as specified in the configuration file (an error will be thrown if there's a mismatch).
- provider - The name of the provider as specified in the configuration file (an error will be thrown if there's a mismatch).
- tokenizer_function - The tokenizer associated with this model. Make sure that this is a function.
- completion_function - Fill in the body of this pre-created function with the API call to be made to get a response from the model. Make sure not to modify any of its parameters.
- use_initial_prompt - Certain models, such as gemini-1.0-pro, may not support system instructions, although most models do. You still need to specify this parameter (as a boolean) in the function call to set_model to ensure that system instructions are passed in along with each API call, if you would like them to be. A rough sketch of such a call is shown right after this list.
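The sketch below is illustrative only: the exact signature of set_model and the pre-created completion_function live in main.py, so mirror those rather than this. The keyword-argument style, the tiktoken tokenizer, the gpt-4o model name, and the OPENAI_API_KEY variable are all assumptions.

```python
import os

import tiktoken
from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def tokenizer_function(text):
    # Must be a function; here, an OpenAI tokenizer via tiktoken (an assumption).
    return tiktoken.encoding_for_model("gpt-4o").encode(text)

def completion_function(messages):
    # Only the body of the pre-created function in main.py should be filled in;
    # the parameter shown here is a placeholder -- keep the original parameters.
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# set_model is BabbleBeaver's own function (see main.py); shown here only to
# illustrate which values go where.
set_model(
    llm="gpt-4o",                           # must match the section name in model_config.ini
    provider="openai",                      # must match the provider in model_config.ini
    tokenizer_function=tokenizer_function,
    completion_function=completion_function,
    use_initial_prompt=True,                # False if the model rejects system instructions
)
```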
IMPORTANT NOTE: There are some special considerations, especially if you're integrating an Ollama model, such as the following:
- Run ollama pull model_name, where model_name corresponds to the model you want to integrate.
- Run ollama serve to start Ollama up locally.
After this point, you can use the model, but keep in mind that when you're specifying the completion_function parameter in main.py and using the OpenAI completions API to generate an output from this model, you must also specify the base_url parameter when instantiating the client, providing http://localhost:11434/v1/ as the value.
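For example, a minimal sketch of such a client follows; the llama3 model name is only illustrative (use whatever model you pulled), and any non-empty string works as the api_key when talking to a local Ollama server.

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible endpoint at this base_url; the api_key
# only needs to be a non-empty placeholder string.
client = OpenAI(base_url="http://localhost:11434/v1/", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",  # illustrative: the model pulled with `ollama pull`
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```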
After installation, BabbleBeaver can be accessed at http://localhost:8000 by default. The API documentation, generated by FastAPI, is available at http://localhost:8000/docs.
BabbleBeaver adopts a modular architecture, with each conversational AI model implemented as a separate submodule. This allows for the easy addition, removal, or replacement of LLM implementations without affecting the core system.
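To make the idea concrete, here is a purely illustrative sketch of such a plug-in structure; the class and registry names are assumptions, not BabbleBeaver's actual module layout.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Interface a pluggable model implementation might expose (illustrative only)."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class EchoProvider(LLMProvider):
    """Stand-in implementation; a real submodule would call its provider's API here."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def get_provider(name: str) -> LLMProvider:
    # Adding, removing, or replacing an LLM implementation only touches this
    # registry, leaving the core system unchanged.
    registry = {"echo": EchoProvider}
    return registry[name]()
```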
We welcome contributions! Please see our Contributing Guidelines for how to get involved, including setting up your development environment and the pull request process.
BabbleBeaver is committed to fostering an inclusive and safe community. Please read our Community Guidelines to understand our values and expectations.
Our open-source project aims to create an accessible, robust, and flexible conversational AI development platform.
Select a Pre-trained Model: Choose a pre-trained model from Hugging Face's Transformers library that suits your needs for a conversational chatbot. Models like GPT (GPT-2, GPT-3 if you have access through OpenAI's API, or GPT-Neo/GPT-J for fully open-source alternatives) are good starting points for building conversational agents.
Local Inference: Perform all inference locally (or on your private cloud) to keep the data isolated. This means running the model on your own hardware without sending data out to third-party AI providers. The Hugging Face Transformers library allows you to easily download pre-trained models and use them locally.
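As a minimal sketch of fully local inference with the Transformers pipeline API (gpt2 here is just a stand-in for whichever model you select):

```python
from transformers import pipeline

# The model is downloaded once, then all generation runs on local hardware;
# no prompt or response data leaves your machine.
generator = pipeline("text-generation", model="gpt2")

result = generator("User: Hi there!\nBot:", max_new_tokens=50)
print(result[0]["generated_text"])
```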
Fine-tuning (Optional): If the pre-trained model doesn't meet your specific needs or you want to adapt it to your domain-specific conversations, you can fine-tune the model on your own dataset. This step requires:
- A custom dataset: Prepare a dataset representative of the conversations your chatbot will handle.
- Compute resources: Fine-tuning LLMs can be resource-intensive, depending on the model size and dataset.
- Keeping data isolated: Ensure your training process is conducted in a secure environment to maintain data privacy.
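A heavily simplified sketch of such a local fine-tuning run is shown below; gpt2 and the conversations.txt file are placeholders, and a real run needs careful dataset preparation and hyperparameter choices.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Placeholder model and dataset; everything stays on your own infrastructure.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "conversations.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-chatbot",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```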
BabbleBeaver is GPL-licensed. See the LICENSE file for details.
This project was inspired by the open-source community and the need for privacy-respecting AI tools.
For questions, suggestions, or contributions, please open an issue in this repository or contact us at team@open.build.
- Copyleft: The GPL is a strong copyleft license, which means that any modified versions of the project must also be distributed under the GPL. This ensures that the main codebase and any derivatives remain open source, promoting collaboration and improvement.
- Compatibility with Other Licenses for Sub-Repositories: While the GPL itself is strict about the licensing of derived works, it allows for linking or interfacing with software under different licenses under certain conditions.