Embark on an exciting adventure with Loyal Elephie, your faithful AI sidekick! This project combines a neat Next.js web UI with a mighty Python backend, leveraging the latest advancements in Large Language Models (LLMs) and Retrieval Augmented Generation (RAG) to deliver a seamless and meaningful chatting experience!
Controllable Memory: Take control of Loyal Elephie's memory! You decide which moments to save, and you can easily edit the context as needed. It serves as your second brain for episodic memory.
Hybrid Search: Experience the powerful combination of Chroma and BM25 for efficient searches! It is also optimized for handling date-relevant queries.
Secure Web Access: With a built-in login feature, only authorized users can access your AI companion, ensuring your conversations remain private and secure over the internet.
Streamlined LLM Agent: Loyal Elephie drives its agent with XML syntax, so no function calling is required. It is also optimized for low token usage and works smoothly with strong local LLMs served via Llama.cpp or ExllamaV2.
(Optional) Markdown Editor Integration: Connect with online Markdown editors to view the originally referenced document during chats, and get real-time LLM knowledge integration after editing your notes online.
Loyal Elephie supports both open and proprietary LLMs and embedding models, as long as they are served through OpenAI-compatible APIs.
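The hybrid-search idea above can be illustrated in plain Python. This is only a sketch: the project itself uses Chroma for the vector side, `toy_embed` below is a toy stand-in for a real embedding model, and the fusion weight `alpha` is an illustrative parameter, not the project's actual ranking formula.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Classic Okapi BM25 over pre-tokenized documents."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    df = Counter()
    for d in docs_tokens:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def toy_embed(text):
    """Toy stand-in for an embedding model: normalized character-bigram counts."""
    vec = Counter(text[i:i + 2] for i in range(len(text) - 1))
    norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
    return {k: v / norm for k, v in vec.items()}

def cosine(a, b):
    return sum(a[k] * b.get(k, 0.0) for k in a)

def hybrid_search(query, docs, alpha=0.5):
    """Fuse normalized BM25 and vector scores; higher alpha favors BM25."""
    bm25 = bm25_scores(query.lower().split(), [d.lower().split() for d in docs])
    top = max(bm25) or 1.0
    qv = toy_embed(query.lower())
    dense = [cosine(qv, toy_embed(d.lower())) for d in docs]
    fused = [alpha * (s / top) + (1 - alpha) * c for s, c in zip(bm25, dense)]
    return sorted(range(len(docs)), key=lambda i: fused[i], reverse=True)

docs = [
    "Met Alice for coffee on 2024-05-01",
    "Finished reading a book about elephants",
    "Planned the Next.js frontend refactor",
]
print(hybrid_search("coffee with Alice", docs))  # doc 0 should rank first
```

Keyword scores catch exact terms (including dates), while the vector side catches paraphrases; combining both is what makes date-relevant queries work well.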
Warning: This project was originally designed for Linux, and compatibility with Windows or macOS has not been fully tested. If you are on Windows, I strongly recommend running this project in WSL.
Meta-Llama-3-70B-Instruct.Q4_K_S.gguf was used when capturing the screenshots below.
With SilverBulletMd, you can edit a note in the browser and then let Loyal Elephie remember it!
The UI is modified from https://github.com/mckaywrigley/chatbot-ui-lite; credit to the author, Mckay Wrigley!
1. Clone Repo
git clone https://github.com/v2rockets/Loyal-Elephie.git
2. Install Frontend Requirements
cd frontend
npm i
3. Configure Login Users
frontend/users.json
[{
"username":"admin",
"password":"admin"
}]
4. Install Backend Requirements
cd backend
pip install -r requirements.txt
5. Configure Backend Settings
# backend/settings.py
NICK_NAME = 'Peter' # Your nickname. Set it once at the beginning and keep it unchanged so the LLM does not get confused.
CHAT_BASE_URL = 'https://api.openai.com/v1' # Modify to your OpenAI compatible API url
CHAT_API_KEY = 'your-api-key'
CHAT_MODEL_NAME = "gpt-3.5-turbo"
# Language Preference (experimental)
# Supported Languages: English, Chinese, German, French, Spanish, Portuguese, Italian, Dutch, Czech, Polish, Russian, Arabic
LANGUAGE_PREFERENCE = "English"
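As a sanity check, these settings map directly onto an OpenAI-compatible chat completion call. The sketch below only builds the request (no network is touched); the helper name `build_chat_request` is illustrative and not part of the backend.

```python
# Sketch: how the backend settings map onto an OpenAI-compatible chat request.
# The endpoint path and payload shape follow the OpenAI Chat Completions API;
# any compatible local server (e.g. Llama.cpp) accepts the same shape.
CHAT_BASE_URL = 'https://api.openai.com/v1'
CHAT_API_KEY = 'your-api-key'
CHAT_MODEL_NAME = "gpt-3.5-turbo"

def build_chat_request(messages):
    """Return (url, headers, json_payload) for a chat completion call."""
    url = f"{CHAT_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {CHAT_API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {"model": CHAT_MODEL_NAME, "messages": messages}
    return url, headers, payload

url, headers, payload = build_chat_request(
    [{"role": "user", "content": "Hello, Loyal Elephie!"}]
)
print(url)  # https://api.openai.com/v1/chat/completions
```

To use a local server, only `CHAT_BASE_URL` (and usually `CHAT_MODEL_NAME`) needs to change; the rest of the request stays identical.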
6. Run App
frontend:
cd frontend
npm run build
npm run start
backend:
cd backend
python app.py
Some of the local LLMs tested and confirmed to work:
For those who need a hands-on local embedding API, an embedding server example is included in "external_example".
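For orientation, an OpenAI-compatible embedding endpoint ultimately just returns JSON in the `/v1/embeddings` response shape. The sketch below is a hypothetical minimal version using a hash-based toy embedder; the actual server in "external_example" will differ and should call a real embedding model.

```python
import hashlib

def dummy_embed(text, dim=8):
    """Deterministic toy embedding (hash-based); a real server would run a model."""
    h = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in h[:dim]]

def embeddings_response(texts, model="dummy-embedding"):
    """Build a response dict matching the OpenAI /v1/embeddings schema."""
    return {
        "object": "list",
        "model": model,
        "data": [
            {"object": "embedding", "index": i, "embedding": dummy_embed(t)}
            for i, t in enumerate(texts)
        ],
        "usage": {"prompt_tokens": 0, "total_tokens": 0},
    }

resp = embeddings_response(["hello", "world"])
print(len(resp["data"]), len(resp["data"][0]["embedding"]))  # 2 8
```

Any HTTP framework can serve this dict at `POST /v1/embeddings`; as long as the response schema matches, Loyal Elephie's backend can point at it like any other OpenAI-compatible embedding API.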