LLM chat-bot infrastructure service featuring multiple bot instances with custom plugins (code execution, SQL execution, document processing, etc.) and frontends (Telegram, WhatsApp, web).
Join the project's Telegram group and share your ideas.
```shell
python3 -m virtualenv .venv
source .venv/bin/activate
pip install -r requirements.txt
pytest
```
Then message `@BotFather` on Telegram to create a bot and get a token. Create `master.json`:
```json
{
  "infra": {
    "debug": false,
    "server": {
      "port": 8080
    },
    "postgres": {
      "enabled": false,
      "host": "localhost",
      "port": 5432,
      "user": "postgres",
      "password": "password",
      "schemas": "public"
    },
    "influxdb": {
      "enabled": false,
      "url": "http://localhost:8086",
      "org": "org",
      "bucket": "bucket",
      "token": "token123123token"
    }
  },
  "bots": {
    "<YOUR BOT USERNAME>": {
      "bot_id": "<YOUR BOT USERNAME>",
      "type": "tg_bot",
      "token": "<TELEGRAM BOT TOKEN>",
      "tao_bot": {
        "username": "<YOUR BOT USERNAME>",
        "chats": [],
        "admins": [
          "<YOUR USERNAME>"
        ],
        "users": [],
        "bot_mention_names": [
          "tao",
          "тао"
        ],
        "control_chat_id": "-",
        "messages_per_completion": 20,
        "system_prompt": "./"
      },
      "gpt": {
        "url": "https://api.openai.com/v1/chat/completions",
        "type": "openai",
        "token": "<YOUR OPENAI TOKEN>",
        "model": "gpt-3.5",
        "temperature": 0,
        "max_tokens": 1000,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": 0
      }
    }
  }
}
```
Note that `postgres` and `influxdb` are disabled here, so chat messages are kept in-memory. You can configure these services later.
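The `enabled` flags in the `infra` section decide which optional backends the platform talks to. A minimal sketch of how a service might read them — the file name `master.json` and the config layout come from the example above, while the helper names are illustrative, not the project's actual API:

```python
import json

def load_config(path="master.json"):
    """Load the platform config file described in the README."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def enabled_backends(config):
    """Report which optional infra services are switched on.

    Missing sections default to disabled, matching the in-memory fallback
    the README describes.
    """
    infra = config.get("infra", {})
    return {
        name: bool(infra.get(name, {}).get("enabled", False))
        for name in ("postgres", "influxdb")
    }
```

With the sample config above, `enabled_backends` reports both services as off, so chat history stays in memory.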
Once that is done, fire up the platform:

```shell
python entrypoint.py
```
The platform uses:

- `./var` to store all files, including tmp files, logs, system prompts, etc.
- `./var/system_prompts/<bot name>` for per-bot system prompts (created automatically if not present)
- `ffmpeg` for audio processing
- `grafana` + `influxdb` for metric collection
- `flyway` and a `postgres` database to persist chat history
- `master.json` for configuration
- `whisper` to transcribe audio messages

Metrics are sent to `influxdb` and can be visualised in `grafana`.
The Copilot notebook contains code that builds a system prompt out of all project files. Using this prompt, GPT can help you write code components and tests.