Are you sick of skimming through tons of Telegram messages every day looking for the valuable bits? Salvation is here!

This repository hosts an implementation of a Telegram application that monitors and summarizes group chats. Initially created for personal use, it is intended for people who need to gather information from one or several massive, live Telegram group chats that generate far too many messages to review manually.
Based on the given configuration, the app:

- parses messages from the configured Telegram group chats over the configured lookback period,
- summarizes them with an LLM,
- sends the summaries to the configured receivers via a Telegram bot.

To set everything up:

1. Obtain the api_id and api_hash values for the Telegram API using this guide.
2. Get an OpenAI API key (the app uses the gpt-4-turbo-preview model); it's pretty easy to replace OpenAI with the backend of your choice, as the model is used via LangChain library calls (see the sketch after the configuration example below).
3. Create a Telegram bot (via @BotFather) to obtain the bot auth token.
4. Write a summarization prompt, using the examples/ folder as a reference.
5. Create a config.json file:

```json
"telegram_api_id": <api_id>,
"telegram_api_hash": "<api_hash>",
"openai_api_key": "<key>",
"telegram_bot_auth_token": "<token>",
"chats_to_summarize": [
{
"id": "<group chat ID or name>",
"lookback_period_seconds": 86400,
"summarization_prompt_path": "prompts/example_summarization_prompt.txt"
}
],
"telegram_summary_receivers": [
"<Telegram username>"
]
}
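Since the LLM is called through LangChain (as noted in the setup steps above), swapping the backend should mostly be a matter of constructing a different chat model. The sketch below is an assumption of how that could look, not the repository's actual code; the function names and prompt handling are illustrative.

```python
# Hypothetical sketch: the LLM sits behind LangChain's chat-model interface,
# so another backend can be dropped in. Names here are illustrative.
from langchain_openai import ChatOpenAI

def build_llm(openai_api_key: str):
    # Default backend, matching the gpt-4-turbo-preview model mentioned above.
    return ChatOpenAI(model="gpt-4-turbo-preview", api_key=openai_api_key)
    # To use a different provider, return another LangChain chat model instead,
    # e.g. langchain_anthropic.ChatAnthropic(model="claude-3-haiku-20240307").

def summarize(llm, prompt_text: str, messages: list[str]) -> str:
    # The app loads the prompt from summarization_prompt_path; here we simply
    # append the collected chat messages to it and invoke the model once.
    response = llm.invoke(prompt_text + "\n\n" + "\n".join(messages))
    return response.content
```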
Install the dependencies:

```shell
python3 -m pip install -r requirements.txt
```

or build the Docker image:

```shell
docker build -t tcsa:latest .
```

Then run the app:

```shell
python3 app.py config.json
```

or

```shell
docker run -it tcsa:latest
```
On the first run, the app will ask you to log in to the Telegram account being used, like this:

```
user@pc:~/telegram-chat-summarizer $ python3 app.py config.json
2024-03-27 23:03:11,618 - INFO - Started!
Please enter your phone (or bot token): <phone number>
Please enter the code you received: <OTP>
Please enter your password:
```

The session is then stored on disk, so subsequent runs won't require authentication.
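The prompts above match Telethon's standard interactive sign-in flow. Assuming that is the client library in use, the following minimal sketch illustrates the login and on-disk session handling; the session name and config handling are illustrative, not the exact code from app.py.

```python
# Sketch of a Telethon login/session flow (assumed; names are illustrative).
import json
from telethon import TelegramClient

with open("config.json") as f:
    config = json.load(f)

# Telethon persists the session ("summarizer" here is a hypothetical name) as a
# .session file on disk, so later runs skip the phone/OTP/password prompts.
client = TelegramClient("summarizer",
                        config["telegram_api_id"],
                        config["telegram_api_hash"])

async def main():
    # start() prompts for the phone, OTP and 2FA password on the first run only.
    await client.start()
    me = await client.get_me()
    print("Logged in as", me.username)

client.loop.run_until_complete(main())
```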
Once the app is up and running, each summary receiver needs to send the /verify message to the bot so that it can register the user.

The bot can switch conversation context when it receives the /<summarized chat name> command (the chat name can be any of the ones defined in the config). This mechanism is useful when more than one chat is being summarized: by sending the corresponding command you switch the LLM context to a different chat and can discuss that chat's summary.
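A minimal sketch of how the bot-side commands could be wired up with Telethon (assuming it is also used for the bot) is shown below; the handlers, in-memory registries, and placeholder values are illustrative, not the repository's actual code.

```python
# Hypothetical sketch of the bot-side command handling (not the actual app code).
from telethon import TelegramClient, events

API_ID, API_HASH, BOT_TOKEN = 12345, "<api_hash>", "<token>"  # placeholders
SUMMARIZED_CHATS = {"chat_one", "chat_two"}   # chat names from config.json (illustrative)
ALLOWED_RECEIVERS = {"some_username"}         # telegram_summary_receivers (illustrative)

bot = TelegramClient("bot", API_ID, API_HASH).start(bot_token=BOT_TOKEN)
verified_users = {}    # username -> user id, filled by /verify
active_context = {}    # user id -> chat name currently being discussed

@bot.on(events.NewMessage(pattern=r"^/verify$"))
async def verify(event):
    # Register the sender so summaries can be delivered to them.
    sender = await event.get_sender()
    if sender.username in ALLOWED_RECEIVERS:
        verified_users[sender.username] = sender.id
        await event.respond("You are registered and will receive summaries.")

@bot.on(events.NewMessage(pattern=r"^/(\w+)$"))
async def switch_context(event):
    # Treat /<chat name> as a context switch; unknown names (including
    # "verify", which is handled above) are simply ignored here.
    chat_name = event.pattern_match.group(1)
    if chat_name in SUMMARIZED_CHATS:
        active_context[event.sender_id] = chat_name
        await event.respond(f"Switched context to {chat_name}.")

bot.run_until_disconnected()
```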
The implementation is very simplistic, and there is definitely room for improvement; PRs with immediate nice-to-haves are welcome!

There is also a step-by-step guide on Habr (in Russian) based on this implementation.