snakeying / GPT-Telegram-Worker

A multi-model AI Telegram bot powered by Cloudflare Workers, supporting multiple APIs including OpenAI, Claude, and Azure. Developed in TypeScript with a modular design for easy extension.

๐Ÿค–๐Ÿ’ฌ Telegram GPT Worker - Multifunctional AI Assistant

English | ็ฎ€ไฝ“ไธญๆ–‡ | ็น้ซ”ไธญๆ–‡ | ๆ—ฅๆœฌ่ชž | Espaรฑol | Franรงais | ะ ัƒััะบะธะน | Deutsch

๐Ÿ“– Project Overview

Welcome to Telegram GPT Worker! ๐Ÿ‘‹ This is an efficient Telegram bot developed in TypeScript. It supports multiple languages and AI models, runs on Cloudflare Workers, and delivers a fast, scalable service experience.

๐ŸŒŸ Key Features

  1. ๐Ÿง  Multi-model Support: Integrates multiple AI models including OpenAI, Google Gemini, Anthropic Claude, Groq, and Azure OpenAI.
  2. ๐Ÿ”— OpenAI Compatible Model Support: Designed specifically for AI model interface management and distribution systems such as One API and New API, supporting automatic model list retrieval.
  3. ๐Ÿ’ฌ Intelligent Conversation: Equipped with context memory capability, ensuring smooth and natural dialogues.
  4. ๐ŸŽจ Image Generation: Supports text-to-image generation using DALLยทE and Cloudflare Flux technologies.
  5. ๐Ÿ–ผ๏ธ Image Analysis: Allows users to upload images for intelligent analysis, using either OpenAI or Google Gemini models.
  6. ๐ŸŒ Multilingual Support: Built-in i18n functionality, supporting 8 languages to meet diverse needs.
  7. ๐Ÿ”’ User Permission Management: Controls access through whitelist functionality, enhancing security.
  8. โ˜๏ธ High-performance Deployment: Utilizes Cloudflare Workers' edge computing capabilities for rapid response.
  9. ๐Ÿ—„๏ธ Efficient Data Management: Uses Redis for data caching and management, ensuring efficient processing (see the sketch after this list).
  10. ๐Ÿ”ง Flux Prompt Optimization: Optional feature that optimizes image generation prompts for Flux model through an external API.
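Conversation memory (feature 3) and caching (feature 9) are backed by Upstash Redis. Below is a minimal, hypothetical sketch of how per-chat history could be stored over Upstash's REST API; the helper names are illustrative and the project's actual redis.ts may differ.

```typescript
// Hypothetical sketch (not the project's actual redis.ts) of storing per-chat
// conversation history via the Upstash Redis REST API, using the
// UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN variables listed below.
interface RedisEnv {
  UPSTASH_REDIS_REST_URL: string;
  UPSTASH_REDIS_REST_TOKEN: string;
}

interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

async function saveContext(env: RedisEnv, chatId: number, history: ChatMessage[]): Promise<void> {
  // SET, sending the value in the request body so large histories need no URL encoding.
  await fetch(`${env.UPSTASH_REDIS_REST_URL}/set/context:${chatId}`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${env.UPSTASH_REDIS_REST_TOKEN}` },
    body: JSON.stringify(history),
  });
}

async function loadContext(env: RedisEnv, chatId: number): Promise<ChatMessage[]> {
  // GET returns { "result": "<stored string>" } or { "result": null } if the key is absent.
  const res = await fetch(`${env.UPSTASH_REDIS_REST_URL}/get/context:${chatId}`, {
    headers: { Authorization: `Bearer ${env.UPSTASH_REDIS_REST_TOKEN}` },
  });
  const data = await res.json() as { result: string | null };
  return data.result ? (JSON.parse(data.result) as ChatMessage[]) : [];
}
```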

๐Ÿ“‹ System Requirements

Before getting started, please ensure you have the following:

  1. A Cloudflare account (for deploying the Worker)
  2. A Telegram bot token obtained from @BotFather
  3. An Upstash Redis database (REST URL and token) for conversation caching
  4. An API key for at least one supported AI provider (OpenAI, Google Gemini, Anthropic Claude, Groq, Azure OpenAI, or an OpenAI-compatible endpoint)

๐Ÿš€ Quick Start

  1. Clone the project repository
  2. Configure necessary environment variables
  3. Deploy to Cloudflare Workers
  4. Set up Telegram Webhook

For detailed deployment steps, please refer to the tutorial below.

๐Ÿ“ Available Commands

๐Ÿ“ Project Structure

/GPT-Telegram-Worker
โ”‚
โ”œโ”€โ”€ /src
โ”‚   โ”œโ”€โ”€ /api
โ”‚   โ”‚   โ”œโ”€โ”€ azure.ts               # Handle Azure API interactions
โ”‚   โ”‚   โ”œโ”€โ”€ claude.ts              # Handle Claude API interactions
โ”‚   โ”‚   โ”œโ”€โ”€ flux-cf.ts             # Handle Cloudflare AI drawing interface
โ”‚   โ”‚   โ”œโ”€โ”€ gemini.ts              # Handle Google Gemini API interactions
โ”‚   โ”‚   โ”œโ”€โ”€ groq.ts                # Handle Groq API interactions
โ”‚   โ”‚   โ”œโ”€โ”€ image_generation.ts    # Handle DALLยทE drawing interface
โ”‚   โ”‚   โ”œโ”€โ”€ model_api_interface.ts # Common interface defining model API standard structure
โ”‚   โ”‚   โ”œโ”€โ”€ openai_api.ts          # Handle OpenAI API interactions
โ”‚   โ”‚   โ”œโ”€โ”€ openai_compatible.ts   # Handle OpenAI-compatible API interactions
โ”‚   โ”‚   โ””โ”€โ”€ telegram.ts            # Handle Telegram bot logic
โ”‚   โ”œโ”€โ”€ /config                    # Configuration files
โ”‚   โ”‚   โ””โ”€โ”€ commands.ts            # Telegram bot commands
โ”‚   โ”œโ”€โ”€ /utils
โ”‚   โ”‚   โ”œโ”€โ”€ helpers.ts             # Utility functions and tools
โ”‚   โ”‚   โ”œโ”€โ”€ i18n.ts                # Multilingual functions
โ”‚   โ”‚   โ”œโ”€โ”€ redis.ts               # Upstash Redis functions
โ”‚   โ”‚   โ””โ”€โ”€ image_analyze.ts       # Image upload functions
โ”‚   โ”œโ”€โ”€ index.ts                   # Entry file, handling requests and responses
โ”‚   โ””โ”€โ”€ env.ts                     # Configure environment variables
โ”œโ”€โ”€ /types                         # Type definition files
โ”‚   โ””โ”€โ”€ telegram.d.ts              # Type definitions for Telegram API
โ”œโ”€โ”€ wrangler.toml                  # Cloudflare Worker configuration file
โ”œโ”€โ”€ tsconfig.json                  # TypeScript configuration file
โ”œโ”€โ”€ package.json                   # Project dependency file
โ””โ”€โ”€ README.md                      # Project documentation
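The modular layout above centers on a shared model API contract (model_api_interface.ts) that each backend implements. The repository's actual interface is not reproduced here; a hypothetical sketch of such a contract might look like this:

```typescript
// Hypothetical sketch of a common model-API contract of the kind
// model_api_interface.ts defines; the real interface in the repository may differ.
export interface ModelAPIInterface {
  // Send a chat request to the backend and return the assistant's reply text.
  generateResponse(messages: { role: string; content: string }[], model?: string): Promise<string>;
  // Report whether this backend recognizes a given model name.
  isValidModel(model: string): boolean;
}
```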

๐Ÿš€ Detailed Tutorial

Deploying to Cloudflare Workers

Using Wrangler CLI

  1. Install Wrangler CLI:

    npm install -g @cloudflare/wrangler
  2. Log in to your Cloudflare account:

    wrangler login
  3. Create a new Workers project:

    wrangler init telegram-bot
  4. Copy the dist/index.js file into your project.

  5. Edit the wrangler.toml file to configure your project:

    name = "telegram-bot"
    type = "javascript"
    account_id = "your_account_id"
    workers_dev = true
  6. Deploy to Cloudflare Workers:

    wrangler publish

Using Cloudflare Dashboard

  1. Log in to the Cloudflare Dashboard.
  2. Select "Workers & Pages".
  3. Click "Create application" and choose "Create Worker".
  4. Name your Worker and click "Deploy".
  5. Copy and paste the contents of dist/index.js into the editor, then save the file.
  6. Add necessary environment variables in "Settings".

Configuring Telegram Webhook

Use the Telegram Bot API's setWebhook method to register the webhook. URL format, followed by a filled-in example:

https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook?url=https://your-worker.your-subdomain.workers.dev/webhook
https://api.telegram.org/bot123456789:abcdefghijklmn/setWebhook?url=https://gpt-telegram-worker.abcdefg.workers.dev/webhook
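You can open that URL in a browser, call it with curl, or use a short script. The following sketch registers the webhook with fetch; the token and Worker URL are placeholders you must replace.

```typescript
// One-off registration script (a sketch; run with any TypeScript runner, e.g. tsx).
// Replace the placeholders with your real bot token and deployed Worker URL.
const BOT_TOKEN = '<YOUR_BOT_TOKEN>';
const WORKER_URL = 'https://your-worker.your-subdomain.workers.dev/webhook';

const res = await fetch(
  `https://api.telegram.org/bot${BOT_TOKEN}/setWebhook?url=${encodeURIComponent(WORKER_URL)}`
);
// On success Telegram answers with { ok: true, result: true, ... }.
console.log(await res.json());
```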

Local Development

  1. Clone the project:

    git clone https://github.com/snakeying/GPT-Telegram-Worker.git
  2. Install dependencies:

    npm install
  3. Set up environment variables.

  4. Compile TypeScript:

    npm run build
  5. Start the bot:

    npm start

๐Ÿ”ง Environment Variables

| Variable Name | Description | Default Value | Example |
|---|---|---|---|
| OPENAI_API_KEY | OpenAI API key | - | sk-abcdefghijklmnopqrstuvwxyz123456 |
| OPENAI_BASE_URL | OpenAI API base URL | https://api.openai.com/v1 | https://your-custom-endpoint.com/v1 |
| OPENAI_MODELS | List of available OpenAI models | - | gpt-3.5-turbo,gpt-4 |
| TELEGRAM_BOT_TOKEN | Telegram bot token | - | 123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11 |
| WHITELISTED_USERS | List of user IDs allowed to use the bot | - | 12345678,87654321 |
| SYSTEM_INIT_MESSAGE | System initialization message | You are a helpful assistant. | You are a helpful assistant. |
| SYSTEM_INIT_MESSAGE_ROLE | System initialization message role | system | system |
| DEFAULT_MODEL | Default AI model to use | - | gpt-3.5-turbo |
| UPSTASH_REDIS_REST_URL | Upstash Redis REST URL | - | https://your-redis-url.upstash.io |
| UPSTASH_REDIS_REST_TOKEN | Upstash Redis REST token | - | your-redis-token |
| DALL_E_MODEL | DALL-E model version | dall-e-3 | dall-e-3 |
| CLOUDFLARE_API_TOKEN | Cloudflare API token | - | your-cloudflare-api-token |
| CLOUDFLARE_ACCOUNT_ID | Cloudflare account ID | - | your-cloudflare-account-id |
| FLUX_STEPS | Number of Flux generation steps | 4 | 4-8 (maximum 8) |
| PROMPT_OPTIMIZATION | Enable Flux prompt optimization | false | true |
| EXTERNAL_API_BASE | External API base URL | - | https://external-api.com |
| EXTERNAL_MODEL | External model name | - | external-model-name |
| EXTERNAL_API_KEY | External API key | - | external-api-key |
| GOOGLE_MODEL_KEY | Google AI model API key | - | your-google-api-key |
| GOOGLE_MODEL_BASEURL | Google AI model API base URL | https://generativelanguage.googleapis.com/v1beta | https://your-custom-google-endpoint.com |
| GOOGLE_MODELS | List of available Google AI models | - | gemini-pro,gemini-pro-vision |
| GROQ_API_KEY | Groq API key | - | your-groq-api-key |
| ANTHROPIC_API_KEY | Anthropic API key | - | your-anthropic-api-key |
| ANTHROPIC_BASE_URL | Anthropic API base URL | https://api.anthropic.com | https://your-custom-anthropic-endpoint.com |
| OPENAI_COMPATIBLE_KEY | OpenAI Compatible API key | - | sk-abcdefghijklmnopqrstuvwxyz123456 |
| OPENAI_COMPATIBLE_URL | OpenAI Compatible API base URL | - | https://your-custom-endpoint.com/v1 |

Note: Variables without a default value must be configured manually.
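Inside the Worker these values arrive as environment bindings. The following partial typing is illustrative only; the project's env.ts may organize and default them differently.

```typescript
// Partial, illustrative typing of the Worker's environment bindings;
// the project's env.ts may structure these differently.
export interface Env {
  OPENAI_API_KEY: string;
  OPENAI_BASE_URL?: string;          // defaults to https://api.openai.com/v1
  OPENAI_MODELS: string;             // comma-separated, e.g. "gpt-3.5-turbo,gpt-4"
  TELEGRAM_BOT_TOKEN: string;
  WHITELISTED_USERS: string;         // comma-separated Telegram user IDs
  DEFAULT_MODEL?: string;
  UPSTASH_REDIS_REST_URL: string;
  UPSTASH_REDIS_REST_TOKEN: string;
  PROMPT_OPTIMIZATION?: string;      // "true" enables Flux prompt optimization
  EXTERNAL_API_BASE?: string;
  EXTERNAL_MODEL?: string;
  EXTERNAL_API_KEY?: string;
}
```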

๐Ÿš€ Image Analysis Feature

Allows users to upload images and receive AI analysis results. Here's how to use it:

  1. User sends an image to the bot.
  2. In the image caption, add an analysis prompt, e.g., "Please analyze this image".
  3. The bot will use the currently selected AI model (OpenAI or Google Gemini) to analyze the image.
  4. The analysis result will be returned to the user as a text message.

Note: Make sure the AI model you're using supports image analysis. If the current model doesn't support it, the bot will prompt you to switch to a multimodal-supporting model.
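Under the hood, the bot has to turn the photo's file_id into downloadable bytes before passing them to the selected model. A simplified sketch of that retrieval step, with illustrative names, follows; the project's image_analyze.ts may implement it differently.

```typescript
// Simplified sketch of resolving a Telegram photo's file_id to raw image bytes
// via the Bot API's getFile method. Function and variable names are illustrative.
async function fetchTelegramImage(botToken: string, fileId: string): Promise<ArrayBuffer> {
  // getFile returns a file_path that stays valid for at least one hour.
  const infoRes = await fetch(`https://api.telegram.org/bot${botToken}/getFile?file_id=${fileId}`);
  const info = await infoRes.json() as { ok: boolean; result?: { file_path: string } };
  if (!info.ok || !info.result) throw new Error('getFile failed');

  const file = await fetch(`https://api.telegram.org/file/bot${botToken}/${info.result.file_path}`);
  // Raw image bytes, ready to base64-encode for a vision-capable model request.
  return file.arrayBuffer();
}
```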

๐Ÿš€ Flux Prompt Optimization

When the PROMPT_OPTIMIZATION environment variable is set to true, the Flux image generation feature uses an external API to optimize prompts. This feature works through the following steps:

  1. User provides the original prompt.
  2. The system uses the external API configured with EXTERNAL_API_BASE, EXTERNAL_MODEL, and EXTERNAL_API_KEY to optimize the prompt.
  3. The optimized prompt is used by the Flux model to generate the image.

This feature can help generate more precise images that better align with Flux model characteristics. To use this feature, make sure all relevant environment variables are correctly configured.
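The README does not specify the external API's request format, but the EXTERNAL_* variable names suggest an OpenAI-compatible endpoint. Assuming a standard chat-completions interface, the optimization call (step 2) could look roughly like this:

```typescript
// Rough sketch of the prompt-optimization call, assuming EXTERNAL_API_BASE exposes an
// OpenAI-style /v1/chat/completions endpoint; the project's actual request may differ.
async function optimizeFluxPrompt(
  env: { EXTERNAL_API_BASE: string; EXTERNAL_MODEL: string; EXTERNAL_API_KEY: string },
  userPrompt: string
): Promise<string> {
  const res = await fetch(`${env.EXTERNAL_API_BASE}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${env.EXTERNAL_API_KEY}`,
    },
    body: JSON.stringify({
      model: env.EXTERNAL_MODEL,
      messages: [
        { role: 'system', content: 'Rewrite the user prompt as a detailed English prompt suited to the Flux image model.' },
        { role: 'user', content: userPrompt },
      ],
    }),
  });
  const data = await res.json() as { choices: { message: { content: string } }[] };
  // The optimized prompt is then passed on to the Flux model for image generation.
  return data.choices[0].message.content;
}
```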

โš ๏ธ Important Notes

  1. ๐Ÿšฆ Use API Quotas Responsibly: Be mindful of usage limits, especially for image generation and analysis services.
  2. ๐Ÿ” Protect Sensitive Information: Safeguard your environment variables and API keys.
  3. ๐Ÿง  Understand Model Characteristics: Choose the AI model that best fits your application scenario.
  4. ๐Ÿ”„ Stay Updated: Regularly update code and features for optimal performance.
  5. ๐Ÿ›ก๏ธ Security First: Update API keys periodically and follow the principle of least privilege.
  6. ๐ŸŽจ Flux Prompt Optimization: When enabling PROMPT_OPTIMIZATION, ensure correct configuration of EXTERNAL_API_BASE, EXTERNAL_MODEL, and EXTERNAL_API_KEY.
  7. โ›” Avoid Model Conflicts: To prevent potential conflicts, do not add models under OpenAI Compatible that are already served by another configured API. For instance, if you have configured the Gemini API and selected the gemini-1.5-flash model, you should not add the same model under OpenAI Compatible.

๐Ÿ”ง Troubleshooting

๐Ÿ“„ License

This project is licensed under the MIT License.

Copyright (c) 2024 snakeying