llmcord lets you (and your friends) chat with LLMs directly in Discord. It works with practically any LLM, remote or locally hosted.
Just @ the bot to start a conversation and reply to continue. Build conversations with reply chains!
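The reply-chain idea can be sketched in plain Python: start from the newest message, follow each message's reference to its parent, then reverse the chain into the chronological list an LLM API expects. The `Msg` class and `build_history` helper below are illustrative stand-ins, not llmcord's actual code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Msg:
    """Stand-in for a Discord message: its text, author role, and the message it replies to."""
    content: str
    role: str                       # "user" or "assistant"
    reply_to: Optional["Msg"] = None

def build_history(newest: Msg, max_messages: int = 25) -> list[dict]:
    """Walk the reply chain from newest to oldest, then reverse it
    into the chronological message list an OpenAI-style API expects."""
    chain = []
    node = newest
    while node is not None and len(chain) < max_messages:
        chain.append({"role": node.role, "content": node.content})
        node = node.reply_to
    return list(reversed(chain))

# A three-message conversation built via replies:
first = Msg("Hi bot!", "user")
second = Msg("Hello! How can I help?", "assistant", reply_to=first)
third = Msg("Tell me a joke.", "user", reply_to=second)
history = build_history(third)
```

Capping the walk at `max_messages` mirrors the config setting of the same name: very long reply chains get truncated to their most recent messages.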
llmcord supports popular remote LLM providers as well as locally hosted models. Any OpenAI compatible API server works.
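"OpenAI compatible" means the server accepts the same `/chat/completions` request shape as OpenAI's API, so switching providers is mostly a matter of changing the base URL. Here is a minimal standard-library sketch of such a request; the base URL, model name, and `chat_completion_request` helper are placeholders for illustration, not llmcord's code.

```python
import json
import urllib.request

def chat_completion_request(base_url: str, model: str, messages: list[dict],
                            api_key: str = "unused") -> urllib.request.Request:
    """Build a chat completions request for any OpenAI compatible server."""
    payload = {"model": model, "messages": messages, "max_tokens": 4096}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )

# Pointing at a hypothetical local server; a remote provider only needs
# a different base_url and a real api_key.
req = chat_completion_request(
    "http://localhost:11434/v1", "llama3.2",
    [{"role": "user", "content": "Hello!"}],
)
# urllib.request.urlopen(req) would actually send it.
```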
Clone the repo:
```
git clone https://github.com/jakobdylanc/llmcord
```
Create a copy of "config-example.yaml" named "config.yaml" and set it up:
Setting | Description |
---|---|
bot_token | Create a new Discord bot at discord.com/developers/applications and generate a token under the "Bot" tab. Also enable "MESSAGE CONTENT INTENT". |
client_id | Found under the "OAuth2" tab of the Discord bot you just made. |
status_message | Set a custom message that displays on the bot's Discord profile. Max 128 characters. |
allowed_channel_ids | A list of Discord channel IDs where the bot can be used. Leave empty to allow all channels. |
allowed_role_ids | A list of Discord role IDs that can use the bot. Leave empty to allow everyone. Specifying at least one role also disables DMs. |
max_text | The maximum amount of text allowed in a single message, including text from file attachments. (Default: `100,000`) |
max_images | The maximum number of image attachments allowed in a single message. Only applicable when using a vision model. (Default: `5`) |
max_messages | The maximum number of messages allowed in a reply chain. (Default: `25`) |
use_plain_responses | When set to `true`, the bot uses plaintext responses instead of embeds. This also disables streamed responses and warning messages. (Default: `false`) |
Setting | Description |
---|---|
providers | Add the LLM providers you want to use, each with a `base_url` and optional `api_key` entry. Common providers (`openai`, `ollama`, etc.) are already included. Only OpenAI compatible APIs are supported. |
model | Set to `<provider name>/<model name>`, e.g. `openai/gpt-4o`, `ollama/llama3.2`, `openrouter/anthropic/claude-3.5-sonnet` |
extra_api_parameters | Extra API parameters for your LLM. Add more entries as needed. (Default: `max_tokens=4096, temperature=1.0`) |
system_prompt | Write anything you want to customize the bot's behavior! Leave blank for no system prompt. |
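Putting the settings above together, a config.yaml might look something like this. The token, ID, key, and prompt values are placeholders; check the shipped config-example.yaml for the authoritative layout.

```yaml
bot_token: YOUR_DISCORD_BOT_TOKEN
client_id: "1234567890"
status_message: Chatting with an LLM

allowed_channel_ids: []   # empty = all channels allowed
allowed_role_ids: []      # empty = everyone allowed, DMs enabled

max_text: 100000
max_images: 5
max_messages: 25
use_plain_responses: false

providers:
  openai:
    base_url: https://api.openai.com/v1
    api_key: YOUR_OPENAI_API_KEY
  ollama:
    base_url: http://localhost:11434/v1

model: openai/gpt-4o

extra_api_parameters:
  max_tokens: 4096
  temperature: 1.0

system_prompt: You are a helpful Discord bot.
```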
Run the bot:
No Docker:
```
python -m pip install -U discord.py httpx openai pyyaml
python llmcord.py
```
With Docker:
```
docker compose up
```
If you're having issues, try my suggestions here.
Only models from the OpenAI API and xAI API are "user identity aware", because only they support the "name" parameter in the message object. Hopefully more providers will support this in the future.
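The "name" parameter tags each user message with who sent it, so the model can tell speakers apart in a group chat. A sketch of what such a message list looks like (the usernames are made up):

```python
# For providers that honor the "name" field (per the note above, OpenAI and xAI),
# each user message can carry the sender's identity:
messages = [
    {"role": "user", "name": "alice", "content": "What's the capital of France?"},
    {"role": "user", "name": "bob", "content": "And of Germany?"},
    {"role": "assistant", "content": "Paris and Berlin, respectively."},
]

# Providers without "name" support ignore or reject the field, so with them
# the bot can't tell the model which user wrote which message.
for msg in messages:
    print(msg.get("name", "(bot)"), "->", msg["content"])
```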
PRs are welcome :)