
#+title: gptel: A simple LLM client for Emacs

[[https://elpa.nongnu.org/nongnu/gptel.html][file:https://elpa.nongnu.org/nongnu/gptel.svg]] [[https://stable.melpa.org/#/gptel][file:https://stable.melpa.org/packages/gptel-badge.svg]] [[https://melpa.org/#/gptel][file:https://melpa.org/packages/gptel-badge.svg]]

gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and uniformly in any buffer.


| LLM Backend        | Supports | Requires                                                                                                     |
|--------------------+----------+--------------------------------------------------------------------------------------------------------------|
| ChatGPT            | ✓        | [[https://platform.openai.com/account/api-keys][API key]]                                                     |
| Azure              | ✓        | Deployment and API key                                                                                        |
| Ollama             | ✓        | [[https://ollama.ai/][Ollama running locally]]                                                                |
| GPT4All            | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]]                                                    |
| Gemini             | ✓        | [[https://makersuite.google.com/app/apikey][API key]]                                                         |
| Llama.cpp          | ✓        | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
| Llamafile          | ✓        | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]]                              |
| Kagi FastGPT       | ✓        | [[https://kagi.com/settings?p=api][API key]]                                                                  |
| Kagi Summarizer    | ✓        | [[https://kagi.com/settings?p=api][API key]]                                                                  |
| together.ai        | ✓        | [[https://api.together.xyz/settings/api-keys][API key]]                                                       |
| Anyscale           | ✓        | [[https://docs.endpoints.anyscale.com/][API key]]                                                             |
| Perplexity         | ✓        | [[https://docs.perplexity.ai/docs/getting-started][API key]]                                                  |
| Anthropic (Claude) | ✓        | [[https://www.anthropic.com/api][API key]]                                                                    |
| Groq               | ✓        | [[https://console.groq.com/keys][API key]]                                                                    |
| OpenRouter         | ✓        | [[https://openrouter.ai/keys][API key]]                                                                       |
| PrivateGPT         | ✓        | [[https://github.com/zylon-ai/private-gpt#-documentation][PrivateGPT running locally]]                        |
| DeepSeek           | ✓        | [[https://platform.deepseek.com/api_keys][API key]]                                                           |


General usage: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])

https://user-images.githubusercontent.com/8607532/230516812-86510a09-a2fb-4cbd-b53f-cc2522d05a13.mp4

https://user-images.githubusercontent.com/8607532/230516816-ae4a613a-4d01-4073-ad3f-b66fa73c6e45.mp4

Multi-LLM support demo:

https://github-production-user-asset-6210df.s3.amazonaws.com/8607532/278854024-ae1336c4-5b87-41f2-83e9-e415349d6a43.mp4

gptel uses Curl if available, but falls back to the built-in =url-retrieve= to work without external dependencies.
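If you'd rather not use Curl at all, you can also disable it explicitly; a minimal sketch using the =gptel-use-curl= option (documented under Additional Configuration below):

#+begin_src emacs-lisp
;; Force the built-in url-retrieve even when Curl is available
(setq gptel-use-curl nil)
#+end_src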

** Contents :toc:

** Installation

gptel can be installed in Emacs out of the box with =M-x package-install⏎= =gptel=. This installs the latest release.

If you want the development version instead, add MELPA (or NonGNU-devel ELPA) to your list of sources, then install it with =M-x package-install⏎= =gptel=.

(Optional: Install =markdown-mode=.)


**** Straight


#+begin_src emacs-lisp
(straight-use-package 'gptel)
#+end_src

Installing the =markdown-mode= package is optional.


**** Manual


Clone or download this repository and run =M-x package-install-file⏎= on the repository directory.

Installing the =markdown-mode= package is optional.


**** Doom Emacs


In =packages.el=

#+begin_src emacs-lisp
(package! gptel)
#+end_src

In =config.el=

#+begin_src emacs-lisp
(use-package! gptel
 :config
 (setq! gptel-api-key "your key"))
#+end_src

"your key" can be the API key itself, or (safer) a function that returns the key. Setting =gptel-api-key= is optional, you will be asked for a key if it's not found.


**** Spacemacs


After installation with =M-x package-install⏎= =gptel=:

- Add =gptel= to =dotspacemacs-additional-packages=
- Add =(require 'gptel)= to =dotspacemacs/user-config=

*** ChatGPT

Optional: Set =gptel-api-key= to your [[https://platform.openai.com/account/api-keys][OpenAI API key]]. Alternatively, you may choose a more secure method such as:

- Storing the key in =~/.authinfo=. By default, "api.openai.com" is used as HOST and "apikey" as USER.
- Setting it to a function that returns the key.
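A minimal sketch of the auth-source approach, using the =gptel-api-key-from-auth-source= helper bundled with gptel (the =~/.authinfo= line shown is an example entry):

#+begin_src emacs-lisp
;; Example ~/.authinfo line (keeps the key out of your Emacs config):
;;   machine api.openai.com login apikey password YOUR-OPENAI-KEY
(setq gptel-api-key #'gptel-api-key-from-auth-source)
#+end_src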

*** Other LLM backends


**** Azure


Register a backend with

#+begin_src emacs-lisp
(gptel-make-azure "Azure-1"     ;Name, whatever you'd like
  :protocol "https"             ;Optional -- https is the default
  :host "YOUR_RESOURCE_NAME.openai.azure.com"
  :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15" ;or equivalent
  :stream t                     ;Enable streaming responses
  :key #'gptel-api-key
  :models '("gpt-3.5-turbo" "gpt-4"))
#+end_src

Refer to the documentation of =gptel-make-azure= to set more parameters.

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "gpt-3.5-turbo"
      gptel-backend (gptel-make-azure "Azure-1"
                      :protocol "https"
                      :host "YOUR_RESOURCE_NAME.openai.azure.com"
                      :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
                      :stream t
                      :key #'gptel-api-key
                      :models '("gpt-3.5-turbo" "gpt-4")))
#+end_src


**** GPT4All


Register a backend with

#+begin_src emacs-lisp
(gptel-make-gpt4all "GPT4All"   ;Name of your choosing
  :protocol "http"
  :host "localhost:4891"        ;Where it's running
  :models '("mistral-7b-openorca.Q4_0.gguf")) ;Available models
#+end_src

These are the required parameters, refer to the documentation of =gptel-make-gpt4all= for more.

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above. Additionally, you may want to increase the response token size, since GPT4All uses very short (often truncated) responses by default.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-max-tokens 500
      gptel-model "mistral-7b-openorca.Q4_0.gguf"
      gptel-backend (gptel-make-gpt4all "GPT4All"
                      :protocol "http"
                      :host "localhost:4891"
                      :models '("mistral-7b-openorca.Q4_0.gguf")))
#+end_src


**** Ollama


Register a backend with

#+begin_src emacs-lisp
(gptel-make-ollama "Ollama"     ;Any name of your choosing
  :host "localhost:11434"       ;Where it's running
  :stream t                     ;Stream responses
  :models '("mistral:latest"))  ;List of models
#+end_src

These are the required parameters, refer to the documentation of =gptel-make-ollama= for more.

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "mistral:latest"
      gptel-backend (gptel-make-ollama "Ollama"
                      :host "localhost:11434"
                      :stream t
                      :models '("mistral:latest")))
#+end_src


**** Gemini


Register a backend with

#+begin_src emacs-lisp
;; :key can be a function that returns the API key.
(gptel-make-gemini "Gemini"
  :key "YOUR_GEMINI_API_KEY"
  :stream t)
#+end_src

These are the required parameters, refer to the documentation of =gptel-make-gemini= for more.

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "gemini-pro"
      gptel-backend (gptel-make-gemini "Gemini"
                      :key "YOUR_GEMINI_API_KEY"
                      :stream t))
#+end_src


**** Llama.cpp or Llamafile


(If using a llamafile, run a [[https://github.com/Mozilla-Ocho/llamafile#other-example-llamafiles][server llamafile]] instead of a "command-line llamafile", and a model that supports text generation.)

Register a backend with

#+begin_src emacs-lisp
;; Llama.cpp offers an OpenAI compatible API
(gptel-make-openai "llama-cpp"  ;Any name
  :stream t                     ;Stream responses
  :protocol "http"
  :host "localhost:8000"        ;Llama.cpp server location
  :models '("test"))            ;Any names, doesn't matter for Llama
#+end_src

These are the required parameters, refer to the documentation of =gptel-make-openai= for more.

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "test"
      gptel-backend (gptel-make-openai "llama-cpp"
                      :stream t
                      :protocol "http"
                      :host "localhost:8000"
                      :models '("test")))
#+end_src


**** Kagi (FastGPT & Summarizer)


Kagi's FastGPT model and the Universal Summarizer are both supported. A couple of notes:

  1. Universal Summarizer: If there is a URL at point, the summarizer will summarize the contents of the URL. Otherwise the context sent to the model is the same as always: the buffer text up to point, or the contents of the region if the region is active.

  2. Kagi models do not support multi-turn conversations; interactions are "one-shot". They also do not support streaming responses.

Register a backend with

#+begin_src emacs-lisp
(gptel-make-kagi "Kagi"                 ;any name
  :key "YOUR_KAGI_API_KEY")             ;can be a function that returns the key
#+end_src

These are the required parameters, refer to the documentation of =gptel-make-kagi= for more.

You can pick this backend and the model (fastgpt/summarizer) from the transient menu when using gptel.

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "fastgpt"
      gptel-backend (gptel-make-kagi "Kagi"
                      :key "YOUR_KAGI_API_KEY"))
#+end_src

The alternatives to =fastgpt= include =summarize:cecil=, =summarize:agnes=, =summarize:daphne= and =summarize:muriel=. The differences between the summarizer engines are [[https://help.kagi.com/kagi/api/summarizer.html#summarization-engines][documented here]].
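For example, to default to the Summarizer, set one of these as the model (the engine choice here is illustrative):

#+begin_src emacs-lisp
;; Illustrative: use the Kagi Summarizer with the Agnes engine by default
(setq gptel-model "summarize:agnes")
#+end_src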


**** together.ai


Register a backend with

#+begin_src emacs-lisp
;; Together.ai offers an OpenAI compatible API
(gptel-make-openai "TogetherAI"         ;Any name you want
  :host "api.together.xyz"
  :key "your-api-key"                   ;can be a function that returns the key
  :stream t
  :models '(;; has many more, check together.ai
            "mistralai/Mixtral-8x7B-Instruct-v0.1"
            "codellama/CodeLlama-13b-Instruct-hf"
            "codellama/CodeLlama-34b-Instruct-hf"))
#+end_src

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "mistralai/Mixtral-8x7B-Instruct-v0.1"
      gptel-backend (gptel-make-openai "TogetherAI"
                      :host "api.together.xyz"
                      :key "your-api-key"
                      :stream t
                      :models '(;; has many more, check together.ai
                                "mistralai/Mixtral-8x7B-Instruct-v0.1"
                                "codellama/CodeLlama-13b-Instruct-hf"
                                "codellama/CodeLlama-34b-Instruct-hf")))
#+end_src


**** Anyscale


Register a backend with

#+begin_src emacs-lisp
;; Anyscale offers an OpenAI compatible API
(gptel-make-openai "Anyscale"           ;Any name you want
  :host "api.endpoints.anyscale.com"
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(;; has many more, check anyscale
            "mistralai/Mixtral-8x7B-Instruct-v0.1"))
#+end_src

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "mistralai/Mixtral-8x7B-Instruct-v0.1"
      gptel-backend (gptel-make-openai "Anyscale"
                      :host "api.endpoints.anyscale.com"
                      :key "your-api-key"
                      :models '(;; has many more, check anyscale
                                "mistralai/Mixtral-8x7B-Instruct-v0.1")))
#+end_src


**** Perplexity


Register a backend with

#+begin_src emacs-lisp
;; Perplexity offers an OpenAI compatible API
(gptel-make-openai "Perplexity"         ;Any name you want
  :host "api.perplexity.ai"
  :key "your-api-key"                   ;can be a function that returns the key
  :endpoint "/chat/completions"
  :stream t
  :models '(;; has many more, check perplexity.ai
            "pplx-7b-chat" "pplx-70b-chat"
            "pplx-7b-online" "pplx-70b-online"))
#+end_src

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "pplx-7b-chat"
      gptel-backend (gptel-make-openai "Perplexity"
                      :host "api.perplexity.ai"
                      :key "your-api-key"
                      :endpoint "/chat/completions"
                      :stream t
                      :models '(;; has many more, check perplexity.ai
                                "pplx-7b-chat" "pplx-70b-chat"
                                "pplx-7b-online" "pplx-70b-online")))
#+end_src


**** Anthropic (Claude)


Register a backend with

#+begin_src emacs-lisp
(gptel-make-anthropic "Claude"          ;Any name you want
  :stream t                             ;Streaming responses
  :key "your-api-key")
#+end_src

The =:key= can be a function that returns the key (more secure).

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "claude-3-sonnet-20240229" ; "claude-3-opus-20240229" also available
      gptel-backend (gptel-make-anthropic "Claude"
                      :stream t
                      :key "your-api-key"))
#+end_src


**** Groq


Register a backend with

#+begin_src emacs-lisp
;; Groq offers an OpenAI compatible API
(gptel-make-openai "Groq"               ;Any name you want
  :host "api.groq.com"
  :endpoint "/openai/v1/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '("llama3-70b-8192"
            "llama3-8b-8192"
            "mixtral-8x7b-32768"
            "gemma-7b-it"))
#+end_src

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]). Note that Groq is fast enough that you could easily set =:stream nil= and still get near-instant responses.

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "mixtral-8x7b-32768"
      gptel-backend (gptel-make-openai "Groq"
                      :host "api.groq.com"
                      :endpoint "/openai/v1/chat/completions"
                      :stream t
                      :key "your-api-key"
                      :models '("llama3-70b-8192"
                                "llama3-8b-8192"
                                "mixtral-8x7b-32768"
                                "gemma-7b-it")))
#+end_src


**** OpenRouter


Register a backend with

#+begin_src emacs-lisp
;; OpenRouter offers an OpenAI compatible API
(gptel-make-openai "OpenRouter"         ;Any name you want
  :host "openrouter.ai"
  :endpoint "/api/v1/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '("openai/gpt-3.5-turbo"
            "mistralai/mixtral-8x7b-instruct"
            "meta-llama/codellama-34b-instruct"
            "codellama/codellama-70b-instruct"
            "google/palm-2-codechat-bison-32k"
            "google/gemini-pro"))
#+end_src

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "mistralai/mixtral-8x7b-instruct" ;Pick any model from the list below
      gptel-backend (gptel-make-openai "OpenRouter" ;Any name you want
                      :host "openrouter.ai"
                      :endpoint "/api/v1/chat/completions"
                      :stream t
                      :key "your-api-key"           ;can be a function that returns the key
                      :models '("openai/gpt-3.5-turbo"
                                "mistralai/mixtral-8x7b-instruct"
                                "meta-llama/codellama-34b-instruct"
                                "codellama/codellama-70b-instruct"
                                "google/palm-2-codechat-bison-32k"
                                "google/gemini-pro")))
#+end_src


**** PrivateGPT


Register a backend with

#+begin_src emacs-lisp
(gptel-make-privategpt "privateGPT"     ;Any name you want
  :protocol "http"
  :host "localhost:8001"
  :stream t
  :context t                            ;Use context provided by embeddings
  :sources t                            ;Return information about source documents
  :models '("private-gpt"))
#+end_src

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "private-gpt"
      gptel-backend (gptel-make-privategpt "privateGPT" ;Any name you want
                      :protocol "http"
                      :host "localhost:8001"
                      :stream t
                      :context t        ;Use context provided by embeddings
                      :sources t        ;Return information about source documents
                      :models '("private-gpt")))
#+end_src


**** DeepSeek


Register a backend with

#+begin_src emacs-lisp
;; DeepSeek offers an OpenAI compatible API
(gptel-make-openai "DeepSeek"           ;Any name you want
  :host "api.deepseek.com"
  :endpoint "/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '("deepseek-chat" "deepseek-coder"))
#+end_src

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "deepseek-chat"
      gptel-backend (gptel-make-openai "DeepSeek"   ;Any name you want
                      :host "api.deepseek.com"
                      :endpoint "/chat/completions"
                      :stream t
                      :key "your-api-key"           ;can be a function that returns the key
                      :models '("deepseek-chat" "deepseek-coder")))
#+end_src


** Usage

(This is also a [[https://www.youtube.com/watch?v=bsRnh_brggM][video demo]] showing various uses of gptel.)

|-----------------+------------------------------------------------------------------------------------------------|
| To send queries | Description                                                                                      |
|-----------------+------------------------------------------------------------------------------------------------|
| =gptel-send=    | Send conversation up to =(point)=, or selection if region is active. Works anywhere in Emacs.   |
| =gptel=         | Create a new dedicated chat buffer. Not required to use gptel.                                   |
|-----------------+------------------------------------------------------------------------------------------------|

|--------------------+---------------------------------------------------------------|
| To set options     |                                                               |
|--------------------+---------------------------------------------------------------|
| =C-u= =gptel-send= | Transient menu for preferences, input/output redirection etc. |
| =gptel-menu=       | /(Same)/                                                      |
|--------------------+---------------------------------------------------------------|

|------------------+-----------------------------------------------------------------------------------------|
| To add context   |                                                                                         |
|------------------+-----------------------------------------------------------------------------------------|
| =gptel-add=      | Add/remove a region or buffer to gptel's context. Add/remove marked files in Dired.    |
| =gptel-add-file= | Add a (text-readable) file to gptel's context. Also available from the transient menu. |
|------------------+-----------------------------------------------------------------------------------------|

|----------------------------+----------------------------------------------------------------------------|
| In Org mode only           |                                                                            |
|----------------------------+----------------------------------------------------------------------------|
| =gptel-org-set-topic=      | Limit conversation context to an Org heading                               |
| =gptel-org-set-properties= | Write gptel configuration as Org properties (for self-contained chat logs) |
|----------------------------+----------------------------------------------------------------------------|

*** In any buffer:

  1. Call =M-x gptel-send= to send the text up to the cursor. The response will be inserted below. Continue the conversation by typing below the response.

  2. If a region is selected, the conversation will be limited to its contents.

  3. Call =M-x gptel-send= with a prefix argument (~C-u~):

    • to set chat parameters (GPT model, system message etc) for this buffer,
    • to include quick instructions for the next request only,
    • to add additional context -- regions, buffers or files -- to gptel,
    • to read the prompt from or redirect the response elsewhere,
    • or to replace the prompt with the response.

(Image: gptel's menu with some of the available query options.)

With a region selected, you can also rewrite prose or refactor code from here:

Code:

[[https://user-images.githubusercontent.com/8607532/230770162-1a5a496c-ee57-4a67-9c95-d45f238544ae.png]]

Prose:

[[https://user-images.githubusercontent.com/8607532/230770352-ee6f45a3-a083-4cf0-b13c-619f7710e9ba.png]]

*** In a dedicated chat buffer:

  1. Run =M-x gptel= to start or switch to the chat buffer. It will ask you for the key if you skipped the previous step. Run it with a prefix-arg (=C-u M-x gptel=) to start a new session.

  2. In the gptel buffer, send your prompt with =M-x gptel-send=, bound to =C-c RET=.

  3. Set chat parameters (LLM provider, model, directives etc) for the session by calling =gptel-send= with a prefix argument (=C-u C-c RET=):

(Image: gptel's menu with some of the available query options.)

That's it. You can go back and edit previous prompts and responses if you want.

The default mode is =markdown-mode= if available, else =text-mode=. You can set =gptel-default-mode= to =org-mode= if desired.
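For example:

#+begin_src emacs-lisp
(setq gptel-default-mode 'org-mode)
#+end_src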

**** Save and restore your chat sessions

Saving the file will save the state of the conversation as well. To resume the chat, open the file and turn on =gptel-mode= before editing the buffer.

*** Include more context with requests

By default, gptel will query the LLM with the active region or the buffer contents up to the cursor. Often it can be helpful to provide the LLM with additional context from outside the current buffer. For example, when you're in a chat buffer but want to ask questions about a (possibly changing) code buffer and auxiliary project files.

You can include additional text regions, buffers or files with gptel's queries. This additional context is "live" and not a snapshot. Once added, the regions, buffers or files are scanned and included at the time of each query.

You can add a selected region, buffer or file to gptel's context from the menu, or call =gptel-add=. (To add a file use =gptel-add= in Dired or use the dedicated =gptel-add-file= command.)

You can examine the active context from the menu:

#+html: <img src="https://github.com/karthink/gptel/assets/8607532/63cd7fc8-6b3e-42ae-b6ca-06ff935bae9c" align="center" alt="Image showing gptel's menu with the 'inspect context' command.">

And then browse through or remove context from the context buffer:

(Image: gptel's context buffer.)

*** Extra Org mode conveniences

gptel offers a few extra conveniences in Org mode.

** FAQ


**** I want the window to scroll automatically as the response is inserted


To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to enable auto-scrolling.

#+begin_src emacs-lisp
(add-hook 'gptel-post-stream-hook 'gptel-auto-scroll)
#+end_src


**** I want the cursor to move to the next prompt after the response is inserted


To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to move the cursor:

#+begin_src emacs-lisp
(add-hook 'gptel-post-response-functions 'gptel-end-of-response)
#+end_src

You can also call =gptel-end-of-response= as a command at any time.


**** I want to change the formatting of the prompt and LLM response


For dedicated chat buffers: customize =gptel-prompt-prefix-alist= and =gptel-response-prefix-alist=. You can set a different pair for each major-mode.
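For example, a minimal sketch for Org chat buffers (the prefix strings here are illustrative, not the defaults):

#+begin_src emacs-lisp
;; Illustrative prefixes for Org chat buffers -- pick any strings you like
(setf (alist-get 'org-mode gptel-prompt-prefix-alist) "@user\n")
(setf (alist-get 'org-mode gptel-response-prefix-alist) "@assistant\n")
#+end_src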

Anywhere in Emacs: Use =gptel-pre-response-hook= and =gptel-post-response-functions=, which see.


**** I want the transient menu options to be saved so I only need to set them once


Any model options you set are saved for the current buffer. But the redirection options in the menu are set for the next query only:

https://github.com/karthink/gptel/assets/8607532/2ecc6be9-aa52-4287-a739-ba06e1369ec2

You can make them persistent across this Emacs session by pressing ~C-x C-s~:

https://github.com/karthink/gptel/assets/8607532/b8bcb6ad-c974-41e1-9336-fdba0098a2fe

(You can also cycle through presets you've saved with ~C-x p~ and ~C-x n~.)

Now these will be enabled whenever you send a query from the transient menu. If you want to use these saved options without invoking the transient menu, you can use a keyboard macro:

#+begin_src emacs-lisp
;; Replace "<f6>" with your key to invoke the transient menu:
(keymap-global-set "<f6>" "C-u C-c <return>")
#+end_src

Or see this [[https://github.com/karthink/gptel/wiki/Commonly-requested-features#save-transient-flags][wiki entry]].


**** I want to use gptel in a way that's not supported by =gptel-send= or the options menu


gptel's default usage pattern is simple, and will stay this way: Read input in any buffer and insert the response below it. Some custom behavior is possible with the transient menu (=C-u M-x gptel-send=).

For more programmable usage, gptel provides a general =gptel-request= function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by =gptel-send=. See the documentation of =gptel-request=, and the [[https://github.com/karthink/gptel/wiki/Defining-custom-gptel-commands][wiki]] for examples.
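For example, a minimal sketch (the prompt and callback behavior here are illustrative):

#+begin_src emacs-lisp
;; Send a one-off query and show the response in the echo area
(gptel-request
 "Summarize the Unix philosophy in one sentence."
 :callback
 (lambda (response info)
   (if response
       (message "gptel: %s" response)
     (message "gptel request failed: %s" (plist-get info :status)))))
#+end_src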


**** (Doom Emacs) Sending a query from the gptel menu fails because of a key conflict with Org mode


Doom binds ~RET~ in Org mode to =+org/dwim-at-point=, which appears to conflict with gptel's transient menu bindings for some reason.

Two solutions:


**** (ChatGPT) I get the error "(HTTP/2 429) You exceeded your current quota"


#+begin_quote
(HTTP/2 429) You exceeded your current quota, please check your plan and billing details.
#+end_quote

Using the ChatGPT (or any OpenAI) API requires [[https://platform.openai.com/account/billing/overview][adding credit to your account]].


**** Why another LLM client?


Other Emacs clients for LLMs prescribe the format of the interaction (a comint shell, org-babel blocks, etc). I wanted:

  1. Something that is as free-form as possible: query the model using any text in any buffer, and redirect the response as required. Using a dedicated =gptel= buffer just adds some visual flair to the interaction.
  2. Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way the model can generate code blocks that I can run.


** Additional Configuration
:PROPERTIES:
:ID:       f885adac-58a3-4eba-a6b7-91e9e7a17829
:END:

#+begin_src emacs-lisp :exports none :results list
(let ((all))
  (mapatoms (lambda (sym)
              (when (and (string-match-p "^gptel-[^-]" (symbol-name sym))
                         (get sym 'variable-documentation))
                (push sym all))))
  all)
#+end_src

|--------------------+---------------------------------------------------------------------|
| Connection options |                                                                     |
|--------------------+---------------------------------------------------------------------|
| =gptel-use-curl=   | Use Curl (default), fallback to Emacs' built-in =url=.              |
| =gptel-proxy=      | Proxy server for requests, passed to curl via =--proxy=.            |
| =gptel-api-key=    | Variable/function that returns the API key for the active backend.  |
|--------------------+---------------------------------------------------------------------|

|---------------------+----------------------------------------------------------|
| LLM request options | /(Note: not supported uniformly across LLMs)/            |
|---------------------+----------------------------------------------------------|
| =gptel-backend=     | Default LLM Backend.                                     |
| =gptel-model=       | Default model to use, depends on the backend.            |
| =gptel-stream=      | Enable streaming responses, if the backend supports it.  |
| =gptel-directives=  | Alist of system directives, can switch on the fly.       |
| =gptel-max-tokens=  | Maximum token count (in query + response).               |
| =gptel-temperature= | Randomness in response text, 0 to 2.                     |
| =gptel-use-context= | How/whether to include additional context                |
|---------------------+----------------------------------------------------------|

|-------------------------------+------------------------------------------------------------------|
| Chat UI options               |                                                                  |
|-------------------------------+------------------------------------------------------------------|
| =gptel-default-mode=          | Major mode for dedicated chat buffers.                           |
| =gptel-track-response=        | Distinguish between user messages and LLM responses?             |
| =gptel-prompt-prefix-alist=   | Text inserted before queries.                                    |
| =gptel-response-prefix-alist= | Text inserted before responses.                                  |
| =gptel-use-header-line=       | Display status messages in header-line (default) or minibuffer.  |
| =gptel-display-buffer-action= | Placement of the gptel chat buffer.                              |
|-------------------------------+------------------------------------------------------------------|

|-------------------------------+--------------------------------------------------------|
| Org mode UI options           |                                                        |
|-------------------------------+--------------------------------------------------------|
| =gptel-org-branching-context= | Make each outline path a separate conversation branch  |
|-------------------------------+--------------------------------------------------------|

|---------------------------------+--------------------------------------------------------------|
| Hooks for customization         |                                                              |
|---------------------------------+--------------------------------------------------------------|
| =gptel-pre-response-hook=       | Runs before inserting the LLM response into the buffer       |
| =gptel-post-response-functions= | Runs after inserting the full LLM response into the buffer   |
| =gptel-post-stream-hook=        | Runs after each streaming insertion                          |
| =gptel-context-wrap-function=   | To include additional context formatted your way             |
|---------------------------------+--------------------------------------------------------------|
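As an example, a minimal sketch combining a few of the options above (the values are illustrative, not recommendations):

#+begin_src emacs-lisp
;; Illustrative values; all variables are documented in the tables above
(setq gptel-max-tokens 1000        ;Cap the response length
      gptel-temperature 0.7        ;Less random responses
      gptel-use-header-line nil)   ;Show status messages in the minibuffer instead
#+end_src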

** COMMENT Will you add feature X?

Maybe; I'd like to experiment a bit more first. Features added since the inception of this package include:

Features being considered or in the pipeline:

** Alternatives

There are several other Emacs clients for LLMs, including [[https://github.com/CarlQLange/chatgpt-arcana.el][chatgpt-arcana]], [[https://github.com/MichaelBurge/leafy-mode][leafy-mode]] and [[https://github.com/iwahbe/chat.el][chat.el]].

*** Extensions using gptel

A number of packages use gptel to provide additional functionality.

** COMMENT Breaking Changes

** Acknowledgments

# Local Variables:
# toc-org-max-depth: 4
# eval: (and (fboundp 'toc-org-mode) (toc-org-mode 1))
# End: