EDIT: These instructions are obsolete as of v0.5.0, which adds support for GPT4All, Ollama and Azure. Please refer to the README instead.
I've added support for `gptel-backend`s, which should make gptel work with most LLMs that offer REST APIs, including Azure and local LLMs like PrivateGPT and GPT4All. The design is still fluid and the integration is incomplete (no transient menu support yet), but I'd appreciate it if someone with access to other LLMs could test it.
How to use it:
1. Clone the `multi-llm` branch of gptel.
2. Define a `gptel-backend` like this. For GPT4All:
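The code block for this step did not survive here; a minimal sketch, assuming the `gptel-make-gpt4all` constructor described in the README (the host, port, and model name are placeholders for your local setup):

```elisp
;; Sketch only: assumes the `gptel-make-gpt4all' constructor from the
;; README. Host, port, and model name are placeholders -- substitute
;; the values for your own GPT4All server and installed models.
(gptel-make-gpt4all
 "GPT4All"                                   ;Any name of your choosing
 :protocol "http"
 :host "localhost:4891"                      ;Where the GPT4All server runs
 :models '("mistral-7b-openorca.Q4_0.gguf")) ;Models you have installed
```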
   There are more examples in `gptel-backends.el`.
3. Set this as the default backend, or set it locally in a gptel buffer:
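The snippet for setting the default is missing above; one way to do it, assuming the variable names used in the README (`gptel-backend` and `gptel-model`) and the placeholder GPT4All setup from the previous step:

```elisp
;; Sketch only: sets the backend (and a matching model) globally.
;; The backend definition and model name are placeholders.
(setq gptel-model "mistral-7b-openorca.Q4_0.gguf"
      gptel-backend (gptel-make-gpt4all
                     "GPT4All"
                     :protocol "http"
                     :host "localhost:4891"
                     :models '("mistral-7b-openorca.Q4_0.gguf")))
```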
   Or, in any chat buffer:
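The buffer-local snippet is also missing; since `gptel-backend` is an ordinary variable, a sketch using `setq-local` (again with a placeholder backend definition, not the exact original code):

```elisp
;; Sketch only: overrides the backend for the current chat buffer,
;; leaving the global default untouched. Placeholder values as above.
(setq-local gptel-backend (gptel-make-gpt4all
                           "GPT4All"
                           :protocol "http"
                           :host "localhost:4891"
                           :models '("mistral-7b-openorca.Q4_0.gguf")))
```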
4. Use gptel as usual.