karthink / gptel

A simple LLM client for Emacs

Add support for Azure-OpenAI API #104

Closed · doctorguile closed this 11 months ago

doctorguile commented 1 year ago

Summary of changes in gptel-curl.el:

Summary of changes in gptel.el:

nhoffman commented 1 year ago

I'm really glad to see this PR! For what it's worth, I can confirm that it works as advertised using our Azure-OpenAI account. Happy to help test further if that would be of use.

sg-qwt commented 1 year ago

I can also attest that this PR works great for Azure-OpenAI's use case.

karthink commented 1 year ago

Hi, thanks for the PR!

I haven't approved it yet since I'm working on making gptel a little more modular. The idea is that support for Azure, LLaMA, etc. can then be added by specifying a "gptel backend" to dispatch on. Patching the existing curl interface, as this PR does, will get very messy as I add support for more services or models.

It doesn't exist yet, but this is what I have in mind:

(gptel-make-backend
 'azure                               ;backend name to dispatch on
 :host "https://..."
 :local nil                           ;not a locally-hosted model
 :header #'gptel--azure-make-header)  ;function that builds the request headers

I'm looking into other models to see what the design of the gptel-backend struct needs to be.

janEbert commented 1 year ago

Hey, I saw this a bit too late, after opening PR #111. Your new backend design seems to solve that issue as well, since the protocol is specified per backend. Feel free to close #111 if it will be superseded by the new backend scheme.

karthink commented 11 months ago

I've added support for gptel-backends, which should make gptel work with most LLMs that offer REST APIs, including Azure and local LLMs like PrivateGPT and GPT4All. The design is still fluid and the integration is incomplete (no transient menu support yet), but I'd appreciate it if someone with Azure access could test it.

How to use it:
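
Roughly, you register an Azure backend pointed at your deployment. A minimal sketch, assuming the gptel-make-azure constructor that shipped with the backend work; the resource name, deployment name, and api-version below are placeholders:

;; Register an Azure-OpenAI backend (returns a gptel backend struct).
;; YOUR_RESOURCE_NAME and YOUR_DEPLOYMENT_NAME are placeholders.
(gptel-make-azure
 "Azure-1"                          ;any name you like for this backend
 :protocol "https"
 :host "YOUR_RESOURCE_NAME.openai.azure.com"
 :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
 :stream t                          ;enable streaming responses
 :key #'gptel-api-key               ;string, or symbol/function resolving to the API key
 :models '("gpt-3.5-turbo"))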

karthink commented 11 months ago

Azure is now supported directly (along with local LLMs) as of v0.5.0; please refer to the README for instructions. There is also a clear path to adding support for any LLM (local or otherwise) that offers a REST API, and work is planned to support direct process interaction for those that don't.
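
To make Azure the default rather than just one selectable backend, the README-style configuration sets two variables; a sketch, assuming the gptel-backend and gptel-model variables and the same placeholder values as the registration sketch above:

;; Optional: make Azure the default backend and model for new sessions.
;; gptel-make-azure returns the backend it registers, so its value can
;; be assigned to gptel-backend directly.
(setq gptel-model "gpt-3.5-turbo"   ;default model for requests
      gptel-backend (gptel-make-azure
                     "Azure-1"
                     :host "YOUR_RESOURCE_NAME.openai.azure.com"
                     :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
                     :stream t
                     :key #'gptel-api-key
                     :models '("gpt-3.5-turbo")))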

@doctorguile thank you for this PR; it got me thinking about designing a flexible framework.