lambdaofgod opened this issue 1 year ago
Yes, fully on board with this. Using other APIs and even locally run models is a goal of this project. Feel free to make changes as you see fit. Together with #27 it might make sense to work against a branch so that multiple changes/PRs can be funneled through it and then eventually merged into master. This should avoid any breaking changes for people using the MELPA package (MELPA picks its builds from the master branch).
I've invited you to this repo so you should have commit access.
Thanks man! One more question: I'm kind of new to writing Emacs packages; are there resources you'd suggest for learning the code style/conventions you prefer for this project? Something like the Google Python style guide, but for Elisp.
Or maybe you're thinking about refactoring this in, say, version 1.0. Anyway, I want to avoid making noob mistakes and stepping on anyone's toes :)
Well, publishing to MELPA means matching their coding style guidelines, which are somewhat strict. There is https://github.com/riscy/melpazoid, which will lint a package; you can run it for org-ai with something like
RECIPE='(org-ai :repo "rksm/org-ai" :fetcher github :files (:defaults "snippets"))' \
LOCAL_REPO='path/to/org-ai' make
Internally it runs https://www.emacswiki.org/emacs/CheckDoc and https://github.com/purcell/package-lint, both of which you can also use directly inside Emacs.
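For reference, here's a minimal sketch of running both linters from inside Emacs (checkdoc ships with Emacs; package-lint is on MELPA):

;; Install package-lint if needed, then lint org-ai.el in place.
(require 'package)
(unless (package-installed-p 'package-lint)
  (package-refresh-contents)
  (package-install 'package-lint))

(with-current-buffer (find-file-noselect "org-ai.el")
  (checkdoc-current-buffer t)      ; t = collect all issues instead of stopping at the first
  (package-lint-current-buffer))   ; report packaging-convention warnings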
I noticed this, and I should point out that I have contributed the llm package to GNU ELPA. You can see the source here: https://github.com/ahyatt/llm. It should do what you would like; you just have to use it. If you see that something is missing, please do file a feature request.
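To make that concrete, here is a rough sketch of calling it with the OpenAI provider. The names (make-llm-openai, llm-chat, llm-make-chat-prompt) follow my reading of the llm README and may differ between versions, so double-check against the current docs:

;; Sketch only: construct a provider object, then send it a chat prompt.
(require 'llm)
(require 'llm-openai)

(defvar my-llm-provider
  (make-llm-openai :key (getenv "OPENAI_API_KEY")))

;; Synchronous call; llm also offers async variants.
(llm-chat my-llm-provider
          (llm-make-chat-prompt "Explain Emacs minor modes in one sentence."))

Swapping in another backend is then just a matter of constructing a different provider object (e.g. from llm-ollama for local models).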
TL;DR
Currently org-ai uses the OpenAI API. In some cases someone might want to use a different API or even a local LLM.
Context
There are many interesting open-source models, such as Alpaca, that can be deployed on consumer GPUs. Given the current pace, it is likely that in a few months people with even the smallest 30xx and 40xx GPUs will be able to run models with the capabilities of today's GPT-3.5 or GPT-4. I've personally been able to run Alpaca and RWKV on my RTX 3090.
What needs to be done
Abstract over the LLM API. I'm pretty confident that just replacing the authentication functions will suffice: we can assume the caller targets an API with exactly the same completion interface as OpenAI's, just with custom authentication (see the sketch below).
I can do this part, but I will need to coordinate with the authors so the changes are backward-compatible. You can assign me to this issue.
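To illustrate the idea (this is not org-ai's current code; names like org-ai-chat-endpoint and org-ai-auth-function are hypothetical), the abstraction could be as small as two user options:

;; Hypothetical sketch: make the endpoint and the auth header pluggable
;; while keeping OpenAI as the default, so existing users see no change.
(defcustom org-ai-chat-endpoint "https://api.openai.com/v1/chat/completions"
  "URL of the chat-completion endpoint.
Point this at any OpenAI-compatible server, e.g. a locally hosted model."
  :type 'string
  :group 'org-ai)

(defcustom org-ai-auth-function #'org-ai--openai-bearer-token
  "Function returning the Authorization header value, or nil for none."
  :type 'function
  :group 'org-ai)

(defun org-ai--openai-bearer-token ()
  "Default OpenAI-style bearer token (assumes `org-ai-openai-api-token')."
  (concat "Bearer " org-ai-openai-api-token))

(defun org-ai--request-headers ()
  "HTTP headers for a completion request, with optional authentication."
  (append '(("Content-Type" . "application/json"))
          (let ((auth (funcall org-ai-auth-function)))
            (when auth `(("Authorization" . ,auth))))))

A local server speaking the OpenAI wire format would then only need (setq org-ai-chat-endpoint "http://localhost:8000/v1/chat/completions" org-ai-auth-function #'ignore).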
Problems