emacs-openai / codegpt

Use GPT-3 inside Emacs
GNU General Public License v3.0
88 stars 12 forks

request to merge packages #8

Open Fuco1 opened 1 year ago

Fuco1 commented 1 year ago

Hey 😁

I started writing my own package following the release of the chat completion API a couple of days ago.

When I tried to create the org name it was already taken, and that's how I got here. We need to market this a lot more; the GPT integration into Emacs is a game changer.

I would like to collaborate on this, soon I will submit PRs with some functions from my own library.

How do you feel about an EIEIO-based low-level library just implementing the API, with pcase/dash macros for expanding the responses? I'm thinking something like the lsp-mode internals.

Cheers!

jcs090218 commented 1 year ago

Hi! 😁

> When I tried to create the org name it was already taken, and that's how I got here. We need to market this a lot more; the GPT integration into Emacs is a game changer.

Oops, sorry. I can add you to the org if you want?

> How do you feel about an EIEIO-based low-level library just implementing the API, with pcase/dash macros for expanding the responses? I'm thinking something like the lsp-mode internals.

I haven't tried EIEIO in the Emacs ecosystem yet, but I would like to see how it works!

Fuco1 commented 1 year ago

I find EIEIO nice for APIs because you can define the interfaces as "classes" which makes it a bit more discoverable.

For example, this is what I cooked up for the new chat/completion endpoint:

You have a base class with common properties, then various request classes, and a method that converts the objects into JSON (`openapi-request-serialize`). It's very easy to add additional messages: you just add the objects and their serialization methods. In lsp-mode they also autogenerate pcase (and dash) destructuring macros for the responses, so you can "drill down" into them.

(defclass openapi-request ()
  ((content-type :initform "application/json")
   (endpoint :type string))
  :abstract t)

(defclass openapi-request-chat-completions-message ()
  ((role :initarg :role :type string)
   (content :initarg :content :type string)))

(cl-defmethod openapi-request-serialize ((this openapi-request-chat-completions-message))
  `((role . ,(oref this role)) (content . ,(oref this content))))

(defclass openapi-request-chat-completions (openapi-request)
  ((endpoint :initform "https://api.openai.com/v1/chat/completions")
   (model :initarg :model :initform "gpt-3.5-turbo")
   (messages :initarg :messages :initform nil)))

(cl-defmethod openapi-request-serialize ((this openapi-request-chat-completions))
  `((model . ,(oref this model))
    (messages . ,(apply #'vector (mapcar #'openapi-request-serialize (oref this messages))))))
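To illustrate, here is a minimal usage sketch of the classes above: construct a request, then serialize it to the JSON the endpoint expects (assumes Emacs 27+ for `json-serialize`; the `"Hello!"` message is just sample data):

```elisp
;; Build a chat completion request with one user message and
;; serialize it; `model' falls back to its "gpt-3.5-turbo" initform.
(let ((req (openapi-request-chat-completions
            :messages (list (openapi-request-chat-completions-message
                             :role "user" :content "Hello!")))))
  (json-serialize (openapi-request-serialize req)))
;; ⇒ roughly: {"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"Hello!"}]}
```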

Then there is one universal function to call the API (you pass it the request):

(cl-defmethod openapi-request ((this openapi-client) (req openapi-request))
  (let* ((prog-timer (run-with-timer 0 0.1 #'openai-thinking))
         (process
          (plz 'post (oref req endpoint)
            :headers `(("Authorization" . ,(format "Bearer %s" openapi-token))
                       ("Content-Type" . ,(oref req content-type)))
            :body (json-serialize (openapi-request-serialize req)
                                  :null-object nil
                                  :false-object :json-false)
            :as #'json-read
            ;; for now I'm using dumb alist destructuring; only works for chat completion requests
            :then (-lambda ((&alist 'choices [(&alist 'message (&alist 'content))]))
                    (cancel-timer prog-timer)
                    (with-current-buffer (get-buffer-create "*openapi*")
                      (erase-buffer)
                      (insert content)
                      (markdown-mode)
                      (if (<= (count-lines (point-min) (point-max)) 1)
                          (message "%s" (buffer-string))
                        (pop-to-buffer (current-buffer)))))
            :else (lambda (&rest _) (cancel-timer prog-timer)))))
    process))
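The `-lambda` pattern in the `:then` handler is the "drilling down" mentioned earlier. Here is the same destructuring in isolation with dash's `-let`, run against a hand-written sample of the chat/completions response shape (the response alist here is assumed, not real API output):

```elisp
;; Sketch: pull the first choice's message content out of a parsed
;; response; `(&alist 'content)' implicitly binds the variable `content'.
(require 'dash)
(let ((resp '((choices . [((message . ((role . "assistant")
                                       (content . "Hello!"))))]))))
  (-let (((&alist 'choices [(&alist 'message (&alist 'content))]) resp))
    content))
;; ⇒ "Hello!"
```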

I'm using plz instead of request; I find it a very nice and light library, plus it uses curl instead of the built-in url library, so it's much faster (and it provides really cool concurrency features when you need to make hundreds of requests, like for mass embeddings).
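Since `plz` callbacks run asynchronously, the "hundreds of requests" case can be sketched by just firing them all and counting completions. This is a hypothetical helper, not part of any package; the embeddings endpoint path, payload shape, and `openapi-token` variable are assumptions:

```elisp
;; Sketch: request embeddings for many inputs concurrently, then call
;; CALLBACK once with a hash table mapping input -> parsed response.
(require 'plz)
(require 'cl-lib)

(defun my/embed-batch (inputs callback)
  (let ((results (make-hash-table :test #'equal))
        (pending (length inputs)))
    (dolist (input inputs)
      (plz 'post "https://api.openai.com/v1/embeddings"
        :headers `(("Authorization" . ,(format "Bearer %s" openapi-token))
                   ("Content-Type" . "application/json"))
        :body (json-serialize `((model . "text-embedding-ada-002")
                                (input . ,input)))
        :as #'json-read
        ;; each response lands in its own callback; the last one to
        ;; finish hands the collected results to CALLBACK
        :then (lambda (resp)
                (puthash input resp results)
                (when (zerop (cl-decf pending))
                  (funcall callback results)))))))
```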

I'll go over your code in more detail and see how it's done. I feel like a low-level-ish SDK would be immensely useful for people to quickly start hacking on their own cool plugins. But we can also provide a comprehensive package (like codegpt, i.e. this package) with pre-built cool features.

jcs090218 commented 1 year ago

I've never tried plz or EIEIO, so I don't have many thoughts on them! 😓

The intention of the upstream openai.el is indeed to provide a low-level-ish SDK. Feel free to open issues or PRs; I'm always open to collaborating. :D