karthink / gptel

A simple LLM client for Emacs
GNU General Public License v3.0

`gptel-request` errors with gemini backend #153

Closed. benthamite closed this issue 6 months ago.

benthamite commented 6 months ago

I have defined a custom command with gptel-request that works fine with a GPT backend, but fails when the backend is set to Gemini:

(defun my/summarize-article (string)
  (gptel-request
      (format "Please summarize the following article:\n\n%s\n\n" string)
    :callback
    (lambda (response info)
      (if (not response)
          (message "`gptel' failed with message: %s" (plist-get info :status))
        (kill-new response)
        (message "Copied AI-generated summary to the kill ring:\n\n%s" response)))))
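For context, a command like this is typically driven from the active region. A minimal interactive wrapper could look like the following sketch (the wrapper name is hypothetical, not part of my actual config):

```elisp
;; Hypothetical wrapper: summarize the active region with the command above.
(defun my/summarize-region (beg end)
  "Summarize the text between BEG and END via `my/summarize-article'."
  (interactive "r")
  (my/summarize-article (buffer-substring-no-properties beg end)))
```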

My config:

(setq gptel-api-key "OPENAI-KEY")
(defun gptel-model-config (model)
  "Configure `gptel' for MODEL."
  (interactive (list (completing-read "Model: " '("gpt-4" "gemini-pro") nil t)))
  (pcase model
    ("gpt-4" (setq-default gptel-model "gpt-4"
                           gptel-backend gptel--openai))
    ("gemini-pro" (setq-default gptel-model "gemini-pro"
                                gptel-backend
                                (gptel-make-gemini
                                 "Gemini"
                                 :key "GEMINI-KEY"
                                 :stream t)))))

With the above code evaluated, the following generates an error:

(gptel-model-config "gemini-pro")
(my/summarize-article "test string")

The error messages are:

error in process sentinel: string-trim: Wrong type argument: stringp, nil
error in process sentinel: Wrong type argument: stringp, nil

Gemini works fine when I use gptel or gptel-menu, so this seems specific to gptel-request.

karthink commented 6 months ago

@benthamite I see the problem. It's a little tricky to fix, so I will think about the best way to do it. Basically, every LLM provider does things differently, and gptel isn't yet presenting a uniform interface to all of them.

In the meantime, you can get around the problem by setting :stream to nil when you define the backend:

(defun gptel-model-config (model)
  "Configure `gptel' for MODEL."
  (interactive (list (completing-read
                      "Model: "
                      '("gpt-4" "gemini-pro" "gemini-pro-no-stream")
                      nil t)))
  (pcase model
    ("gpt-4" (setq-default ...))
    ("gemini-pro" (setq-default ...))
    ("gemini-pro-no-stream"
     (setq-default gptel-model "gemini-pro"
                   gptel-backend
                   (gptel-make-gemini
                    "Gemini-no-stream"
                    :key "GEMINI-KEY"
                    :stream nil))))) ;; Change :stream to nil

(gptel-model-config "gemini-pro-no-stream")
(my/summarize-article "test string")

This should work. I'll update this issue after I figure out the best way to fix it, and you can get rid of "gemini-pro-no-stream".


One other note: you don't need to create a new Gemini backend every time `gptel-model-config` runs. Define the backend once, outside the function:

(defvar my/gemini (gptel-make-gemini ...))

Then use my/gemini the way you do gptel--openai.
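Concretely, the one-time definition could look like this sketch (placeholder key, and the backend name is just illustrative):

```elisp
;; Define the backend once at load time instead of on every call.
(defvar my/gemini
  (gptel-make-gemini "Gemini"
    :key "GEMINI-KEY"
    :stream t)
  "Gemini backend for `gptel'.")

;; The pcase branch then only swaps variables:
;; ("gemini-pro" (setq-default gptel-model "gemini-pro"
;;                             gptel-backend my/gemini))
```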

benthamite commented 6 months ago

Thanks for the quick and detailed reply!

I confirm that it works. Thanks again!

EDIT: In case it is of interest, here’s the relevant code.

karthink commented 6 months ago

Keeping this issue open until gptel-request handles Gemini correctly.

karthink commented 6 months ago

You can remove gptel-extras-gemini-pro-no-stream-backend from your configuration now. If gptel-request still fails, please re-open this issue.