karthink / gptel

A simple LLM client for Emacs
GNU General Public License v3.0
1.04k stars 111 forks

Confused about how to set the `gptel` backend #242

Closed benthamite closed 3 months ago

benthamite commented 3 months ago

I'm exploring ways to set different models in different major modes and came up with the following approach:

(setq gptel-api-key (auth-source-pass-get "key" (concat "tlon/core/openai.com/" tlon-core-email-shared)))
(setq gptel-default-mode 'org-mode)

(gptel-make-gemini "Gemini"
  :stream t
  :key (auth-source-pass-get
        'secret
        (concat "tlon/core/makersuite.google.com/" tlon-core-email-shared)))

;; adapted from the `:reader' lambda of `transient-infix-set' in `gptel-transient.el'
(defun gptel-extras-model-config (globally &optional backend-name model-name)
  "Configure `gptel' for BACKEND-NAME and MODEL-NAME.
By default, configure it for the current buffer. If GLOBALLY is non-nil, or
called with a prefix argument, configure it globally."
  (interactive "P")
  (let* ((backend-name (or backend-name
                           (if (<= (length gptel--known-backends) 1)
                               (caar gptel--known-backends)
                             (completing-read "Backend name: "
                                              (mapcar #'car gptel--known-backends)
                                              nil t))))
         (backend (alist-get backend-name gptel--known-backends nil nil #'equal))
         (backend-models (gptel-backend-models backend))
         (model-name (or model-name
                         (if (= (length backend-models) 1)
                             (car backend-models)
                           (completing-read "Model name: " backend-models))))
         (setter (if globally #'set-default #'set)))
    (funcall setter 'gptel-model model-name)
    (funcall setter 'gptel-backend backend)))

(dolist (hook '(text-mode-hook bibtex-mode-hook))
  (add-hook hook (lambda ()
                   "Use Gemini in all text-related modes, broadly construed."
                   (gptel-extras-model-config nil "Gemini" "gemini-pro"))))
(add-hook 'prog-mode-hook
          (lambda () (gptel-extras-model-config nil "ChatGPT" "gpt-4")))

This works well, in the sense that the relevant models become active in the right modes, and I can interact with them from buffers in those modes. However, if I open a dedicated buffer via `M-x gptel`, I get this warning:

⛔ Warning (gptel): Preferred `gptel-model' "gemini-pro" not supported in "ChatGPT", using "gpt-3.5-turbo" instead

The warning suggests (I think) that incompatible values are being set for `gptel-model` and `gptel-backend`, but my function sets those variables consistently. Thoughts?

karthink commented 3 months ago

What are these two things set to?

benthamite commented 3 months ago

Thanks for the quick reply.

"gpt-3.5-turbo" and "ChatGPT", respectively.

Okay, so it seems that gptel complains about the default values of `gptel-model` and `gptel-backend`, even though their buffer-local values are correctly set in the gptel buffer. Is that correct?
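
To verify, evaluating something like the following from inside the gptel chat buffer should make the mismatch visible (a diagnostic sketch; I'm assuming `gptel-backend-name` is the struct accessor for the backend's name):

```elisp
;; Evaluate from inside the gptel chat buffer.  Compares the
;; buffer-local values of `gptel-model'/`gptel-backend' with their
;; global defaults, which is where the mismatch seems to live.
(list :local-model     gptel-model
      :default-model   (default-value 'gptel-model)
      :local-backend   (gptel-backend-name gptel-backend)
      :default-backend (gptel-backend-name (default-value 'gptel-backend)))
```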

karthink commented 3 months ago

It was checking the default value of `gptel-backend` against the buffer-local value of the model; this should be fixed now. Please update and test.
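
For anyone reading along, the gist of the fix is to compare like with like: the model should be validated against the backend taken from the same scope (both buffer-local, or both defaults). A rough sketch of that predicate, not gptel's actual code:

```elisp
;; Sketch only -- hypothetical helper, not gptel's implementation.
;; Both values are read from the current buffer, so a buffer-local
;; model is never checked against the global default backend.
(defun my/gptel-model-supported-p ()
  "Return non-nil if `gptel-model' is offered by `gptel-backend'."
  (member gptel-model (gptel-backend-models gptel-backend)))
```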

benthamite commented 3 months ago

Great—I confirm it’s fixed. Thank you for resolving this issue so quickly.