karthink / gptel

A simple LLM client for Emacs
GNU General Public License v3.0

Add o1-preview support (or how to hack it in) #393

Closed: johannesCmayer closed this issue 1 week ago

johannesCmayer commented 2 months ago

Currently, I can't do this:

(gptel-make-openai "custom1" :models '("o1-preview") :key gptel-api-key)

The problem I am running into so far is that o1-preview does not yet support system messages, and gptel always seems to send one. Is there an option to not send a system message (or, if one already exists, what is it)?

Alternatively, how can I hack it so that it works? Some pointers would be helpful.

karthink commented 2 months ago

There is an open PR to allow disabling the system message, see #339.
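
Until that lands, a rough stopgap is to strip the system message from the request payload with advice. This is an untested sketch: my/gptel-drop-system-message is a hypothetical name, and it assumes the prompt list built by gptel--parse-buffer begins with the system message, as in the diffs below.

(defun my/gptel-drop-system-message (prompts)
  "Remove the leading system message from PROMPTS, if present."
  (if (equal (plist-get (car prompts) :role) "system")
      (cdr prompts)
    prompts))

;; `gptel--parse-buffer' is a generic function dispatching on the
;; backend, so this advice affects every backend, not just OpenAI.
(advice-add 'gptel--parse-buffer :filter-return
            #'my/gptel-drop-system-message)

Remove it with (advice-remove 'gptel--parse-buffer #'my/gptel-drop-system-message) once #339 or native o1 handling lands.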

ileixe commented 1 month ago

gptel$ git diff HEAD^1
diff --git a/gptel-openai.el b/gptel-openai.el
index beb252b..a18a61e 100644
--- a/gptel-openai.el
+++ b/gptel-openai.el
@@ -151,7 +151,7 @@ with differing settings.")
                            (regexp-quote (gptel-response-prefix-string)))))
             prompts)
       (and max-entries (cl-decf max-entries)))
-    (cons (list :role "system"
+    (cons (list :role "user"
                 :content gptel--system-message)
           prompts)))

diff --git a/gptel.el b/gptel.el
index 88892eb..694597a 100644
--- a/gptel.el
+++ b/gptel.el
@@ -419,12 +419,16 @@ The current options for ChatGPT are
 - \"gpt-4-turbo-preview\"
 - \"gpt-4-32k\"
 - \"gpt-4-1106-preview\"
+- \"o1-preview\"
+- \"o1-mini\"

 To set the model for a chat session interactively call
 `gptel-send' with a prefix argument."
   :safe #'always
   :type '(choice
           (string :tag "Specify model name")
+          (const :tag "o1 (preview)" "o1-preview")
+          (const :tag "o1 mini" "o1-mini")
           (const :tag "GPT 4 omni mini" "gpt-4o-mini")
           (const :tag "GPT 3.5 turbo" "gpt-3.5-turbo")
           (const :tag "GPT 3.5 turbo 16k" "gpt-3.5-turbo-16k")
@@ -455,7 +459,8 @@ To set the temperature for a chat session interactively call
    :stream t
    :models '("gpt-3.5-turbo" "gpt-3.5-turbo-16k" "gpt-4o-mini"
              "gpt-4" "gpt-4o" "gpt-4-turbo" "gpt-4-turbo-preview"
-             "gpt-4-32k" "gpt-4-1106-preview" "gpt-4-0125-preview")))
+             "gpt-4-32k" "gpt-4-1106-preview" "gpt-4-0125-preview"
+             "o1-preview" "o1-mini")))

 (defcustom gptel-backend gptel--openai
   "LLM backend to use.

For those willing to try this hack, I can confirm it works with these changes.
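
A minimal usage sketch on top of this patch, assuming gptel-model is still string-valued at this commit (the o1 endpoints also rejected streaming at launch, as the next comment notes):

(setq gptel-model "o1-preview"  ; model names are plain strings at this commit
      gptel-stream nil)         ; o1 endpoints reject stream = true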

Inkbottle007 commented 2 weeks ago

I did exactly what @ileixe suggested above, with adjustments for gptel's recent breaking changes (gptel-model and the entries of :models are now symbols rather than strings):

# git diff HEAD^1
diff --git a/gptel-openai.el b/gptel-openai.el
index 61339b3..a4ac6ef 100644
--- a/gptel-openai.el
+++ b/gptel-openai.el
@@ -181,7 +181,8 @@ with differing settings.")
                   :content
                   (gptel--trim-prefixes (buffer-substring-no-properties (point-min) (point-max))))
             prompts))
-    (cons (list :role "system"
+    ;; (cons (list :role "system"
+    (cons (list :role "user"
                 :content gptel--system-message)
           prompts)))

Additionally, my init.el includes the following configuration:

(setq
 gptel-model 'o1-mini
 gptel-backend (gptel-make-openai
                   "ChatGPT"
                 :key 'gptel-api-key
                 :stream nil ;; server doesn't accept stream = true with o1-mini
                 :models '((gpt-3.5-turbo) (gpt-3.5-turbo-16k) (gpt-4o-mini)
                           (gpt-4) (gpt-4o) (gpt-4-turbo) (gpt-4-turbo-preview)
                           (gpt-4-32k) (gpt-4-1106-preview) (gpt-4-0125-preview)
                           (o1-preview) (o1-mini))))

And that's it.
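
As a quick sanity check that the configuration above is picked up, a minimal sketch using gptel-request, gptel's programmatic entry point (the prompt text is arbitrary):

(gptel-request "Say hello in one word."
  :callback (lambda (response info)
              (if (stringp response)
                  (message "o1 replied: %s" response)
                ;; INFO is a plist of request metadata, including :status
                (message "request failed: %s" (plist-get info :status)))))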

karthink commented 2 weeks ago

Support for o1-preview and o1-mini has been added. Please test and let me know if it works as expected.
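
With built-in support, no source patch should be needed; a minimal sketch of opting in, given that gptel-model is symbol-valued after the breaking change mentioned above:

(setq gptel-model 'o1-preview)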

karthink commented 1 week ago

Closing, as this is now implemented. If you encounter a bug when using o1-preview or o1-mini, please reopen this thread.