emacs-openai / codegpt

Use GPT-3 inside Emacs
GNU General Public License v3.0

Support for GPT-4? #11

Closed benthamite closed 1 year ago

benthamite commented 1 year ago

The package works fine with codegpt-model set to the default value ("text-davinci-003"). However, it fails when I set it to "gpt-4". OpenAI granted me access to the GPT-4 models yesterday, so I assume this is an issue with the package, especially since it also fails when I set codegpt-model to "gpt-3.5-turbo". (See this page for a list of available models.)

If GPT-4 is not currently supported, are there plans to support it soon? Thanks.

Backtrace:

Debugger entered--Lisp error: (error "Internal error: 404")
  error("Internal error: %s" 404)
  openai--handle-error(#s(request-response :status-code 404 :history nil :data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :error-thrown (error http 404) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer [API Key]")) :data "{\"model\":\"gpt-3.5-turbo\",\"prompt\":\"How are you?\\n\\n\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x143b641711f5f9d9>) :url "https://api.openai.com/v1/completions" :response #1 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 404 \ndate: Sat, 18 Mar 2023 01:06:08 GMT\ncontent-type: application/json\ncontent-length: 227\naccess-control-allow-origin: *\nopenai-organization: [User ID]\nopenai-processing-ms: 103\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 3500\nx-ratelimit-limit-tokens: 90000\nx-ratelimit-remaining-requests: 3499\nx-ratelimit-remaining-tokens: 85999\nx-ratelimit-reset-requests: 17ms\nx-ratelimit-reset-tokens: 2.666s\nx-request-id: add53e20bfda8e445412a38d0147869b\n" :-timer nil :-backend curl))
  #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11>(:data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :symbol-status error :error-thrown (error http 404) :response #s(request-response :status-code 404 :history nil :data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :error-thrown (error http 404) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer [API Key]")) :data "{\"model\":\"gpt-3.5-turbo\",\"prompt\":\"How are you?\\n\\n\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x143b641711f5f9d9>) :url "https://api.openai.com/v1/completions" :response #8 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 404 \ndate: Sat, 18 Mar 2023 01:06:08 GMT\ncontent-type: application/json\ncontent-length: 227\naccess-control-allow-origin: *\nopenai-organization: [User ID]\nopenai-processing-ms: 103\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 3500\nx-ratelimit-limit-tokens: 90000\nx-ratelimit-remaining-requests: 3499\nx-ratelimit-remaining-tokens: 85999\nx-ratelimit-reset-requests: 17ms\nx-ratelimit-reset-tokens: 2.666s\nx-request-id: add53e20bfda8e445412a38d0147869b\n" :-timer nil :-backend curl))
  apply(#<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> (:data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :symbol-status error :error-thrown (error http 404) :response #s(request-response :status-code 404 :history nil :data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :error-thrown (error http 404) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer [API Key]")) :data "{\"model\":\"gpt-3.5-turbo\",\"prompt\":\"How are you?\\n\\n\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x143b641711f5f9d9>) :url "https://api.openai.com/v1/completions" :response #10 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 404 \ndate: Sat, 18 Mar 2023 01:06:08 GMT\ncontent-type: application/json\ncontent-length: 227\naccess-control-allow-origin: *\nopenai-organization: [User ID]\nopenai-processing-ms: 103\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 3500\nx-ratelimit-limit-tokens: 90000\nx-ratelimit-remaining-requests: 3499\nx-ratelimit-remaining-tokens: 85999\nx-ratelimit-reset-requests: 17ms\nx-ratelimit-reset-tokens: 2.666s\nx-request-id: add53e20bfda8e445412a38d0147869b\n" :-timer nil :-backend curl)))
  request--callback(#<killed buffer> :error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer [API Key]")) :data "{\"model\":\"gpt-3.5-turbo\",\"prompt\":\"How are you?\\n\\n\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x143b641711f5f9d9>) :url "https://api.openai.com/v1/completions" :response #s(request-response :status-code 404 :history nil :data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :error-thrown (error http 404) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer [API Key]")) :data "{\"model\":\"gpt-3.5-turbo\",\"prompt\":\"How are you?\\n\\n\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x143b641711f5f9d9>) :url "https://api.openai.com/v1/completions" :response #17 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 404 \ndate: Sat, 18 Mar 2023 01:06:08 GMT\ncontent-type: application/json\ncontent-length: 227\naccess-control-allow-origin: *\nopenai-organization: [User ID]\nopenai-processing-ms: 103\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 3500\nx-ratelimit-limit-tokens: 90000\nx-ratelimit-remaining-requests: 3499\nx-ratelimit-remaining-tokens: 85999\nx-ratelimit-reset-requests: 17ms\nx-ratelimit-reset-tokens: 2.666s\nx-request-id: add53e20bfda8e445412a38d0147869b\n" :-timer nil :-backend curl) :encoding utf-8)
  apply(request--callback #<killed buffer> (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer [API Key]")) :data "{\"model\":\"gpt-3.5-turbo\",\"prompt\":\"How are you?\\n\\n\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x143b641711f5f9d9>) :url "https://api.openai.com/v1/completions" :response #s(request-response :status-code 404 :history nil :data ((error (message . "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?") (type . "invalid_request_error") (param . "model") (code))) :error-thrown (error http 404) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings #3 :-buffer #<killed buffer> :-raw-header "HTTP/2 404 \ndate: Sat, 18 Mar 2023 01:06:08 GMT\ncontent-type: application/json\ncontent-length: 227\naccess-control-allow-origin: *\nopenai-organization: [User ID]\nopenai-processing-ms: 103\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 3500\nx-ratelimit-limit-tokens: 90000\nx-ratelimit-remaining-requests: 3499\nx-ratelimit-remaining-tokens: 85999\nx-ratelimit-reset-requests: 17ms\nx-ratelimit-reset-tokens: 2.666s\nx-request-id: add53e20bfda8e445412a38d0147869b\n" :-timer nil :-backend curl) :encoding utf-8))
  request--curl-callback("https://api.openai.com/v1/completions" #<process request curl> "finished\n")
  apply(request--curl-callback ("https://api.openai.com/v1/completions" #<process request curl> "finished\n"))
  #f(compiled-function (&rest args2) #<bytecode 0x18e23160535de8dd>)(#<process request curl> "finished\n")
jcs090218 commented 1 year ago

This package doesn't use the Chat endpoint (v1/chat/completions); instead, it uses the Completions endpoint (v1/completions).

https://github.com/emacs-openai/codegpt/blob/a8a8026430a74140e976aad3037a9a2f03698171/codegpt.el#L98

That's why chat-only models such as "gpt-3.5-turbo" and "gpt-4" don't work.
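The distinction can be sketched as follows. This is a minimal illustration, not the package's actual code: the two payload shapes mirror the OpenAI HTTP API, and `build_request` is a hypothetical helper, not a function from codegpt or the openai package.

```python
# Sketch: why a chat model 404s on the legacy completions endpoint.
# The OpenAI API expects a different request body per endpoint:
#   /v1/completions       -> {"model": ..., "prompt": "..."}
#   /v1/chat/completions  -> {"model": ..., "messages": [{"role": ..., "content": ...}]}

COMPLETIONS_URL = "https://api.openai.com/v1/completions"
CHAT_URL = "https://api.openai.com/v1/chat/completions"

# Chat-only models must go through the chat endpoint.
CHAT_ONLY_MODELS = {"gpt-3.5-turbo", "gpt-4"}

def build_request(model: str, text: str) -> tuple[str, dict]:
    """Return (url, payload) routed to the endpoint the model supports."""
    if model in CHAT_ONLY_MODELS:
        return CHAT_URL, {
            "model": model,
            "messages": [{"role": "user", "content": text}],
        }
    return COMPLETIONS_URL, {"model": model, "prompt": text}

# Sending {"model": "gpt-3.5-turbo", "prompt": ...} to COMPLETIONS_URL,
# as in the backtrace above, is what triggers the 404
# "This is a chat model and not supported in the v1/completions endpoint."
url, payload = build_request("gpt-3.5-turbo", "How are you?")
```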

This package isn't intended for ChatGPT, but I welcome feedback and ideas from the users' perspective! :)


Support for GPT-4?

To answer the question in your title: it's already supported in the upstream https://github.com/emacs-openai/openai, but not in this package (yet?). I would like to hear what other people think; it's often unnecessary to use ChatGPT unless we are having a conversation. If we implement this, it will turn this package into something else, e.g., a code assistant, a code advisor, etc. :)

WDYT?

gabriben commented 1 year ago

For now, the only way to access GPT-4 (which to me seems far better than GPT-3.5, especially with regard to hallucination) is via the chat endpoint. So for now I think it would make sense to allow the chat endpoint.

jcs090218 commented 1 year ago

It seems like one or more people would like to see this implemented; I will work on this over the weekend! :)

jcs090218 commented 1 year ago

It's in; see https://github.com/emacs-openai/codegpt#-using-chatgpt for details.
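For reference, the README section linked above boils down to a small configuration change. A sketch, assuming the `codegpt-tunnel` and `codegpt-model` user options described there (verify the names against your installed version):

```elisp
;; Assumed option names -- check the codegpt README for the exact ones.
;; Route requests through the chat endpoint instead of completions,
;; which is required for chat-only models such as gpt-4.
(setq codegpt-tunnel 'chat       ; use v1/chat/completions
      codegpt-model  "gpt-4")
```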

jcs090218 commented 1 year ago

Here is another implementation built on ChatGPT. It's more advanced and focuses on conversation. It's pretty fun. Go check it out! :D

Link: https://github.com/emacs-openai/chatgpt