emacs-openai / codegpt

Use GPT-3 inside Emacs

Bad request? #9

Closed: chriselrod closed this issue 1 year ago

chriselrod commented 1 year ago

For example, running M-x codegpt-improve, I get this in *Messages*:

[error] request--callback: peculiar error: 400
error in process sentinel: openai--handle-error: 400 - Bad request.  Please check error message and your parameters
error in process sentinel: 400 - Bad request.  Please check error message and your parameters

Perhaps I have misconfigured codegpt and/or openai?

(use-package openai
  :straight (openai :type git :host github :repo "emacs-openai/openai")
  :custom
  (openai-key "mysecretkey")
  (openai-user "myemailaddress"))

(use-package codegpt
  :straight (codegpt :type git :host github :repo "emacs-openai/codegpt"))

Except of course mysecretkey and myemailaddress are my OpenAI API key and the account's email address, respectively.

A 400 suggests a client-side problem, making this look like an issue on my side?

jcs090218 commented 1 year ago

I tried it today but couldn't reproduce this issue. Bad request is very generic; it could mean there is a problem with your key, or that an invalid request was sent. Assuming there is no error in the async library (and I am not able to reproduce this), I suggest you check your environment: key, network connection, etc.
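
To rule out the key itself, you can hit a cheap authenticated endpoint outside the package; a sketch using the same request library openai.el relies on (run it in *scratch* after setting openai-key):

(require 'cl-lib)
(require 'request)

;; List the available models; any 2xx here means the key is accepted.
(request "https://api.openai.com/v1/models"
  :type "GET"
  :headers `(("Authorization" . ,(concat "Bearer " openai-key)))
  :parser 'json-read
  :success (cl-function
            (lambda (&key data &allow-other-keys)
              (message "Key OK: %d models visible"
                       (length (cdr (assq 'data data))))))
  :error (cl-function
          (lambda (&key error-thrown &allow-other-keys)
            (message "Key check failed: %S" error-thrown))))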

esilvert commented 1 year ago

Hello, I have the exact same error. What is interesting is that I was able to query the API successfully exactly once before everything started failing. I asked it to explain a part of my code for testing, then codegpt started to return 400 no matter what.

I've installed it with use-package:

(use-package codegpt)

Then I customized the Openai Key option in the openai group to set my secret key.
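
In init-file terms, that customization amounts to roughly this (the key string is just a placeholder):

(use-package codegpt)

;; Same effect as customizing `openai-key' in the openai group:
(setq openai-key "sk-...")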

EDIT: I just tried codegpt-custom with a random question and it worked. Afterward, codegpt-explain kept working until I asked it to improve my Ruby code. Maybe some escaping is missing?

So I tried reproducing my error with generic code that has the same structure, and interestingly I couldn't make it fail until I had exactly this block; removing any one of these four lines fixes the issue ...

  def action
    @model.assign_attributes({ attribute_name: 'value', **strong_params})

    @model.status = if @model.attribute_id == @other_model.relation.attribute_id # random comment
                      'some_value'
                    else
                      'other_value'
                    end

    @model.save!

    redirect_to :action_name
  end

chriselrod commented 1 year ago

Stacktrace:

Debugger entered--Lisp error: (error "400 - Bad request.  Please check error message and...")
  signal(error ("400 - Bad request.  Please check error message and..."))
  error("400 - Bad request.  Please check error message and...")
  openai--handle-error(#s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-wwJZt6VXW8EqHjxEtSb0T3BlbkFJYQWJ2umf7rEF...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x15797ceab376fcd9>) :url "https://api.openai.com/v1/completions" :response #1 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Thu, 16 Mar 2023 13:03:51 GMT\nco..." :-timer nil :-backend curl))
  #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>)(:data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :symbol-status error :error-thrown (error http 400) :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-wwJZt6VXW8EqHjxEtSb0T3BlbkFJYQWJ2umf7rEF...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x15797ceab376fcd9>) :url "https://api.openai.com/v1/completions" :response #8 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Thu, 16 Mar 2023 13:03:51 GMT\nco..." :-timer nil :-backend curl))
  apply(#f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) (:data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :symbol-status error :error-thrown (error http 400) :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-wwJZt6VXW8EqHjxEtSb0T3BlbkFJYQWJ2umf7rEF...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x15797ceab376fcd9>) :url "https://api.openai.com/v1/completions" :response #10 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Thu, 16 Mar 2023 13:03:51 GMT\nco..." :-timer nil :-backend curl)))
  request--callback(#<killed buffer> :error #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-wwJZt6VXW8EqHjxEtSb0T3BlbkFJYQWJ2umf7rEF...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x15797ceab376fcd9>) :url "https://api.openai.com/v1/completions" :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-wwJZt6VXW8EqHjxEtSb0T3BlbkFJYQWJ2umf7rEF...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x15797ceab376fcd9>) :url "https://api.openai.com/v1/completions" :response #17 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Thu, 16 Mar 2023 13:03:51 GMT\nco..." :-timer nil :-backend curl) :encoding utf-8)
  apply(request--callback #<killed buffer> (:error #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-wwJZt6VXW8EqHjxEtSb0T3BlbkFJYQWJ2umf7rEF...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x15797ceab376fcd9>) :url "https://api.openai.com/v1/completions" :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings #3 :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Thu, 16 Mar 2023 13:03:51 GMT\nco..." :-timer nil :-backend curl) :encoding utf-8))
  request--curl-callback("https://api.openai.com/v1/completions" #<process request curl> "finished\n")
  apply(request--curl-callback ("https://api.openai.com/v1/completions" #<process request curl> "finished\n"))
  #f(compiled-function (&rest args2) #<bytecode 0x18e231ee82eb3ddd>)(#<process request curl> "finished\n")

Note that my now-revoked key was included in the message above (it wasn't revoked at the time I tried this).

*CodeGPT*

Please improve the following.

constexpr auto getMaxDigits(PtrMatrix<Rational> A) -> Vector<size_t> {
  size_t M = size_t(A.numRow());
  size_t N = size_t(A.numCol());
  Vector<size_t> maxDigits{unsigned(N), 0};
  invariant(size_t(maxDigits.size()), N);
  // this is slow, because we count the digits of every element
  // we could optimize this by reducing the number of calls to countDigits
  for (Row i = 0; i < M; i++) {
    for (size_t j = 0; j < N; j++) {
      size_t c = countDigits(A(i, j));
      maxDigits[j] = std::max(maxDigits[j], c);
    }
  }
  return maxDigits;
}

I also asked it to improve my code. Explaining it ("What is the following?") didn't work either.

From the error messages, I see

"This model's maximum context length is 4097 tokens..."

This seems like it should be well under 4097 tokens. Is it passing a large amount of additional context?

jcs090218 commented 1 year ago

EDIT: I just tried codegpt-custom with a random question and it worked. Afterward, codegpt-explain kept working until I asked it to improve my Ruby code. Maybe some escaping is missing?

I think I once encountered something similar to this, and my best guess was escaping as well. But I eventually moved on since I couldn't pin down the culprit...

So I tried reproducing my error with generic code that has the same structure, and interestingly I couldn't make it fail until I had exactly this block; removing any one of these four lines fixes the issue ...

Thanks for posting your code here. I will give it a try and see what I can do to resolve this!

This seems like it should be well under 4097 tokens. Is it passing a large amount of additional context?

I am not 100% sure how OpenAI calculates their tokens. I tried it today, but the token "count" seems to be a bit odd. 🤔

johanvts commented 1 year ago

I also get the "peculiar error: 400" with well under 1000 tokens, pretty much for everything.

johanvts commented 1 year ago

I suspect it has to do with having quotation marks in my prompt.

jcs090218 commented 1 year ago

openai.el uses json-encode to encode the value; here is a sample result from the Ruby code above (https://github.com/emacs-openai/codegpt/issues/9#issuecomment-1471663322).

{"model":"text-davinci-003","prompt":"Please improve the following.\n\ndef action\n  @model.assign_attributes({ attribute_name: 'value', **strong_params})\n\n  @model.status = if @model.attribute_id == @other_model.relation.attribute_id # random comment\n                    'some_value'\n                  else\n                    'other_value'\n                  end\n\n  @model.save!\n\n  redirect_to :action_name\nend\n\n","max_tokens":4000,"temperature":1.0}

It looks good to me, so I have no idea why this ends up as a 400 Bad request. 😕
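
For what it's worth, json-encode does escape embedded quotes and newlines correctly; a quick check you can run in *scratch* (the prompt string here is made up):

(require 'json)

;; Embedded quotes become \" and the newline becomes \n in the JSON text:
(json-encode '(("prompt" . "say \"hi\"\nplease")))
;; => "{\"prompt\":\"say \\\"hi\\\"\\nplease\"}"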

chriselrod commented 1 year ago

What did you run to see the request?

jcs090218 commented 1 year ago

I printed it in the code, so there isn't a way to see it by default. However, I've added a debug flag, so you can see it with (setq openai--show-log t). Make sure you update to the latest version!

chriselrod commented 1 year ago

[ENCODED]: {"model":"text-davinci-003","prompt":"Please improve the following.\n\n/// \\brief Returns the maximum number of digits per column of a matrix.\nconstexpr auto getMaxDigits(PtrMatrix<Rational> A) -> Vector<size_t> {\n  size_t M = size_t(A.numRow());\n  size_t N = size_t(A.numCol());\n  Vector<size_t> maxDigits{unsigned(N), 0};\n  invariant(size_t(maxDigits.size()), N);\n  // this is slow, because we count the digits of every element\n  // we could optimize this by reducing the number of calls to countDigits\n  for (Row i = 0; i < M; i++) {\n    for (size_t j = 0; j < N; j++) {\n      size_t c = countDigits(A(i, j));\n      maxDigits[j] = std::max(maxDigits[j], c);\n    }\n  }\n  return maxDigits;\n}\n\n","max_tokens":4000,"temperature":1.0,"user":"elrodc@gmail.com"}
[error] request--callback: peculiar error: 400
openai--handle-error: 400 - Bad request.  Please check error message and your parameters

According to https://platform.openai.com/tokenizer, this corresponds to 265 tokens for GPT-3?

I didn't realize I could expand the ... in the error messages. The full error message says:

"This model's maximum context length is 4097 tokens, however you requested 4236 tokens (236 in your prompt; 4000 for the completion). Please reduce your prompt; or completion length."

So it seems the prompt is 236 tokens with text-davinci-003.

Where does the "4000 for the completion" come from? I see we're setting "max_tokens":4000, but your query has this too, and its prompt is also at least 97 tokens, so it isn't simply prompt + 4000 vs 4097.
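
Although, rereading the thread, your Ruby request above also ended in a 400, so the accounting the error message implies would be consistent with both failures: the prompt tokens and max_tokens have to fit in the context window together. A sketch with the numbers above:

;; prompt tokens + max_tokens must fit inside the model's window:
(let ((prompt-tokens 236)     ; "236 in your prompt"
      (max-tokens 4000)       ; the "max_tokens":4000 we send
      (context-window 4097))  ; text-davinci-003's limit
  (<= (+ prompt-tokens max-tokens) context-window)) ; => nil, hence the 400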

chriselrod commented 1 year ago

If anyone wants to stare at the backtrace:

Debugger entered--Lisp error: (error "400 - Bad request.  Please check error message and...")
  error("400 - Bad request.  Please check error message and your parameters")
  openai--handle-error(#s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens, however you requested 4236 tokens (236 in your prompt; 4000 for the completion). Please reduce your prompt; or completion length.") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-mykey...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x160b54aee9a256d9>) :url "https://api.openai.com/v1/completions" :response #1 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Sat, 18 Mar 2023 04:24:24 GMT\ncontent-type: application/json\ncontent-length: 294\naccess-control-allow-origin: *\nopenai-model: text-davinci-003\nopenai-organization: user-kfjwl04tenq80dxhlhnwmto6\nopenai-processing-ms: 3\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 60\nx-ratelimit-limit-tokens: 150000\nx-ratelimit-remaining-requests: 59\nx-ratelimit-remaining-tokens: 146000\nx-ratelimit-reset-requests: 1s\nx-ratelimit-reset-tokens: 1.6s\nx-request-id: 7bef8a7e775e32e65ed6632c41e77ce7\n" :-timer nil :-backend curl))
  #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11>(:data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :symbol-status error :error-thrown (error http 400) :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-mykey...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x160b54aee9a256d9>) :url "https://api.openai.com/v1/completions" :response #8 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Sat, 18 Mar 2023 04:24:24 GMT\nco..." :-timer nil :-backend curl))
  apply(#<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> (:data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :symbol-status error :error-thrown (error http 400) :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-mykey...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x160b54aee9a256d9>) :url "https://api.openai.com/v1/completions" :response #10 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Sat, 18 Mar 2023 04:24:24 GMT\nco..." :-timer nil :-backend curl)))
  request--callback(#<killed buffer> :error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-mykey...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x160b54aee9a256d9>) :url "https://api.openai.com/v1/completions" :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-mykey...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please improve the following.\\n\\n/// \\\\brief Returns the maximum number of digits per column of a matrix.\\nconstexpr auto getMaxDigits(PtrMatrix<Rational> A) -> Vector<size_t> {\\n  size_t M = size_t(A.numRow());\\n  size_t N = size_t(A.numCol());\\n  Vector<size_t..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x160b54aee9a256d9>) :url "https://api.openai.com/v1/completions" :response #17 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Sat, 18 Mar 2023 04:24:24 GMT\nco..." :-timer nil :-backend curl) :encoding utf-8)
  apply(request--callback #<killed buffer> (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-mykey...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please improve the following.\\n\\n/// \\\\brief Returns the maximum number of digits per column of a matrix.\\nconstexpr auto getMaxDigits(PtrMatrix<Rational> A) -> Vector<size_t> {\\n  size_t M = size_t(A.numRow());\\n  size_t N = size_t(A.numCol());\\n  Vector<size_t> maxDigits{unsigned(N), 0};\\n  invariant(size_t(maxDigits.size()), N);\\n  // this is slow, because we count the digits of every element\\n  // we could optimize this by reducing the number of calls to countDigits\\n  for (Row i = 0; i < M; i++) {\\n    for (size_t j = 0; j < N; j++) {\\n      size_t c = countDigits(A(i, j));\\n      maxDigits[j] = std::max(maxDigits[j], c);\\n    }\\n  }\\n  return maxDigits;\\n}\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0,\"user\":\"elrodc@gmail.com\"}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x160b54aee9a256d9>) :url "https://api.openai.com/v1/completions" :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens, however you requested 4236 tokens (236 in your prompt; 4000 for the completion). Please reduce your prompt; or completion length.") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings #3 :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Sat, 18 Mar 2023 04:24:24 GMT\ncontent-type: application/json\ncontent-length: 294\naccess-control-allow-origin: *\nopenai-model: text-davinci-003\nopenai-organization: user-kfjwl04tenq80dxhlhnwmto6\nopenai-processing-ms: 3\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 60\nx-ratelimit-limit-tokens: 150000\nx-ratelimit-remaining-requests: 59\nx-ratelimit-remaining-tokens: 146000\nx-ratelimit-reset-requests: 1s\nx-ratelimit-reset-tokens: 1.6s\nx-request-id: 7bef8a7e775e32e65ed6632c41e77ce7\n" :-timer nil :-backend curl) :encoding utf-8))
  request--curl-callback("https://api.openai.com/v1/completions" #<process request curl> "finished\n")
  apply(request--curl-callback ("https://api.openai.com/v1/completions" #<process request curl> "finished\n"))
  #f(compiled-function (&rest args2) #<bytecode 0x18e21520d85a7ddd>)(#<process request curl> "finished\n")

jcs090218 commented 1 year ago

Ah, okay. Then I think this line is the culprit:

https://github.com/emacs-openai/codegpt/blob/a8a8026430a74140e976aad3037a9a2f03698171/codegpt.el#L66

Can you try tweaking the value down and see if it works? Everything kinda makes sense now. 🤔
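
Assuming the defcustom at that line is codegpt-max-tokens (the source of the "max_tokens":4000 in the requests above), lowering it would look like:

;; Leave room for the prompt inside the 4097-token context window;
;; 2000 here is just an example value.
(setq codegpt-max-tokens 2000)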

chriselrod commented 1 year ago

Thanks, I think that fixed it. It printed a version with updated comments.