myrzx opened 1 year ago
Please reduce the length of the messages or completion.
It's the first request and I only said one sentence. When I change max_tokens from 4096 to 2048, it runs. What does '4096 in the completion' mean? 4096 empty tokens in the message to the ChatGPT API?
same question
From the OpenAI documentation on max_tokens: the maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). So prompt tokens (your messages) + max_tokens must be <= 4096, otherwise it raises this error.
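If you want to keep the completion budget as large as possible without tripping that limit, you can estimate the prompt size first and derive max_tokens from it. A minimal sketch, assuming gpt-3.5-turbo and the tiktoken package; the 4-token per-message overhead is a rough approximation, not exact accounting:

```python
import tiktoken

MODEL = "gpt-3.5-turbo"
CONTEXT_LIMIT = 4096

def count_prompt_tokens(messages):
    """Roughly count the tokens the chat messages will occupy."""
    encoding = tiktoken.encoding_for_model(MODEL)
    tokens = 0
    for message in messages:
        # Each message carries a few tokens of metadata (role, separators)
        # on top of its content; 4 is an approximation.
        tokens += 4 + len(encoding.encode(message["content"]))
    return tokens

message_log = [{"role": "user", "content": "Hi, are you a chatbot for me?"}]
# Whatever the prompt doesn't use is available for the completion.
max_tokens = CONTEXT_LIMIT - count_prompt_tokens(message_log)
```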
The flow works like this: previous messages are added to each subsequent request so that the context of the conversation is kept, so after a while the total number of tokens can grow past 4096.
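A common workaround for that growth is to trim the oldest turns before each request, so the running history plus a reserved completion budget stays under the limit. A rough sketch, reusing the hypothetical count_prompt_tokens() helper from the sketch above (none of this is from the original chat.py):

```python
def trim_message_log(message_log, reserved_for_completion=1024, context_limit=4096):
    """Drop the oldest non-system messages until the prompt fits."""
    while (count_prompt_tokens(message_log) + reserved_for_completion > context_limit
           and len(message_log) > 1):
        # Keep message_log[0] (typically the system prompt) and drop
        # the oldest user/assistant turn.
        del message_log[1]
    return message_log
```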
https://platform.openai.com/docs/api-reference/chat/create#chat/create-max_tokens max_tokens defaults to inf. If you set it to 3800, the messages you send must stay under 296 tokens (4096 - 3800), so it hits this error quickly.
You: Hi, are you a chatbot for me?
Traceback (most recent call last):
  File "C:\Users\Otp_Lab\Desktop\LXH2022\Fun\chat.py", line 77, in <module>
    main()
  File "C:\Users\Otp_Lab\Desktop\LXH2022\Fun\chat.py", line 48, in main
    response = send_message(message_log)
  File "C:\Users\Otp_Lab\Desktop\LXH2022\Fun\chat.py", line 10, in send_message
    response = openai.ChatCompletion.create(
  File "E:\Python\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "E:\Python\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "E:\Python\lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "E:\Python\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "E:\Python\lib\site-packages\openai\api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4096 tokens. However, you requested 4135 tokens (39 in the messages, 4096 in the completion). Please reduce the length of the messages or completion.
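The arithmetic in that error message is the whole story: 39 prompt tokens + the 4096 tokens requested via max_tokens = 4135, which exceeds the 4096-token context. The simplest fix is to omit max_tokens entirely (it defaults to inf, i.e. whatever room is left) or to compute it from the prompt size as above. A sketch of what send_message() could look like, assuming the pre-1.0 openai library shown in the traceback and that openai.api_key is already set:

```python
import openai

def send_message(message_log):
    # With max_tokens omitted, the API can use the remaining context
    # (4096 minus the prompt tokens) for the completion.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=message_log,
    )
    return response.choices[0].message.content
```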