This is a ChatGPT application project; it is only applicable to desktop environments.
557 stars · 690 forks
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4153 tokens (233 in your prompt; 3920 for the completion). Please reduce your prompt; or completion length. #15
Your input:
geiwoyigelizi
Traceback (most recent call last):
  File "app.py", line 5, in
    run()
  File "/data/EASYChatGPT/bbot.py", line 24, in run
    out = chatbot.ask(input_text)
  File "/home/user/anaconda3/envs/EASYChatGPT/lib/python3.7/site-packages/revChatGPT/Official.py", line 50, in ask
    stop=["\n\n\n"],
  File "/home/user/anaconda3/envs/EASYChatGPT/lib/python3.7/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/user/anaconda3/envs/EASYChatGPT/lib/python3.7/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 160, in create
    request_timeout=request_timeout,
  File "/home/user/anaconda3/envs/EASYChatGPT/lib/python3.7/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/home/user/anaconda3/envs/EASYChatGPT/lib/python3.7/site-packages/openai/api_requestor.py", line 623, in _interpret_response
    stream=False,
  File "/home/user/anaconda3/envs/EASYChatGPT/lib/python3.7/site-packages/openai/api_requestor.py", line 680, in _interpret_response_line
    rbody, rcode, resp.data, rheaders, stream_error=stream_error
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4153 tokens (233 in your prompt; 3920 for the completion). Please reduce your prompt; or completion length.
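The numbers in the error explain the failure: the 233-token prompt plus the 3920-token completion request totals 4153, which exceeds the model's 4097-token context window. A minimal pure-Python sketch (function name hypothetical) of clamping the completion budget before calling the API:

```python
def clamp_max_tokens(prompt_tokens: int, requested: int, context_window: int = 4097) -> int:
    """Return a completion budget that fits inside the model's context window."""
    available = context_window - prompt_tokens
    if available <= 0:
        # The prompt alone fills the window; no completion is possible.
        raise ValueError("Prompt alone exceeds the context window; trim the prompt first.")
    return min(requested, available)

# With the values from this traceback: 233 + 3920 = 4153 > 4097,
# so the request is clamped to 4097 - 233 = 3864 completion tokens.
budget = clamp_max_tokens(prompt_tokens=233, requested=3920)
```

In this project, such a clamp would replace whatever fixed completion length produced the 3920-token request; prompt_tokens would come from an actual tokenizer count rather than a hard-coded number.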
Your input: How do I set your Stop sequences? ChatGPT output: Stop sequences can be implemented by setting a specific string in the GPT-3 language model. The string can be a simple string or a more complex regular expression. When the GPT-3 model encounters this string, it stops generating text.