This model's maximum context length is 4097 tokens. However, you requested 4160 tokens (70 in the messages, 4090 in the completion). Please reduce the length of the messages or completion.
@TrampCGuo
This is a limit of GPT-3.5-turbo: it supports a maximum context length of 4097 tokens, shared between the prompt and the completion. You can avoid the error by reducing the number of context messages on the settings page, or by lowering the requested completion length.
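The arithmetic behind the error is that prompt tokens plus requested completion tokens must fit inside the 4097-token window (70 + 4090 = 4160 > 4097). A minimal sketch of clamping the completion budget to whatever the prompt leaves over; the helper name and structure are illustrative, not part of any official API:

```python
CONTEXT_LIMIT = 4097  # total budget for prompt + completion (gpt-3.5-turbo)

def clamp_max_tokens(prompt_tokens: int, requested_completion: int,
                     context_limit: int = CONTEXT_LIMIT) -> int:
    """Shrink the requested completion so prompt + completion fits the window."""
    available = context_limit - prompt_tokens
    if available <= 0:
        raise ValueError("Prompt alone exceeds the model's context window")
    return min(requested_completion, available)

# The numbers from the error message: 70 prompt tokens, 4090 requested.
print(clamp_max_tokens(70, 4090))  # 4027 completion tokens still fit
```

Passing the clamped value as `max_tokens` in the request avoids the error without trimming the messages themselves.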