Description
Set the default context to send only the latest 9 history messages, to avoid the token-limit-exceeded error after chatting for a while.
The maximum number of history messages can be changed by setting the environment variable PUBLIC_MAX_HISTORY_MESSAGES if needed.
It is not recommended to set it too low (e.g. 2), as that can easily lead the AI to give erratic answers, even with GPT-4.
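For reference, a minimal TypeScript sketch of what this truncation could look like. The names (ChatMessage, limitHistory) and the import.meta.env access are illustrative assumptions, not necessarily the exact code in this PR:

```ts
// Illustrative sketch only; names and env access are assumptions, not the PR's exact code.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Read the limit from the environment (Vite/Astro-style public variable), defaulting to 9.
const maxHistoryMessages =
  Number(import.meta.env.PUBLIC_MAX_HISTORY_MESSAGES) || 9

// Keep only the latest N messages before sending the request to the OpenAI API.
function limitHistory(messages: ChatMessage[]): ChatMessage[] {
  return messages.slice(-maxHistoryMessages)
}
```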
Linked Issues
#309
Additional context
Anse is a very useful application, but ChatGPT-Demo is more suitable to deploy for relatives and friends who don't know much about software.
However, when a conversation in ChatGPT-Demo goes on for a long time, the context_length_exceeded error occurs easily.
After studying the source code of ChatGPT-Demo, I found that when it sends a message to the OpenAI API, it sends all of the history messages along with it, so it is no wonder the token limit is exceeded after chatting for a while.
This also causes excessive token consumption.
Therefore, I think it might be appropriate to set a certain length limit on the context messages sent to the OpenAI API. This can ensure that the chat continues in most cases.
Even though the AI might become "forgetful," it would still provide a better experience than clearing all chat history and starting over.
I set the default context message count to 9, which allows the OpenAI API to receive the latest 4 complete question and answer exchanges, as well as the most recent question.
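As a quick sanity check of that count, slicing the last 9 entries of a strictly alternating user/assistant history (using the hypothetical limitHistory sketch above) keeps exactly 4 complete exchanges plus the newest question:

```ts
// Roles of a long, strictly alternating chat: user, assistant, user, assistant, ...
const roles = Array.from({ length: 25 }, (_, i) => (i % 2 === 0 ? 'user' : 'assistant'))

// The last 9 entries start and end with 'user':
// ['user','assistant','user','assistant','user','assistant','user','assistant','user']
// i.e. 4 complete question/answer exchanges plus the most recent question.
console.log(roles.slice(-9))
```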