Closed · ahgsql closed this issue 1 year ago
Reducing the number of sources sometimes helps with this problem. It's not entirely clear why it happens, but the most likely cause is that the retrieval chain passes too much retrieved context to the LLM in a single call.
You can also use gpt-4, which has an 8k-token context limit, if you have access to it.
Use the map_reduce method
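To illustrate what map_reduce does, here is a minimal pure-Python sketch of the idea: the question is run against each chunk separately (map), and the partial answers are then combined in one final call (reduce), so no single call has to fit all the context. The `llm` callable and the prompt formats here are stand-ins, not LangChain's actual internals.

```python
def map_reduce_qa(chunks, question, llm):
    """Sketch of the map_reduce pattern over retrieved chunks.

    `llm` is any callable taking a prompt string and returning a string;
    in practice this would be a real model call.
    """
    # Map: each call sees only one chunk, so it stays under the context limit.
    partials = [llm(f"{question}\n\nContext:\n{chunk}") for chunk in chunks]
    # Reduce: one final call over the (much shorter) partial answers.
    return llm(f"{question}\n\nPartial answers:\n" + "\n".join(partials))
```

In LangChain this pattern is selected with the `chain_type="map_reduce"` argument (e.g. `load_qa_chain(llm, chain_type="map_reduce")`), though the exact import path depends on your LangChain version.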
Got an error about token size. How can I limit it to avoid these errors?
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4599 tokens. Please reduce the length of the messages
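One way to stay under the limit is to trim the retrieved chunks to a token budget before building the prompt. Below is a rough sketch that estimates tokens as ~4 characters each (a common rule of thumb for English text); for exact counts you would use the tiktoken library instead. The function name and defaults are illustrative, not part of any library.

```python
def trim_chunks(chunks, max_tokens=3000, chars_per_token=4):
    """Greedily keep whole chunks until an approximate token budget is hit.

    Tokens are estimated as ~4 characters each; this leaves headroom below
    the 4097-token model limit for the question and the answer.
    """
    budget = max_tokens * chars_per_token
    kept, used = [], 0
    for chunk in chunks:
        if used + len(chunk) > budget:
            break  # drop this chunk and everything after it
        kept.append(chunk)
        used += len(chunk)
    return kept
```

Reducing the retriever's `k` (how many documents it returns) has a similar effect, since fewer chunks mean less context in the prompt.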