zhaozewang opened this issue 1 year ago
The issue seems to be caused by the context length, not the paper length. Here's a similar problem: https://github.com/hwchase17/langchain/issues/2133.
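If it helps, here's a minimal sketch (not from this repo) of how to check a prompt's token count before sending it, using OpenAI's `tiktoken` library, to confirm it's the assembled messages rather than the raw paper that blows past the context window:

```python
# Minimal token-count check with tiktoken (an assumption: your prompt
# is a single string; adjust for a list of chat messages as needed).
import tiktoken

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    """Return how many tokens `text` occupies for the given model."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

prompt = "..."  # the full text you plan to send to the API
if count_tokens(prompt) > 4097:
    print("Prompt exceeds gpt-3.5-turbo's 4097-token context window.")
```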
I have the same question.
@cc-zehao @zhaozewang
You may consider modifying your "model_interface.py" to use gpt-4-32k instead. After I made that change, it could process much longer inputs than with gpt-3.5 (see the sketch below).
Available options: gpt-3.5-turbo, gpt-4, and gpt-4-32k
REF: https://platform.openai.com/docs/models/continuous-model-upgrades
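For reference, here's a hypothetical sketch of what that change might look like. The actual names in "model_interface.py" will differ; this assumes the legacy `openai<1.0` Python SDK (the one that raises `openai.error.InvalidRequestError`), that `OPENAI_API_KEY` is set in your environment, and that your account has gpt-4-32k access:

```python
# Hypothetical sketch; not the repo's actual code.
import openai  # legacy <1.0 SDK, reads OPENAI_API_KEY from the environment

response = openai.ChatCompletion.create(
    model="gpt-4-32k",  # was "gpt-3.5-turbo"; ~32k-token context vs ~4k
    messages=[{"role": "user", "content": "Summarize this paper ..."}],
)
print(response["choices"][0]["message"]["content"])
```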
Got this error:
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 7328 tokens. Please reduce the length of the messages.
Is it possible to break the paper into multiple pieces and then query one piece at a time to avoid this issue?
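That should work in principle. Here's one possible sketch (not the project's actual implementation): split the text into token-bounded chunks with `tiktoken`, summarize each chunk separately, then merge the partial summaries. The chunk size, prompt wording, and merging step are all assumptions:

```python
# Sketch of chunked querying; assumes the legacy openai<1.0 SDK and
# that OPENAI_API_KEY is set in the environment.
import openai
import tiktoken

def split_into_chunks(text: str, max_tokens: int = 3000,
                      model: str = "gpt-3.5-turbo") -> list[str]:
    """Split `text` into pieces of at most `max_tokens` tokens each.
    3000 leaves headroom for the prompt and the reply within 4097."""
    enc = tiktoken.encoding_for_model(model)
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]

def summarize(chunk: str) -> str:
    """Summarize one chunk with a single API call."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Summarize this part of a paper:\n{chunk}"}],
    )
    return resp["choices"][0]["message"]["content"]

paper_text = "..."  # full paper text
partial_summaries = [summarize(c) for c in split_into_chunks(paper_text)]
# A final call can then merge the partial summaries into one answer.
```

One caveat: each chunk is summarized without seeing the others, so cross-section context can be lost; a final merging call over the partial summaries helps recover some of it.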