diguardiag opened 1 year ago
I don't know if it is really related, but for me the issue was resolved after specifying which OpenAI model to use in query_data.py: `llm = OpenAI(model_name="gpt-3.5-turbo", temperature=0)`
Without that, it defaults to text-davinci, which is far more expensive. You can check which model is actually being used at https://platform.openai.com/account/usage
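To put a rough number on "way more expensive": a quick back-of-the-envelope comparison, assuming the early-2023 list prices of $0.02 per 1K tokens for text-davinci-003 and $0.002 per 1K for gpt-3.5-turbo (these prices are my assumption, not from this thread; check the current pricing page):

```python
# Rough per-request cost comparison (prices are 2023-era assumptions).
PRICE_PER_1K = {
    "text-davinci-003": 0.020,  # USD per 1K tokens (assumed)
    "gpt-3.5-turbo": 0.002,     # USD per 1K tokens (assumed)
}

def cost(model: str, tokens: int) -> float:
    """Approximate USD cost of a request totalling `tokens` tokens."""
    return PRICE_PER_1K[model] * tokens / 1000

# A ~4K-token request, like the ones hitting the limit in this thread:
print(cost("text-davinci-003", 4000))  # 0.08
print(cost("gpt-3.5-turbo", 4000))     # 0.008
```

Under those assumed prices, every query against the default model costs about 10x what the same query costs on gpt-3.5-turbo.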
Agree with TychoML. It was solved for me by substituting `model_name="gpt-3.5-turbo"`.
Ran the example, ingested the state_of_the_union text, asked a few questions, and every single time I got this error:
```
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4231 tokens (3975 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
```
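The arithmetic behind that error is simple: the prompt tokens plus the requested completion tokens must fit inside the model's context window. A minimal sketch using the numbers from the traceback above (`fits` is a hypothetical helper, not part of any library):

```python
# Context-window budget check, mirroring the numbers in the error above.
MAX_CONTEXT = 4097  # total token budget for this model

def fits(prompt_tokens: int, max_completion_tokens: int) -> bool:
    """True if the request fits in the model's context window."""
    return prompt_tokens + max_completion_tokens <= MAX_CONTEXT

# The failing request: 3975 prompt + 256 completion = 4231 tokens.
print(fits(3975, 256))  # False: 4231 > 4097

# Two ways out: shrink the prompt (smaller or fewer retrieved chunks)...
print(fits(3800, 256))  # True
# ...or request a shorter completion (e.g. a smaller max_tokens).
print(fits(3975, 100))  # True
```

So either reduce how much retrieved context goes into the prompt, or lower the completion's `max_tokens`, until the sum stays under the limit.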