gkamradt / langchain-tutorials

Overview and tutorial of the LangChain Library

Code Understanding Use case, running in limit of max tokens #27

Closed: rjain15 closed this issue 1 year ago

rjain15 commented 1 year ago

```
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4114 tokens (3858 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
```

Not sure how to reduce the max_tokens or the prompt size.
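
The limit applies to the prompt and completion combined: 3858 prompt tokens plus the 256 requested for the completion comes to 4114, which is 17 tokens over the 4097 cap. A quick pre-flight check with tiktoken (a sketch; `prompt_text` is a placeholder for whatever prompt the chain assembles, not a name from the tutorial):

```python
import tiktoken

# text-davinci-003 was the default model for LangChain's OpenAI wrapper at the time
enc = tiktoken.encoding_for_model("text-davinci-003")

prompt_tokens = len(enc.encode(prompt_text))  # prompt_text: the assembled prompt (placeholder)
completion_budget = 256                       # the max_tokens requested in the error above

# The API rejects the request whenever this total exceeds the model's context window
assert prompt_tokens + completion_budget <= 4097
```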

gkamradt commented 1 year ago

You can reduce max_tokens when you create your LLM object, or shrink the prompt you're putting into the LLM. I'm not sure which piece of code this error is coming from, but you may also need to switch to a smaller chunk size.
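
A minimal sketch of both suggestions, assuming the LangChain APIs of that era (`langchain.llms.OpenAI` and `RecursiveCharacterTextSplitter`); `source_code_text` is a placeholder for the code being indexed, not a variable from the tutorial:

```python
from langchain.llms import OpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter

# 1) Shrink the completion budget: max_tokens is the "256 for the
#    completion" part of the error message.
llm = OpenAI(max_tokens=128)

# 2) Shrink the chunks that get stuffed into the prompt.
#    chunk_size here is measured in characters, not tokens.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
docs = splitter.create_documents([source_code_text])  # source_code_text: placeholder
```

Smaller chunks reduce the "3858 in your prompt" side of the total, while a smaller max_tokens reduces the completion side; either change can bring prompt plus completion back under the 4097-token window.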