mmz-001 / knowledge_gpt

Accurate answers and instant citations for your documents.
https://knowledgegpt.streamlit.app/
MIT License

This model's maximum context length is 4097 tokens, however you requested 5762 tokens (5506 in your prompt; 256 for the completion). Please reduce your prompt; or completion length. #8

Closed · jrt324 closed this 1 year ago

jrt324 commented 1 year ago

Input: a short Chinese article (~6,000 words)

Asking any question produces the following error: "This model's maximum context length is 4097 tokens, however you requested 5762 tokens (5506 in your prompt; 256 for the completion). Please reduce your prompt; or completion length."
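For reference, the arithmetic in the error message is internally consistent; a minimal sketch of the token budget it implies (the 4097 and 256 figures are taken directly from the error text above):

```python
MAX_CONTEXT = 4097   # model context window, from the error message
COMPLETION = 256     # tokens reserved for the completion

def max_prompt_tokens() -> int:
    # Tokens actually available for the prompt once the completion
    # budget is set aside.
    return MAX_CONTEXT - COMPLETION  # 3841

# The failing request sent 5506 prompt tokens, overshooting the
# context window by:
overshoot = 5506 + COMPLETION - MAX_CONTEXT  # 1665 tokens
```

So the prompt would need to shrink by about 1,665 tokens (or the completion budget reduced) before the request could fit.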

mmz-001 commented 1 year ago

Glad you pointed this out. Out of curiosity, how big was the input question?

jrt324 commented 1 year ago

> Glad you pointed this out. Out of curiosity, how big was the input question?

Just 8–20 words, e.g., "Can you tell me about the main characters in this article?"

mmz-001 commented 1 year ago

I'll look into this. It would be super helpful if you could give me a link to the article so that I can reproduce the error.

elisa-chou commented 1 year ago

I'm having the same problem. I'm asking "what is this content mainly about?"

Error msg: This model's maximum context length is 4097 tokens, however you requested 4812 tokens (4556 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.

mmz-001 commented 1 year ago

@elisa-chou The issue seems to be that I'm splitting the document by number of characters, when it should be split by number of tokens. I'll fix this soon.
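The fix described above can be sketched as a splitter bounded by token count rather than character count. This is not the repository's actual implementation; in practice `count_tokens` would wrap a real tokenizer such as tiktoken's encoding for the target model (Chinese text in particular yields far more tokens per character than English). A naive whitespace tokenizer stands in here so the sketch is self-contained:

```python
def count_tokens(text: str) -> int:
    # Stand-in tokenizer: one whitespace-separated word = one token.
    # A real fix would use the model's own tokenizer instead.
    return len(text.split())

def split_by_tokens(text: str, max_tokens: int) -> list[str]:
    # Greedily accumulate words into a chunk until the token budget
    # is reached, then start a new chunk.
    words = text.split()
    chunks: list[str] = []
    current: list[str] = []
    for word in words:
        current.append(word)
        if count_tokens(" ".join(current)) >= max_tokens:
            chunks.append(" ".join(current))
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks

chunks = split_by_tokens("one two three four five six seven", max_tokens=3)
# each resulting chunk holds at most 3 of the stand-in tokens
```

Because the bound is expressed in tokens, every chunk fits a known share of the model's context window regardless of language, which is exactly what character-based splitting fails to guarantee.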