ChuloAI / BrainChulo

Harnessing the Memory Power of the Camelids

Requested tokens exceed context window of 2048 #16

Closed · metatwin closed this issue 1 year ago

metatwin commented 1 year ago

Title: Error when processing large text document

Description: I attempted to process a text document of approximately 5600 words with the tool, and after 2-3 rounds of processing I received the following error:

"... for completion_chunk in completion_chunks: File "/Users/opt/anaconda3/envs/oobabooga/lib/python3.10/site-packages/llama_cpp/llama.py", line 618, in _create_completion raise ValueError( ValueError: Requested tokens exceed context window of 2048"

I'm not sure what caused this error. Are there any known issues with processing larger documents?
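For anyone hitting the same error: the check inside llama-cpp-python boils down to a simple token-budget test along these lines. The numbers below are made up for illustration, not taken from the actual run:

```python
# Illustrative values only; real numbers depend on the prompt and settings.
n_ctx = 2048          # fixed context window the model was loaded with
prompt_tokens = 1800  # tokens consumed by the document chunk + instructions
max_tokens = 512      # completion tokens requested from the model

# llama-cpp-python refuses the request when prompt and completion together
# would overflow the context window, which is the ValueError seen above.
if prompt_tokens + max_tokens > n_ctx:
    print(f"Requested tokens exceed context window of {n_ctx}")
```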

iGavroche commented 1 year ago

Hi @metatwin, this is an error on the LLM side: the prompt plus the requested response size exceeds the model's 2048-token context window. I'll convert your issue into a task, since we clearly need to be smarter about how we chunk documents. Right now we split them into 1000-word chunks; in your case, reducing that to perhaps 500 words should help. If you feel adventurous, you can make that change in your local copy of BrainChulo here: https://github.com/iGavroche/BrainChulo/blob/main/app/conversations/document_based.py#L26
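As a rough sketch of that change, a word-based chunker with a smaller chunk size looks something like the following. This is illustrative only, not the actual code in document_based.py:

```python
def split_into_chunks(text: str, chunk_size: int = 500) -> list[str]:
    """Split a document into chunks of at most chunk_size words."""
    words = text.split()
    return [
        " ".join(words[i : i + chunk_size])
        for i in range(0, len(words), chunk_size)
    ]

# A ~5600-word document becomes ~12 chunks at 500 words instead of ~6 at
# 1000, leaving more of the 2048-token window free for the response.
```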

iGavroche commented 1 year ago

Submitted https://github.com/iGavroche/BrainChulo/pull/17 to attempt a quick fix