Open CedricGoby opened 2 months ago
🤔 Hm, this is weird. Looks like it is looping, or failing to do the recursive summarization because the model context overflows, and then it is unable to even estimate tokens.
[backend] error: (node:112) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
That warning is actually normal and comes from LangChain, so the real issue is why LC cannot do the fallback token calculation.
Looks like this was patched in a later version of @langchain/community or @langchain/core, so we need to bump those deps to fix it.
Hi,
I'm using the Docker version of AnythingLLM and Chroma.
The Docker host is a small VM with 2 CPUs, 4 GB of RAM, and no GPU.
The LLM is "mistral-large-2204", the vector database is "Chroma" and the embedder is the default "AnythingLLM Embedder".
I have 2 PDF files in my workspace: a small one and a larger one.
If I ask the agent to summarize those PDF files, it works for the small one:
But there is an error for the larger PDF file:
Can you help me understand what's wrong in that case?
Regards