sandbury opened this issue 2 days ago
@sandbury This is probably because Ollama's default context size is 2048 (https://github.com/ollama/ollama/issues/1005). This means that even if the retrieved documents are correct, documents at the end of the context are cropped to fit the model's context window and may produce less meaningful results.
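One way to test this theory is to override `num_ctx` when calling the model. Below is a minimal sketch against the Ollama REST API, assuming a local server on the default port 11434; the model name, prompt, and 8192 value are placeholders you would replace with your own setup.

```python
# Minimal sketch: raising Ollama's context window via the REST API.
# Assumes a local Ollama server on the default port; "llama3" is a
# placeholder for whatever model you actually pulled.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",                    # placeholder model name
        "prompt": "Answer using the retrieved documents: ...",
        "stream": False,
        "options": {"num_ctx": 8192},         # override the 2048-token default
    },
    timeout=300,
)
print(response.json()["response"])
```

The same override can be baked into a Modelfile with `PARAMETER num_ctx 8192` if you prefer not to pass it on every request. If accuracy with two documents improves after raising the limit, context cropping was likely the cause.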
Do you have documentation explaining how you load data, perform chunking, and retrieve text, tables, and images?
Description
The accuracy when chatting with a single document is very high, but when chatting with two files the accuracy is very low, even though the information panel displays the most relevant content. I built my local RAG with Ollama.
Reproduction steps
Screenshots
Logs
Browsers
No response
OS
No response
Additional information
No response