Closed: stanribilir closed this issue 6 months ago
This may help explain why you are seeing that result, and given the length of your documents, even more so. It could also be worth trying to ask your question via the @agent
invocation. https://docs.useanything.com/faq/why-is-llm-not-using-docs. There is always room to improve RAG, but that is not really what this issue is going for, as it seems more informational.
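For context, here is a minimal, generic sketch of top-k vector retrieval (not AnythingLLM's actual pipeline; the function name and chunk count are illustrative) showing why a long document can cause specific pages to be missed: only the handful of chunks most similar to the query ever reach the LLM.

```python
# Generic illustration of top-k chunk retrieval; not AnythingLLM's code.
import numpy as np

def top_k_chunks(query_vec, chunk_vecs, k=4):
    """Return indices of the k chunks most similar to the query (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity of each chunk vs. the query
    return np.argsort(scores)[::-1][:k] # only these k chunks are sent to the LLM

# A 69-page PDF may split into hundreds of chunks; if a specific tag name
# embeds poorly, its chunk may never rank in the top k, so the LLM never
# sees that page and cannot report its page number.
```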
How are you running AnythingLLM?
AnythingLLM desktop app
What happened?
Hello. I've tested a 69-page PDF file many times with different embedding models. The built-in model seems to be pretty fast, but unfortunately I cannot get what I ask for; it doesn't even retrieve the page number for specific tag names, a simple operation much like search. Is there any room for improvement in the built-in embedding model? (I also used the built-in vector DB. I also tried testing with the Ollama nomic embedding via the Ollama server IP; that took longer and gave even worse output than the built-in model in my case.)
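As a side note, the Ollama embedding setup mentioned above can be sanity-checked independently of AnythingLLM with a direct call to Ollama's embeddings endpoint. This is only a sketch; the server address is a placeholder and the model name ("nomic-embed-text") is assumed from the description.

```python
# Hedged sketch: verify the Ollama embedding server responds and returns a
# sensible vector. Replace OLLAMA_SERVER_IP with the actual host.
import requests

resp = requests.post(
    "http://OLLAMA_SERVER_IP:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "sample chunk of the PDF"},
    timeout=60,
)
resp.raise_for_status()
embedding = resp.json()["embedding"]
print(len(embedding))  # nomic-embed-text typically returns 768-dimensional vectors
```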
Are there known steps to reproduce?
No response