Closed Milor123 closed 1 month ago
New update (not solved): I've tried switching from Ollama (installed locally on my PC) to Docker Ollama. Why? Because I also have other problems related to the normal Ollama.
And now I have:
But it does not work when I try to use Milvus (my AnythingLLM container crashes with it); when I use a Docker Chroma instead of Milvus, it works very well.
It looks like a bug, @shatfield4, and not an OS problem.
While the error for the Milvus connection is not very helpful, we can at least handle the exception to prevent a full crash of AnythingLLM, since that is an annoyance.
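A minimal sketch of the kind of guard meant here, in plain Node.js. The `connectToVectorDb` name is hypothetical (it is not AnythingLLM's actual API); the point is only that the failing gRPC call gets caught instead of taking the whole container down:

```javascript
// Hypothetical stand-in for the vector-db connection call that
// currently throws an unhandled gRPC error and crashes the process.
async function connectToVectorDb() {
  throw new Error("14 UNAVAILABLE: Connection dropped");
}

// Wrap the call so a failed Milvus connection surfaces as a normal
// error result instead of killing the container.
async function safeConnect() {
  try {
    await connectToVectorDb();
    return { ok: true };
  } catch (e) {
    console.error("Vector DB connection failed:", e.message);
    return { ok: false, error: e.message };
  }
}

safeConnect().then((result) =>
  console.log(result.ok ? "connected" : "degraded, but still running")
);
```

The user still sees a connection error in the logs, but embedding simply fails for that request rather than crashing AnythingLLM.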
How can I help you debug it? What logs or what else do you need?
Oh, there are no more logs to get. I was just saying that the logs from their library do not help us much, since the failure has to do with gRPC connection termination. The stack trace you provided should be enough to get us started on replicating this, assuming your connection information is accurate and that is not what is causing the issue.
Ahh, I understand, thank you very much. Hey bro, I have a technical question that is not directly related, but the reason I was trying another vector database is that I thought it would help me solve my problem. You see, I was trying to embed a document of words defined as if they were dictionary entries.
Example:
- Cat: It has four legs and says meow
- Dog: It's a crazy zombie... etc etc
- Jagar: It's not humanoid
I want the LLM, when the user asks for "cat", to tell them what it is like. The problem is that I do not know if it is the embedding model or the vector database, but when I type the word, the search does not find anything containing the keyword; when I search for "dog", nothing appears. On many occasions it returns one or several useless contexts where the word does not even appear.
Do I misunderstand how all this works, or should I change the database or the embedding model? So far I have changed the context size, changed the chunk size used to split the vectors, tried several different embedding models, and switched to Chroma, but when I ask the LLM using only one word, as it should be, I get results that are NOT related. They are useless, so the LLM will never know the answer.
What should I do to try to solve this problem? Change the vector DB, or what do you think?
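For context on why one-word queries behave this way: vector retrieval does not do keyword matching at all. It embeds the query, embeds each chunk, and ranks chunks by similarity (typically cosine). A toy sketch with made-up three-dimensional vectors (real embeddings have hundreds of dimensions) shows how a chunk that mixes several dictionary entries can score poorly against a single-word query even though the word is inside it:

```javascript
// Cosine similarity, the metric vector DBs such as Milvus and Chroma
// commonly use to rank stored chunks against a query embedding.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Pretend embeddings, invented for illustration only.
const queryCat      = [0.9, 0.1, 0.0]; // embedding of the query "cat"
const chunkCatEntry = [0.8, 0.2, 0.1]; // chunk holding only the "Cat" entry
const chunkMixed    = [0.3, 0.3, 0.9]; // chunk where many entries were glued together

console.log(cosine(queryCat, chunkCatEntry)); // clearly higher score
console.log(cosine(queryCat, chunkMixed));    // clearly lower score
```

This is why chunking so that each dictionary entry lands in its own chunk usually matters more than which vector DB you pick: a mixed chunk's embedding averages several topics and drifts away from any single-word query.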
Email me this question at team@mintplexlabs.com - I don't want to pollute this GH issue with something off-topic!
Thank you very much, bro!
How are you running AnythingLLM?
Docker (local)
What happened?
I've tried searching the docs but nothing is there: https://docs.useanything.com/setup/vector-database-configuration/local/milvus
I've used a Docker image of Milvus, and it allows me to access the web panel from my PC, but AnythingLLM throws an error when trying to embed a document.
My Milvus version is v2.4.9: https://github.com/milvus-io/milvus/releases/tag/v2.4.9
And I've run it with podman-compose.
Milvus console output:
Error from podman logs:
Thank you very much, guys!
Are there known steps to reproduce?
1. Upload a file (nice).
2. Search for the file and move it into the workspace (OK).
3. But when I try "Save and Embed", the Docker image crashes.