Closed: wilsonlv closed this issue 6 months ago
That would seem to indicate that the Ollama embedder returned a zero-length vector. Can you first confirm manually that this embedder is processing text chunks properly?
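As a minimal sketch of such a manual check (assuming Ollama is running locally on its default port 11434 and that `llama2-chinese` is the model currently configured as the embedder), you could inspect what the embedder returns for a sample chunk:

```python
# Manually query Ollama's embeddings endpoint and check the vector length.
# Assumes a local Ollama instance on the default port; adjust URL/model as needed.
import requests

OLLAMA_URL = "http://localhost:11434/api/embeddings"
MODEL = "llama2-chinese"  # the model currently set as the embedder in AnythingLLM

resp = requests.post(OLLAMA_URL, json={"model": MODEL, "prompt": "测试文本块"})
resp.raise_for_status()
embedding = resp.json().get("embedding", [])

# A working embedder should return a non-empty vector of floats.
print(f"embedding length: {len(embedding)}")
```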
Yeah, if I add an English docx, it succeeds.
Closing as wontfix. I did not notice earlier, but you have an LLM chat model, llama2-chinese, set as your embedding model. LLMs cannot embed text. Please use a dedicated embedding model such as nomic-embed-text or mxbai-embed-large for Chinese-language embedding support.
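As a rough sketch of verifying that a dedicated embedding model handles Chinese text before pointing AnythingLLM at it (assuming the model has already been pulled, e.g. with `ollama pull nomic-embed-text`):

```python
# Embed a Chinese sentence with a dedicated embedding model and confirm the
# returned vector is non-empty. Assumes a local Ollama instance on port 11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "这是一个中文测试句子。"},
)
resp.raise_for_status()
vector = resp.json().get("embedding", [])

assert len(vector) > 0, "embedder returned an empty vector"
print(f"dimensions: {len(vector)}")  # nomic-embed-text typically returns 768 dimensions
```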
How are you running AnythingLLM?
Docker (local)
What happened?
Are there known steps to reproduce?
No response