Open yakeworld opened 4 months ago
python -m graphrag.query --root ./ragtest --method local "What is microscopy ?"
INFO: Reading settings from ragtest/settings.yaml
creating llm client with model: llama3
creating embedding llm client with model: nomic-embed-text:latest
ERROR:root:Error getting OpenAI compatible embedding: 404 page not found
ERROR:root:Failed to generate embedding
ERROR:root:Failed to generate embedding for query
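The "404 page not found" suggests graphrag's OpenAI-compatible client is posting to a path that the Ollama server does not serve. A minimal diagnostic sketch below, assuming a default Ollama install at localhost:11434 and its native /api/embeddings endpoint; the URL paths and the probe helper here are illustrative, not part of graphrag itself:

```python
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"  # assumption: default Ollama address


def embedding_url(api_style: str) -> str:
    """Return the embedding endpoint for a given API style.

    An OpenAI-compatible client posts to /v1/embeddings, while Ollama
    (at the time of this issue) served embeddings at /api/embeddings --
    a mismatch like this would produce exactly a "404 page not found".
    """
    paths = {"openai": "/v1/embeddings", "ollama": "/api/embeddings"}
    return OLLAMA_BASE + paths[api_style]


def probe(model: str = "nomic-embed-text:latest",
          text: str = "What is microscopy?") -> int:
    """POST to Ollama's native endpoint; returns the HTTP status.

    Requires a running Ollama server, so call this manually.
    """
    body = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        embedding_url("ollama"), data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

If probe() succeeds against /api/embeddings while the same request to the /v1/ path 404s, the server is up and the model is loaded; the failure is purely the endpoint path graphrag expects.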
Thanks for sharing this! I'm trying to find a solid workaround, but it seems I'll have to move away from Ollama until its embedding endpoint becomes more compatible and robust. I'm moving to a more API-centric design for the app, so hopefully we can solve the issue through those means. I'll keep digging into this to see what I can do.
The command
python -m graphrag.index --root ./ragtest
runs successfully when the cache and output directories already exist. However, after deleting the cache and output directories and running the command again, the following error occurs:
This issue is primarily due to a limitation in the Ollama embedding service: it does not serve the OpenAI-compatible embedding endpoint that graphrag calls, so the request returns 404. To resolve this, we need to point graphrag at an alternative, OpenAI-compatible embedding service.
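One way to do that is to redirect the embeddings section of settings.yaml at an OpenAI-compatible server. A sketch below, assuming the standard graphrag settings.yaml layout; the api_base URL and port are placeholders for whatever compatible service you run (e.g. LM Studio or a hosted API), not values from this issue:

```yaml
embeddings:
  llm:
    type: openai_embedding
    # assumption: an OpenAI-compatible server listening here
    api_base: http://localhost:1234/v1
    model: nomic-embed-text
    api_key: ${GRAPHRAG_API_KEY}
```

After changing the embedding backend, delete the cache directory before re-running graphrag.index, since cached embeddings from the previous backend would otherwise be mixed in.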