havvk opened this issue 3 months ago
https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings As you can see, there is no usage data from the Ollama Embeddings API:
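For reference, here is a minimal check against that endpoint (a sketch assuming Ollama is running locally on its default port 11434 with nomic-embed-text pulled); the documented response carries only the embedding vector, with no usage or token-count fields:

import requests

# Query Ollama's native embeddings endpoint directly.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "What are the top themes in this story?"},
)
print(list(resp.json().keys()))  # ['embedding'] -- no usage information in the response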
@havvk My error is similar to yours; it is probably a validation error raised by pydantic.
return EmbeddingResponse(
    object="list",
    data=embeddings,
    model=request.model,
    usage={},
)
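The 422 below looks like exactly what pydantic raises when a field declared as a string receives a list of token ids. A minimal reproduction (the model and field names here are hypothetical, not the proxy's actual code):

from pydantic import BaseModel, ValidationError

class EmbeddingRequest(BaseModel):
    # Hypothetical request model; the proxy presumably declares `input` as a plain string.
    input: str

try:
    EmbeddingRequest(input=[3923, 527, 279, 1948, 22100, 304, 420, 3446, 30])
except ValidationError as e:
    # pydantic v2 reports type 'string_type' with "Input should be a valid string",
    # matching the 422 detail shown in the log below.
    print(e)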
When I use this command to query: 'python3 -m graphrag.query --data ./indexing/output/20240802-172944/artifacts --method local --community_level 2 "What are the top themes in this story?"', it throws this error:
Error embedding chunk {'OpenAIEmbedding': "Error code: 422 - {'detail': [{'type': 'string_type', 'loc': ['body', 'input'], 'msg': 'Input should be a valid string', 'input': [3923, 527, 279, 1948, 22100, 304, 420, 3446, 30]}]}"}
[] None
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/graphrag/query/main.py", line 76, in
I saw that the input to this embedding call is [3923, 527, 279, 1948, 22100, 304, 420, 3446, 30], not a string. Does anyone face the same issue?
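Those ids look like tiktoken token ids for the query text itself; a quick sketch to check, assuming the cl100k_base encoding used by default for OpenAI models:

import tiktoken

# Decode the ids from the 422 error, assuming the cl100k_base encoding.
enc = tiktoken.get_encoding("cl100k_base")
ids = [3923, 527, 279, 1948, 22100, 304, 420, 3446, 30]
print(enc.decode(ids))  # expected: "What are the top themes in this story?"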
@JayWu890225 Did you solve the problem? I have the same issue.
I also have a problem like this: HTTP Request: POST http://localhost:11435/embeddings "HTTP/1.1 404 Not Found"
@JayWu890225 Did you solve the problem? I have the same issue.
Actually, I did not find the root cause of this issue. I changed some code in the embed method in embedding.py:
for chunk in token_chunks:
    try:
        # Changed to embed the original text string instead of the token chunk,
        # so a plain string (not a list of token ids) reaches the embeddings endpoint.
        embedding, chunk_len = self._embed_with_retry(text, **kwargs)
        chunk_embeddings.append(embedding)
        chunk_lens.append(chunk_len)
    # The original except clause (which logs "Error embedding chunk") is unchanged and omitted here.
I also have a problem like this: HTTP Request: POST http://localhost:11435/embeddings "HTTP/1.1 404 Not Found"
You should set the api_base of embeddings to "http://localhost:11435/v1" in settings.yaml.
I found a solution here: https://github.com/microsoft/graphrag/issues/451#issuecomment-2220861232
After running the embedding proxy via embedding_proxy.py:
python embedding_proxy.py --port 11435 --host http://localhost:11434
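For context, such a proxy accepts OpenAI-style embedding requests and forwards them to Ollama's /api/embeddings endpoint. Below is a rough sketch of the idea; it is not the actual embedding_proxy.py from the linked issue, and the route, request model, and field names are assumptions:

import argparse

import httpx
import uvicorn
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
ollama_host = "http://localhost:11434"

class EmbeddingRequest(BaseModel):
    model: str
    input: str | list[str]  # graphrag may send token ids here, which this model would reject

@app.post("/v1/embeddings")
async def create_embeddings(request: EmbeddingRequest):
    texts = [request.input] if isinstance(request.input, str) else request.input
    data = []
    async with httpx.AsyncClient() as client:
        for i, text in enumerate(texts):
            # Forward each text to Ollama's native embeddings endpoint.
            r = await client.post(
                f"{ollama_host}/api/embeddings",
                json={"model": request.model, "prompt": text},
            )
            data.append({"object": "embedding", "index": i, "embedding": r.json()["embedding"]})
    # Ollama reports no token usage, so the OpenAI-style usage block is zero-filled.
    return {
        "object": "list",
        "data": data,
        "model": request.model,
        "usage": {"prompt_tokens": 0, "total_tokens": 0},
    }

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--port", type=int, default=11435)
    parser.add_argument("--host", default="http://localhost:11434")
    args = parser.parse_args()
    ollama_host = args.host
    uvicorn.run(app, host="0.0.0.0", port=args.port)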
Then I changed the api_base parameter in settings.yaml to point at the embedding proxy:
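Roughly like this (a sketch following the graphrag settings.yaml layout; field names may differ between graphrag versions, and the /v1 suffix depends on which route the proxy serves):

embeddings:
  llm:
    type: openai_embedding
    model: nomic-embed-text
    api_base: http://localhost:11435/v1  # the embedding proxy, per the suggestion above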
Then I tried to run indexing with the new index_app.py, but it failed at the final stage.
There are some errors in the embedding proxy. It seems there is no usage data coming from Ollama (model: nomic-embed-text:latest).