TheAiSingularity / graphrag-local-ollama

Local model support for Microsoft's graphrag using ollama (llama3, mistral, gemma2, phi3) - LLM & embedding extraction
MIT License

embeddings/api_base is ignored in settings.yaml #74

Open bertalan-fodor opened 4 days ago

bertalan-fodor commented 4 days ago

It seems that my setting for the embeddings api_base is ignored. The relevant fragment from my settings.yaml is:

type: openai_embedding # or azure_openai_embedding
model: nomic-embed-text
api_base: http://192.168.50.13:11434/v1
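(For context: in graphrag's settings.yaml these keys normally sit nested under `embeddings:` → `llm:`. The sketch below shows that usual layout; it is an assumption about the surrounding file, not a copy of the reporter's full config.)

```yaml
embeddings:
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding   # or azure_openai_embedding
    model: nomic-embed-text
    api_base: http://192.168.50.13:11434/v1
```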

However, in the logs I see the following:

20:02:25,224 graphrag.index.verbs.text.embed.strategies.openai INFO embedding 125 inputs via 125 snippets using 8 batches. max_batch_size=16, max_tokens=8191
20:02:25,232 httpx INFO HTTP Request: POST http://127.0.0.1:11434/api/embeddings "HTTP/1.1 404 Not Found"

The config is read correctly and displayed when running with --verbose; the value is just not used when the actual call is made.
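The URL in the log (http://127.0.0.1:11434/api/embeddings) is the default endpoint of the ollama Python client, which suggests the fork's embedding strategy may call that client directly, in which case the api_base from settings.yaml never reaches the request. The exact call site inside the repo is an assumption here; the following is only a minimal sketch of that default behaviour and of passing the host explicitly:

```python
# Sketch: the ollama Python client, when built without an explicit host,
# falls back to its default (or the OLLAMA_HOST environment variable) and
# posts to http://127.0.0.1:11434/api/embeddings -- the URL in the log above.
import ollama
from ollama import Client

# Module-level call: uses the default host, settings.yaml is never consulted.
default_result = ollama.embeddings(model="nomic-embed-text", prompt="hello world")

# Possible workaround: pass the host explicitly. Note the native ollama API
# host has no "/v1" suffix, unlike the OpenAI-compatible api_base value.
client = Client(host="http://192.168.50.13:11434")  # host taken from the reporter's config
result = client.embeddings(model="nomic-embed-text", prompt="hello world")
print(len(result["embedding"]))
```

If that reading is right, setting OLLAMA_HOST in the environment may work around the issue even without changing settings.yaml, but the configured api_base would still be ignored.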