TheAiSingularity / graphrag-local-ollama

Local model support for Microsoft's graphrag using ollama (llama3, mistral, gemma2, phi3) - LLM & embedding extraction

Changing the embedding model api_base in the config file has no effect #17

Open · liuzyong opened this issue 4 months ago

liuzyong commented 4 months ago

I changed the embedding model name and api_base in the config file:

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: nomic_embed_text
    api_base: http://192.168.130.19:11434/api

Why does it throw an error when I run the code?

15:21:24,876 graphrag.llm.openai.create_openai_client INFO Creating OpenAI client base_url=http://192.168.130.19:11434/api
15:21:25,554 graphrag.index.llm.load_llm INFO create TPM/RPM limiter for nomic_embed_text: TPM=0, RPM=0
15:21:25,554 graphrag.index.llm.load_llm INFO create concurrency limiter for nomic_embed_text: 25
15:21:25,561 graphrag.index.verbs.text.embed.strategies.openai INFO embedding 116 inputs via 116 snippets using 8 batches. max_batch_size=16, max_tokens=8191
15:21:36,818 httpx INFO HTTP Request: POST http://127.0.0.1:11434/api/embeddings "HTTP/1.1 503 Service Unavailable"
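
Two things stand out in that log: the OpenAI client is created with the configured base_url (http://192.168.130.19:11434/api), yet the actual embedding POST goes to http://127.0.0.1:11434/api/embeddings and gets a 503. A quick way to rule out the remote endpoint itself is to POST to it directly, outside graphrag. A minimal sketch (note that ollama's registry publishes the model under the hyphenated name nomic-embed-text, while the config above uses underscores):

```python
# Sketch: call the configured remote embeddings endpoint directly,
# bypassing graphrag, to see whether it answers at all.
import httpx

resp = httpx.post(
    "http://192.168.130.19:11434/api/embeddings",  # api_base from settings.yaml
    json={"model": "nomic-embed-text", "prompt": "hello"},  # hyphenated registry name
    timeout=30.0,
)
print(resp.status_code, resp.text[:200])
```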
TheAiSingularity commented 4 months ago

(quoting the config and error log from the original post)

Is the ollama server running?
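
One way to answer that for both the local default and the configured host, as a minimal sketch (GET /api/tags is ollama's model-listing endpoint):

```python
# Sketch: liveness check for both candidate ollama hosts.
import httpx

for host in ("http://127.0.0.1:11434", "http://192.168.130.19:11434"):
    try:
        r = httpx.get(f"{host}/api/tags", timeout=5.0)
        models = [m["name"] for m in r.json().get("models", [])]
        print(host, r.status_code, models)
    except httpx.HTTPError as exc:
        print(host, "unreachable:", exc)
```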

liuzyong commented 4 months ago

Proxying to http://192.168.130.19:11434/api through nginx works fine, so I guess the settings in the yaml file are not taking effect.
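
That guess fits the log: the client is created with the configured base_url, yet the embedding request still goes to 127.0.0.1. If this repo's embedding patch calls the ollama Python client internally (an assumption, not verified here), that client defaults to http://127.0.0.1:11434 and typically honors the OLLAMA_HOST environment variable, which would explain why api_base is ignored. A hedged workaround sketch:

```python
# Hedged sketch: point the ollama Python client at the remote server.
# Assumes (not confirmed) that the indexing pipeline embeds via this
# client rather than the OpenAI-style api_base from settings.yaml.
import os

# Set before the graphrag pipeline starts; the ollama client reads it.
os.environ["OLLAMA_HOST"] = "http://192.168.130.19:11434"

# Sanity check with an explicit host, independent of the env var:
from ollama import Client

client = Client(host="http://192.168.130.19:11434")
resp = client.embeddings(model="nomic-embed-text", prompt="ping")
print(len(resp["embedding"]))  # embedding dimension if the call succeeds
```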

zhaoxp-xyz commented 3 months ago

15:21:36,818 httpx INFO HTTP Request: POST http://127.0.0.1:11434/api/embeddings "HTTP/1.1 503 Service Unavailable"

So is your ollama actually fine?