severian42 / GraphRAG-Local-UI

GraphRAG using Local LLMs - Features robust API and multiple apps for Indexing/Prompt Tuning/Query/Chat/Visualizing/Etc. This is meant to be the ultimate GraphRAG/KG local LLM app.

Using another LLM provider, but the app still tries to access Ollama #77

Open rickywu opened 3 months ago

rickywu commented 3 months ago

I'm using Xinference and have changed .env and settings.yaml, but starting app.py still produces this error:

Exception while fetching openai_chat models: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /v1/models (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f153cccf9b0>: Failed to establish a new connection: [Errno 111] Connection refused'))
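
A quick way to check the endpoint independently of the app is to hit the OpenAI-compatible model list directly (a minimal Python sketch using requests; the /v1/models route is standard for OpenAI-compatible servers such as Xinference):

import requests

# The base URL from .env, not the Ollama default shown in the traceback.
api_base = "http://172.17.22.174:9997/v1"

# OpenAI-compatible servers expose GET /v1/models.
resp = requests.get(f"{api_base}/models", timeout=10)
resp.raise_for_status()
print([m["id"] for m in resp.json()["data"]])

If this prints your Xinference models, the server itself is fine and the problem is that app.py never picks up LLM_API_BASE.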

.env

LLM_PROVIDER=openai
LLM_API_BASE=http://172.17.22.174:9997/v1
LLM_MODEL='Qwen1.5-14B-Chat-GPTQ-Int4'
LLM_API_KEY=''

EMBEDDINGS_PROVIDER=openai
EMBEDDINGS_API_BASE=http://172.17.22.174:9997/v1
EMBEDDINGS_MODEL='m3e-base'
EMBEDDINGS_API_KEY=''
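
The traceback is consistent with the model-fetching code never seeing LLM_API_BASE and falling back to a hard-coded Ollama default. A minimal sketch of that failure mode (the variable names mirror the .env above, but this is not the actual app.py code):

import os
from dotenv import load_dotenv  # python-dotenv

load_dotenv()  # if this step is skipped, none of the .env values are read

# Hypothetical fallback: with LLM_API_BASE unset, the Ollama default wins,
# which is exactly the host and port in the traceback above.
api_base = os.getenv("LLM_API_BASE", "http://localhost:11434/v1")
print(api_base)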

settings.yaml:

llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: Qwen1.5-14B-Chat-GPTQ-Int4
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: http://172.17.22.174:9997/v1
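
One thing to verify in settings.yaml: api_key: ${GRAPHRAG_API_KEY} only resolves if that environment variable is actually set. Roughly, a loader fills such placeholders like this (a sketch, not GraphRAG's actual loader):

import os
import yaml  # PyYAML

with open("settings.yaml") as f:
    raw = f.read()

# ${VAR} placeholders are filled from the environment; os.path.expandvars
# leaves an unset variable as the literal placeholder text.
settings = yaml.safe_load(os.path.expandvars(raw))
print(settings["llm"]["api_base"])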
Tovi163 commented 3 months ago

Exception while fetching openai_chat models: HTTPConnectionPool(host='localhost', port=11434)

By default the request goes to 127.0.0.1:11434 (the Ollama default), but your api_base points to http://172.17.22.174:9997. Two options:

  1. Make Xinference listen on 127.0.0.1:11434, or
  2. change the GraphRAG-Local-UI source code to point to http://172.17.22.174:9997 (see the sketch below).
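
If the app constructs its client with the openai Python package, option 2 amounts to passing the custom base URL through (a sketch assuming the v1-style client; Xinference generally does not validate the key, so a placeholder works):

from openai import OpenAI

# Point the OpenAI-compatible client at Xinference instead of Ollama.
client = OpenAI(
    base_url="http://172.17.22.174:9997/v1",
    api_key="sk-placeholder",  # assumed: the key is not checked server-side
)

# The same /v1/models call that fails in the traceback above.
for model in client.models.list():
    print(model.id)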

@rickywu