GraphRAG using Local LLMs - Features robust API and multiple apps for Indexing/Prompt Tuning/Query/Chat/Visualizing/Etc. This is meant to be the ultimate GraphRAG/KG local LLM app.
MIT License
Use another LLM provider but still access ollama #77
I'm using xinference and have changed .env and settings.yaml,
but when I start app.py I still get an error like this:
Exception while fetching openai_chat models: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /v1/models (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f153cccf9b0>: Failed to establish a new connection: [Errno 111] Connection refused'))
settings.yaml:
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: Qwen1.5-14B-Chat-GPTQ-Int4
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: http://172.17.22.174:9997/v1
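The traceback is worth reading closely: the failing request goes to localhost:11434, which is Ollama's default port, not to the api_base configured in the llm block above, so app.py appears to be taking its model-list endpoint from somewhere other than settings.yaml. As a quick sanity check, a small sketch like the one below (it only assumes the two URLs already shown in this report: the xinference api_base and the Ollama default from the traceback) can confirm which OpenAI-compatible endpoint actually answers /v1/models.

import requests

# Endpoints taken from the report above: the xinference api_base configured
# in settings.yaml, and Ollama's default address that appears in the traceback.
ENDPOINTS = {
    "xinference (settings.yaml api_base)": "http://172.17.22.174:9997/v1/models",
    "ollama default (from the traceback)": "http://localhost:11434/v1/models",
}

for name, url in ENDPOINTS.items():
    try:
        resp = requests.get(url, timeout=5)
        resp.raise_for_status()
        # OpenAI-compatible servers return {"object": "list", "data": [{"id": ...}, ...]}
        models = [m.get("id") for m in resp.json().get("data", [])]
        print(f"{name}: reachable, models = {models}")
    except requests.RequestException as exc:
        print(f"{name}: NOT reachable ({exc})")

If only the Ollama URL fails while the xinference URL lists your model, the endpoint itself is fine, and the thing to change is whatever setting app.py reads for its model-list fetch (not just the GraphRAG llm block), pointing it at the xinference address instead of the Ollama default.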