Open albertimff opened 5 hours ago
Do not use llm_model_func
```python
from lightrag import LightRAG
from lightrag.llm import ollama_model_complete, openai_embedding
from lightrag.utils import EmbeddingFunc

# Initialize the LightRAG instance
rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=ollama_model_complete,
    llm_model_name="qwen2:32b",
    llm_model_max_token_size=11000,
    llm_model_kwargs={"host": "http://localhost:11434", "options": {"num_ctx": 11000}},
    embedding_func=EmbeddingFunc(
        embedding_dim=1024,
        max_token_size=8192,
        func=lambda texts: openai_embedding(
            texts, model="bge-m3", base_url="http://127.0.0.1:9997/v1", api_key="xinference"
        ),
    ),
)
```
How do I adjust QueryParam's parameters? Thanks!
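You normally don't edit the library source for this: `QueryParam` is passed per query via the `param` argument of `rag.query()`. A minimal sketch, assuming the `rag` instance constructed above and the `QueryParam` field names (`mode`, `top_k`, `only_need_context`) as defined in the LightRAG version current at the time of this issue — check the `QueryParam` dataclass in your installed copy (`lightrag/base.py`) for the exact fields:

```python
from lightrag import QueryParam

# Field names below are assumptions based on the QueryParam dataclass;
# verify them against your installed LightRAG version.
param = QueryParam(
    mode="hybrid",            # retrieval mode: "naive", "local", "global", or "hybrid"
    top_k=60,                 # how many entities/relations to retrieve
    only_need_context=False,  # True returns the retrieved context without an LLM answer
)

# Pass the parameters at query time instead of editing lightrag.py.
print(rag.query("What are the top themes in this story?", param=param))
```

Because the parameters travel with each call, different queries can use different modes without touching the library code.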
The code above already sets the parameters the way I intend.
I adjusted the parameters in lightrag.py, but when I run lightrag_openai_compatible_demo.py, the terminal output and the log show that the parameters didn't change.
WHY?
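A likely cause (an assumption, since the issue doesn't say how LightRAG was installed): the demo script imports the pip-installed copy of the package from `site-packages`, not the `lightrag.py` you edited in your source checkout. You can check which copy Python actually loads:

```shell
# Print the file path of the lightrag module that Python imports;
# if it points into site-packages, edits to your local checkout are ignored.
python -c "import lightrag; print(lightrag.__file__)"
```

If it resolves to `site-packages`, either edit that copy, or install your checkout in editable mode (`pip install -e .`) so local changes take effect. Passing a `QueryParam` at call time avoids the problem entirely.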