HKUDS / LightRAG

"LightRAG: Simple and Fast Retrieval-Augmented Generation"
https://arxiv.org/abs/2410.05779
MIT License

How to set LLM parameters #194

Open albertimff opened 5 hours ago

albertimff commented 5 hours ago

I adjusted the parameters in lightrag.py (see screenshot), but when I run lightrag_openai_compatible_demo.py, the terminal and the log show that the parameters did not change (see screenshots).

Why?

xldistance commented 4 hours ago

Do not use the default llm_model_func; pass your own model settings when you construct the LightRAG instance:

import os

from lightrag import LightRAG
from lightrag.llm import ollama_model_complete, openai_embedding
from lightrag.utils import EmbeddingFunc

WORKING_DIR = "./dickens"  # same working directory as the demo script
os.makedirs(WORKING_DIR, exist_ok=True)

# Initialize the LightRAG instance; the constructor arguments override the
# defaults declared in lightrag.py
rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=ollama_model_complete,
    llm_model_name="qwen2:32b",
    llm_model_max_token_size=11000,
    llm_model_kwargs={"host": "http://localhost:11434", "options": {"num_ctx": 11000}},
    embedding_func=EmbeddingFunc(
        embedding_dim=1024,
        max_token_size=8192,
        func=lambda texts: openai_embedding(
            texts, model="bge-m3", base_url="http://127.0.0.1:9997/v1", api_key="xinference"
        ),
    ),
)
albertimff commented 3 hours ago

How do I adjust QueryParam's parameters? Thanks!

xldistance commented 3 hours ago

> How do I adjust QueryParam's parameters? Thanks!

The code above already sets those parameters.

albertimff commented 3 hours ago

(see screenshot) This is what I meant.
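
For reference, a minimal sketch of how query-time options are typically passed via QueryParam rather than edited in lightrag.py (assuming a LightRAG version from around this issue, where QueryParam exposes fields such as mode, top_k, and max_token_for_text_unit; check lightrag/base.py for the exact field names in your version):

from lightrag import QueryParam

# Query-time parameters are passed per query, not set in lightrag.py.
# mode can be "naive", "local", "global", or "hybrid".
param = QueryParam(
    mode="hybrid",
    top_k=60,                      # number of entities/relations to retrieve
    max_token_for_text_unit=4000,  # token budget for retrieved text chunks
)

result = rag.query("What are the top themes in this story?", param=param)
print(result)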