Issue Description
I created a LightRAG instance using OpenAI. I am able to query it and retrieve context. However, I noticed that the retrieved context is unstable (though still relevant to the query) for the same query on each start of LightRAG within my app: it changes slightly (e.g., a single relationship or a single entity differs) with the exact same query params.

I used the dunzhang/stella_en_400M_v5 embedding model during both creation and querying.

Note: My app also uses async at the server level.

Note: The LLM params for high-level and low-level keyword extraction are constant, and I checked the extracted keywords across different app starts; they are identical for the same query.
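For reference, below is a minimal sketch of how I capture the retrieved context so it can be diffed across app starts. The working directory, the bare LightRAG(working_dir=...) constructor (embedding and LLM wiring omitted), and the only_need_context flag on QueryParam are assumptions and may need to be adapted to the actual setup and to what lightrag-hku==0.0.8 exposes.

```python
import hashlib
from lightrag import LightRAG, QueryParam

WORKING_DIR = "./lightrag_cache"  # hypothetical path; reuse the dir the app already populates


def get_context(rag: LightRAG, query: str) -> str:
    # only_need_context (if available in this version) returns the assembled
    # retrieval context instead of a generated answer, which is what drifts here.
    return rag.query(query, param=QueryParam(mode="hybrid", only_need_context=True))


if __name__ == "__main__":
    # Embedding/LLM functions are omitted; plug in the same stella_en_400M_v5
    # embedding_func and OpenAI llm_model_func used when the index was built.
    rag = LightRAG(working_dir=WORKING_DIR)
    query = "the exact query that shows unstable context"
    context = get_context(rag, query)
    print("context sha256:", hashlib.sha256(context.encode()).hexdigest())
    with open("context_dump.txt", "w", encoding="utf-8") as f:
        f.write(context)  # diff this file across separate app starts
```

Diffing the saved context files (or comparing the hashes) from two separate app starts shows exactly which entity or relationship changes, independent of the generated answer.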
Environment
pypandoc==1.13
boto3==1.35.32
pydantic_core==2.23.4
pydantic==2.9.2
passlib==1.7.4
numpy==1.26.4
pandas==2.2.1
python_dateutil==2.8.2
pytz==2024.1
Requests==2.31.0
text_generation==0.6.1
faiss-cpu==1.7.4
sutime==1.0.1
fuzzywuzzy==0.18.0
transformers==4.37.2
haversine
pyarrow
cacheout
termcolor
scikit-learn
regex
nltk
lightrag-hku==0.0.8
aioboto3==13.2.0
ollama==0.3.3
nano-vectordb==0.0.4.1
openai>=0.27.0
neo4j>=5.7.0
pybind11>=2.10.0
torch>=1.13.1
tiktoken>=0.3.0
networkx>=3.0
scipy>=1.10.1
spacy>=3.5.2
py2neo>=2021.2.3
nest-asyncio>=1.5.6
LightRAG Settings:
Any help would be appreciated.