Tom1009840152 opened 2 weeks ago
I am also facing the same issue. It would help if someone could share a working .env file, as I believe we also need to add the GraphRAG config.
Hi, there is a high chance that these environment variables are not set properly:
# settings for GraphRAG
GRAPHRAG_API_KEY=openai_key
GRAPHRAG_LLM_MODEL=gpt-4o-mini
GRAPHRAG_EMBEDDING_MODEL=text-embedding-3-small
They go in the .env file. You can follow the procedure described here: https://stackoverflow.com/questions/48607302/using-env-files-to-set-environment-variables-in-windows
Thank you for your answer. I have made the configuration and can use RAG normally, but I cannot use GraphRAG. I guess I need to configure GraphRAG separately?
Yes, you need to set the environment variables above. Due to a limitation in our current implementation, these GraphRAG env vars are not read automatically from .env. We will work on an easier way to set GraphRAG parameters in the UI in the next release.
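In the meantime, one possible workaround is to export the GraphRAG variables yourself before the app starts. Below is a minimal sketch of a hypothetical helper (it is not part of kotaemon) that reads a .env-style file and exports only the `GRAPHRAG_*` entries:

```python
import os

def export_graphrag_vars(env_path=".env"):
    """Read a .env-style file and export only GRAPHRAG_* entries,
    since they are not loaded from .env automatically.
    Hypothetical helper, not part of the project."""
    with open(env_path) as fh:
        for raw in fh:
            line = raw.strip()
            # skip blank lines, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip().strip('"').strip("'")
            if key.startswith("GRAPHRAG_"):
                os.environ[key] = value
```

Call it before launching `python app.py`, or simply prefer `dotenv run -- python app.py`, which achieves the same thing for all variables.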
What should go in GRAPHRAG_API_KEY=openai_key? Should I use the same Azure OpenAI key that I am using for AzureOpenAI?
Got the same error on Mac. Setting up the environment variables solved the issue, using dotenv run -- python app.py
@Laksh-star what value did you provide for GRAPHRAG_API_KEY? I am using Azure OpenAI.
I used OpenAI, so I gave that key as the value for GRAPHRAG_API_KEY. Logically you should use your Azure OpenAI key, but the sample .env file here gives this:
# settings for GraphRAG
GRAPHRAG_API_KEY=openai_key
GRAPHRAG_LLM_MODEL=gpt-4o-mini
GRAPHRAG_EMBEDDING_MODEL=text-embedding-3-small
You can try with Azure first.
I put my Azure OpenAI key in place of GRAPHRAG_API_KEY=openai_key, but it didn't work.
For Azure OpenAI, please follow https://microsoft.github.io/graphrag/posts/config/env_vars/ It is a bit more complicated than plain OpenAI.
Also use the command suggested here https://github.com/Cinnamon/kotaemon/issues/140#issuecomment-2315706967 to load from the .env file at startup.
Or, alternatively, https://pypi.org/project/python-dotenv-run/
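Based on the GraphRAG env-var docs linked above, an Azure OpenAI setup typically needs several extra variables beyond the plain-OpenAI trio. A hedged sketch of the .env section (every value below is a placeholder; check the linked docs for the exact names supported by your graphrag version):

```
# settings for GraphRAG with Azure OpenAI (sketch; values are placeholders)
GRAPHRAG_API_KEY=<your-azure-openai-key>
GRAPHRAG_API_BASE=https://<your-resource>.openai.azure.com
GRAPHRAG_API_VERSION=<your-api-version>
GRAPHRAG_LLM_TYPE=azure_openai_chat
GRAPHRAG_LLM_MODEL=gpt-4o-mini
GRAPHRAG_LLM_DEPLOYMENT_NAME=<your-chat-deployment>
GRAPHRAG_EMBEDDING_TYPE=azure_openai_embedding
GRAPHRAG_EMBEDDING_MODEL=text-embedding-3-small
GRAPHRAG_EMBEDDING_DEPLOYMENT_NAME=<your-embedding-deployment>
```

Note that with Azure the deployment names (not just the model names) matter, which is part of why this is trickier than plain OpenAI.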
Hi - can we use a local / Ollama embedding model instead of using something requiring an API key?
@ryansh1x yes you can, anything that uses an OpenAI-compatible API will work. You just need to add your embedding endpoint and then create a new file collection that uses that endpoint. Also, please consider opening a separate issue so that others with the same question can refer to it.
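To illustrate the "OpenAI-compatible API" point: Ollama serves an OpenAI-style endpoint at `/v1`, so an embeddings call is just an ordinary POST with the usual payload. A minimal stdlib-only sketch (the port is Ollama's default, and the request is built but not sent here, since it needs a running server):

```python
import json
import urllib.request

# Ollama's default OpenAI-compatible endpoint (assumes a local install)
OLLAMA_URL = "http://localhost:11434/v1/embeddings"

def build_embedding_request(model, text):
    """Build (but do not send) an OpenAI-style embeddings request
    aimed at a local Ollama server."""
    payload = json.dumps({"model": model, "input": text}).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Ollama ignores the key, but OpenAI-style clients send one
            "Authorization": "Bearer ollama",
        },
    )

req = build_embedding_request("nomic-embed-text", "hello")
# With Ollama running and the model pulled,
# urllib.request.urlopen(req) returns the embedding JSON.
```

Any OpenAI SDK pointed at `http://localhost:11434/v1` should work the same way, which is what makes the endpoint usable as a drop-in embedding backend.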
Will do shortly; I appreciate your input. Once I've created the new issue, I'd appreciate a more beginner-level pointer. If I can get the graph portion of this working locally, I'll be able to argue for switching to this for the majority of my use case.
@ryansh1x sure thing, feel free to try it out and submit any issue or question. I'm not familiar with setting up the graph portion as I'm also a noob myself, but I'm sure other folks can help you out. Cheers!
Note that GraphRAG with the Ollama OpenAI endpoint is also a bit tricky. Please wait while we streamline this process for mainstream users.
Had similar issues with Ollama and GraphRAG. Pinning the graphrag version to 0.3.2 fixed them for me; I can now use Ollama with GraphRAG. Edit this line: "RUN pip install graphrag future" to "RUN pip install graphrag==0.3.2 future", then rebuild the image.
GraphRAG chat does not work.
Could you please share your GraphRAG variables from the .env?
I have set the following, but it is still not working for me:
# settings for GraphRAG
GRAPHRAG_API_KEY=openai_key
GRAPHRAG_LLM_MODEL=llama3:70b
GRAPHRAG_EMBEDDING_MODEL=nomic-embed-text
Hello, I also have the same problem. Replacing the line "RUN pip install graphrag future" with "RUN pip install graphrag==0.3.2 future" and rebuilding the image did not resolve it; I get the same error as Tom1009840152. Does anyone know a solution, please?
I encountered the same problem on a MacBook Pro M1, and the method above was very effective. The issue lies in the env parameters not being read correctly; I hope the next version fixes this bug. Also, I found that with some reasoning methods, GraphRAG information cannot be read.
Description
Hi, first of all, thank you for this great project! I have been experimenting with it and encountered an issue that I hope you can help me with.
I cloned the repository and followed the instructions to upload and index a PDF file. The file was successfully uploaded and processed into chunks, but the process failed at the create_base_entity_graph step. Below are the details:
Reproduction steps
Screenshots
No response
Logs
Browsers
Chrome
OS
Windows
Additional information
No response