Closed Jumbo-zczlbj0 closed 4 months ago
GRAPHRAG_API_KEY=
To: Jumbo: how do I run chmod +x LM_Studio-0.2.27.AppImage on Windows 10?
I cannot find the LM_Studio-0.2.27.AppImage file in the LM Studio installation directory. Is this a *.exe file?
Thanks
To: Jumbo: I used your settings.yaml and ran LM Studio, but python -m graphrag.index --root ./ragtest does not run correctly.
Any suggestions about settings.yaml?
In the logs, the api_key is not recognized: "llm": { "api_key": "REDACTED, length 6", "type": "openai_chat", "model": "gemma2:latest",
Thanks
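For reference, a minimal sketch of the llm section of settings.yaml for a local OpenAI-compatible server. The port (11434, Ollama's default) and the api_base value are assumptions, not taken from the thread; adjust them to your setup.

```yaml
# Hedged sketch of the llm section of settings.yaml for a local
# OpenAI-compatible server; port and model name are assumptions.
llm:
  api_key: ${GRAPHRAG_API_KEY}  # set GRAPHRAG_API_KEY to any non-empty dummy value in .env
  type: openai_chat
  model: gemma2:latest
  api_base: http://localhost:11434/v1
```

A local server does not validate the key, but GraphRAG still expects the variable to be set, which is why an empty GRAPHRAG_API_KEY= line in .env can surface as "api_key is not recognized" in the logs.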
I use LM Studio on an Ubuntu system. You can download the Windows version from the official website: https://lmstudio.ai
I use Windows 10. On Windows I just run LM Studio directly, so I think that is OK.
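To clarify the chmod question: chmod is a Unix file-permission command, so it only applies to the Linux AppImage build, never to the Windows .exe installer. A minimal demonstration of what it does, using a stand-in file rather than the real AppImage:

```shell
# chmod is a Unix permission command; it applies to the Linux AppImage,
# not to the Windows installer (.exe). Demo with a stand-in script:
printf '#!/bin/sh\necho hello\n' > demo.AppImage  # stand-in for the real AppImage
chmod +x demo.AppImage   # mark the file executable (Linux/macOS only)
./demo.AppImage          # prints: hello
rm demo.AppImage
```

On Windows there is no equivalent step: double-clicking the .exe installer (or running it from a terminal) is all that is needed.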
I don't know if you have installed Ollama. I referred to this video: https://youtu.be/BLyGDTNdad0?si=Pvx1og9aD_5fmZa3
I installed GraphRAG in Docker to avoid this bug.
I am using the official NVIDIA Docker image (CUDA 12.2, Ubuntu 22.04, devel).
By the way, LM Studio can be replaced with llama.cpp: https://github.com/ggerganov/llama.cpp
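The Docker setup above can be sketched roughly as follows. The exact image tag is an assumption inferred from "CUDA 12.2, Ubuntu 22.04, devel"; the mount path and install steps are illustrative, not taken from the thread.

```shell
# Hedged sketch: running GraphRAG inside the official NVIDIA CUDA devel image.
# The image tag is an assumption based on "CUDA 12.2, Ubuntu 22.04, devel".
IMAGE=nvidia/cuda:12.2.0-devel-ubuntu22.04
echo "using image: $IMAGE"
# docker run --gpus all -it -v "$PWD/ragtest:/ragtest" "$IMAGE" bash
# Inside the container:
#   apt-get update && apt-get install -y python3 python3-pip
#   pip3 install graphrag
#   python3 -m graphrag.index --root /ragtest
```

Running indexing inside a container isolates the Python environment, which sidesteps the "works after recreating the environment" flakiness described later in this issue.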
Hi, Jumbo: thanks.
In the Ollama directory, I ran ollama pull gemma2:9b,
then ollama run gemma2:9b; it works.
But the problem is that when I run the command below,
curl http://localhost:11434/v1/chat/completions
the result is: 404 page not found.
This means the gemma2 endpoint is not recognized.
Please refer to the official API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md
For example: curl http://localhost:11434/api/chat -d '{ "model": "gemma2:latest", "messages": [ { "role": "user", "content": "hi" } ] }'
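One plausible reason for the 404 above, offered as a guess rather than a confirmed diagnosis: the chat endpoints expect a POST with a JSON body, while a bare curl URL with no -d flag issues a GET, which the server can answer with "404 page not found". A sketch that builds the payload separately:

```shell
# Hedged sketch: chat endpoints expect a POST with a JSON body; a bare
# `curl URL` issues a GET, which can come back as "404 page not found".
payload='{"model":"gemma2:latest","messages":[{"role":"user","content":"hi"}]}'
echo "$payload"
# With an Ollama server running locally, send it as a POST
# (curl's -d flag switches the request method to POST):
# curl http://localhost:11434/api/chat \
#   -H "Content-Type: application/json" -d "$payload"
```

If the POST form also returns 404, the Ollama version may simply predate the endpoint, and upgrading is worth trying.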
Consolidating alternate model issues here: https://github.com/microsoft/graphrag/issues/657
Describe the bug
Sometimes, running the command
python3 -m graphrag.index --root ./ragtest
results in the error "Errors occurred during the pipeline run, see logs for more details," even though no configuration changes were made. This issue may occur after restarting the computer. I have tried deleting the original environment and creating a new one; sometimes it works well, and sometimes it doesn't. The previous setup was running smoothly and had successfully answered my questions. My Ollama is functioning properly and the model has been downloaded.
I am a beginner, so I might not understand everything fully. Please bear with me.
settings.yaml:
log: (ragtest/output/20240716-034934/reports/indexing-engine.log)
indexing-engine.log
Steps to reproduce
Run LM Studio
chmod +x LM_Studio-0.2.27.AppImage
python3 -m graphrag.index --root ./ragtest
Expected Behavior
I expect the LLM and embedding model to process my data correctly.
GraphRAG Config Used
No response
Logs and screenshots
No response
Additional Information