microsoft / graphrag

A modular graph-based Retrieval-Augmented Generation (RAG) system
https://microsoft.github.io/graphrag/
MIT License

[BUG]: Errors occurred during the pipeline run, see logs for more details. #583

Closed Jumbo-zczlbj0 closed 4 months ago

Jumbo-zczlbj0 commented 4 months ago

Describe the bug

Sometimes, running the command python3 -m graphrag.index --root ./ragtest results in the error "Errors occurred during the pipeline run, see logs for more details," even though no configuration changes were made. The issue sometimes appears after restarting the computer. I have tried deleting the original environment and creating a new one; sometimes it works well, and sometimes it doesn't.

The model was previously running smoothly and had successfully answered my questions. My Ollama is functioning properly, and the model has been downloaded.

I am a beginner, so I might not understand everything fully. Please bear with me.

Screenshot from 2024-07-16 04-19-19

settings.yaml: Screenshot from 2024-07-16 04-12-24 Screenshot from 2024-07-16 04-12-15

log: (ragtest/output/20240716-034934/reports/indexing-engine.log)

Screenshot from 2024-07-16 04-16-05

Screenshot from 2024-07-16 04-16-15

indexing-engine.log

Steps to reproduce

  1. chmod +x LM_Studio-0.2.27.AppImage

  2. Run LM Studio

  3. python3 -m graphrag.index --root ./ragtest

Expected Behavior

I expect the LLM and embedding to process my data correctly

GraphRAG Config Used

No response

Logs and screenshots

No response

Additional Information

Jumbo-zczlbj0 commented 4 months ago
  1. /ragtest/.env:

GRAPHRAG_API_KEY=

  2. ragtest/output/20240716-035359/artifacts/stats.json:

Screenshot from 2024-07-16 04-25-14

myyourgit commented 4 months ago

To Jumbo: how do I run chmod +x LM_Studio-0.2.27.AppImage on Windows 10?

I cannot find an LM_Studio-0.2.27.AppImage file in the LM Studio installation directory. Is it a *.exe file?

Thanks

myyourgit commented 4 months ago

To Jumbo: I used your settings.yaml and ran LM Studio, but python -m graphrag.index --root ./ragtest could not run correctly.

Any suggestions about settings.yaml?

In the logs, the api_key is not recognized: "llm": { "api_key": "REDACTED, length 6", "type": "openai_chat", "model": "gemma2:latest",

Thanks
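For context, GraphRAG's settings.yaml normally pulls the key from the environment, and local OpenAI-compatible servers typically just need a non-empty placeholder value (the log's "REDACTED, length 6" suggests a 6-character value was picked up). A minimal sketch of the llm section, assuming LM Studio's default port 1234 (for Ollama the OpenAI-compatible base would be http://localhost:11434/v1 instead):

```yaml
llm:
  api_key: ${GRAPHRAG_API_KEY}  # set GRAPHRAG_API_KEY in .env to any non-empty dummy value
  type: openai_chat
  model: gemma2:latest
  api_base: http://localhost:1234/v1  # assumed LM Studio default; adjust to your server
```

The exact api_base value is an assumption here; the point is that the type openai_chat sends OpenAI-style requests to whatever local endpoint api_base names.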

Jumbo-zczlbj0 commented 4 months ago

To Jumbo: how do I run chmod +x LM_Studio-0.2.27.AppImage on Windows 10?

I cannot find an LM_Studio-0.2.27.AppImage file in the LM Studio installation directory. Is it a *.exe file?

Thanks

I use LM Studio on an Ubuntu system. You can download the Windows version from the official website: https://lmstudio.ai

myyourgit commented 4 months ago

To Jumbo: how do I run chmod +x LM_Studio-0.2.27.AppImage on Windows 10? I cannot find an LM_Studio-0.2.27.AppImage file in the LM Studio installation directory. Is it a *.exe file? Thanks

I use LM Studio on an Ubuntu system. You can download the Windows version from the official website: https://lmstudio.ai

I use Windows 10. On Windows, I just run LM Studio, and I think that is OK.

Jumbo-zczlbj0 commented 4 months ago

To Jumbo: I used your settings.yaml and ran LM Studio, but python -m graphrag.index --root ./ragtest could not run correctly.

Any suggestions about settings.yaml?

In the logs, the api_key is not recognized: "llm": { "api_key": "REDACTED, length 6", "type": "openai_chat", "model": "gemma2:latest",

Thanks

I don't know if you have installed Ollama. I referred to this video: https://youtu.be/BLyGDTNdad0?si=Pvx1og9aD_5fmZa3

Jumbo-zczlbj0 commented 4 months ago

I installed GraphRAG in Docker to avoid this bug.

I am using the official NVIDIA Docker image (CUDA 12.2, Ubuntu 22.04, devel).

By the way, LM_Studio can be replaced with llama.cpp: https://github.com/ggerganov/llama.cpp

myyourgit commented 4 months ago

To Jumbo: I used your settings.yaml and ran LM Studio, but python -m graphrag.index --root ./ragtest could not run correctly. Any suggestions about settings.yaml? In the logs, the api_key is not recognized: "llm": { "api_key": "REDACTED, length 6", "type": "openai_chat", "model": "gemma2:latest", Thanks

I don't know if you have installed Ollama. I referred to this video: https://youtu.be/BLyGDTNdad0?si=Pvx1og9aD_5fmZa3

Hi Jumbo, thanks.

In the Ollama directory, I run ollama pull gemma2:9b

and ollama run gemma2:9b works.

But the problem is that when running the command below

curl http://localhost:11434/v1/chat/completions

the result is: 404 page not found

This suggests the gemma2 endpoint is not being served.

Jumbo-zczlbj0 commented 4 months ago


Please refer to the official API documentation: https://github.com/ollama/ollama/blob/main/docs/api.md

For example: curl http://localhost:11434/api/chat -d '{ "model": "gemma2:latest", "messages": [ { "role": "user", "content": "hi" } ] }'
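The same native /api/chat request can be sketched in Python; the payload structure follows the Ollama API docs linked above, while the model name and prompt are illustrative:

```python
import json
import urllib.request

# Request body for Ollama's native chat endpoint (/api/chat).
payload = {
    "model": "gemma2:latest",  # illustrative model name; use whatever you pulled
    "messages": [{"role": "user", "content": "hi"}],
    "stream": False,  # request a single JSON response instead of a stream
}
body = json.dumps(payload).encode("utf-8")

# Passing data= makes this a POST request.
req = urllib.request.Request(
    "http://localhost:11434/api/chat",  # Ollama's default port
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment to send against a running Ollama instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["message"]["content"])
```

Note that GraphRAG's openai_chat type talks to the OpenAI-compatible /v1/chat/completions route, not /api/chat; a 404 on /v1/chat/completions may simply mean the installed Ollama predates its OpenAI-compatible API, in which case upgrading Ollama is worth trying.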

IMG_7056

natoverse commented 4 months ago

Consolidating alternate model issues here: https://github.com/microsoft/graphrag/issues/657