microsoft / graphrag

A modular graph-based Retrieval-Augmented Generation (RAG) system
https://microsoft.github.io/graphrag/
MIT License

[Issue]: I resolved the Error Invoking LLM. #747

Closed: peixikk closed this issue 4 months ago

peixikk commented 4 months ago

Is there an existing issue for this?

Describe the issue

The pipeline failed with the following traceback:

Traceback (most recent call last):
  File "/kaggle/working/graphrag-local-ollama/graphrag/llm/base/base_llm.py", line 55, in _invoke
    output = await self._execute_llm(input, **kwargs)

I ran into this error and tried many approaches without success. I happened to run the command !ollama run llama3.1:8b-instruct-q8_0 "你是" and only then discovered that the problem was that ollama serve had not been started.

In /kaggle/working/graphrag-local-ollama/graphrag/llm/base/base_llm.py, inside the try block of async def _invoke(self, input: TIn, **kwargs: Unpack[LLMInput]) -> LLMOutput[TOut]:, add the following helper functions:

import psutil
import subprocess
import time

def is_process_running(process_name):
    # Iterate over all running processes and look for a matching name
    for proc in psutil.process_iter(['pid', 'name']):
        if process_name.lower() in proc.info['name'].lower():
            return True
    return False

def start_ollama():
    # Launch ollama serve in the background and give it a few seconds to start
    command = "nohup ollama serve &"
    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    time.sleep(5)
    return process.pid
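
For clarity, here is a minimal sketch of how the two helpers above could be combined into a single check before the LLM request is made. The name ensure_ollama_running is purely illustrative and is not part of graphrag or graphrag-local-ollama; the sketch assumes the helpers above are defined in the same module and that the server process shows up under the name "ollama" in psutil.

def ensure_ollama_running(process_name="ollama", retries=3):
    # Hypothetical wrapper: start ollama serve only if no matching process
    # is found, then poll a few times to confirm it actually came up.
    if is_process_running(process_name):
        return True
    start_ollama()
    for _ in range(retries):
        if is_process_running(process_name):
            return True
        time.sleep(2)
    return False

A possible call site, before self._execute_llm(input, **kwargs) is awaited, would be:

if not ensure_ollama_running():
    raise RuntimeError("ollama serve could not be started")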

Alternatively, restart ollama serve when the traceback occurs:

Traceback (most recent call last):
  File "/kaggle/working/graphrag-local-ollama/graphrag/llm/base/base_llm.py", line 55, in _invoke
    output = await self._execute_llm(input, **kwargs)
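
Below is a rough sketch of this retry-on-failure alternative. The name call_llm_with_restart is an illustrative assumption, not graphrag code; it relies on the start_ollama helper from the snippet above and simply retries the call once after attempting to start the server.

async def call_llm_with_restart(execute_llm, input, **kwargs):
    # Hypothetical helper: try the LLM call once; if it raises (for example
    # because ollama serve is not running), start the server and retry once.
    try:
        return await execute_llm(input, **kwargs)
    except Exception:
        start_ollama()
        return await execute_llm(input, **kwargs)

Inside _invoke this could be used as output = await call_llm_with_restart(self._execute_llm, input, **kwargs).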


Steps to reproduce

No response

GraphRAG Config Used

No response

Logs and screenshots

No response

Additional Information

No response

natoverse commented 4 months ago

Linking to #657 and marking with community_support so that others can find this issue when looking into ollama use.