Is there an existing issue for this?
[X] I have checked #657 to validate if my issue is covered by community support
Describe the issue
Traceback (most recent call last):
  File "/kaggle/working/graphrag-local-ollama/graphrag/llm/base/base_llm.py", line 55, in _invoke
    output = await self._execute_llm(input, **kwargs)
I encountered this issue and tried many methods without success. I happened to run the command !ollama run llama3.1:8b-instruct-q8_0 "你是" and discovered that the problem was that ollama serve had not been started.
In /kaggle/working/graphrag-local-ollama/graphrag/llm/base/base_llm.py, inside async def _invoke(self, input: TIn, **kwargs: Unpack[LLMInput]) -> LLMOutput[TOut]:, add the following functions:
import subprocess
import time
import psutil

def is_process_running(process_name):
    # Iterate over all running processes and look for a matching name
    for proc in psutil.process_iter(['pid', 'name']):
        if process_name.lower() in proc.info['name'].lower():
            return True
    return False

def start_ollama():
    # Launch `ollama serve` in the background and give it a few seconds to come up
    command = "nohup ollama serve &"
    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    time.sleep(5)
    return process.pid
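For reference, a minimal sketch of how these helpers could be wired in before the LLM call; the wiring itself is a suggestion, not existing graphrag code, and "ollama" is the process name psutil reports for the server on Linux:

# Hypothetical guard placed before calling self._execute_llm(input, **kwargs):
# make sure the Ollama server process exists, and start it if it does not.
if not is_process_running("ollama"):
    start_ollama()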
Alternatively, restart ollama serve when the traceback occurs:
Traceback (most recent call last):
  File "/kaggle/working/graphrag-local-ollama/graphrag/llm/base/base_llm.py", line 55, in _invoke
    output = await self._execute_llm(input, **kwargs)
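A minimal sketch of that retry approach, assuming the helpers above; invoke_with_restart and call_llm are hypothetical names standing in for the real call to self._execute_llm:

# Hypothetical wrapper: if the LLM call fails because the server is down,
# restart ollama serve once and retry the call.
async def invoke_with_restart(call_llm):
    try:
        return await call_llm()
    except Exception:
        start_ollama()           # bring `ollama serve` back up
        return await call_llm()  # retry once after restarting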
Steps to reproduce
No response
GraphRAG Config Used
No response
Logs and screenshots
No response
Additional Information
No response