Open psmukhopadhyay opened 1 month ago
I have this problem too. Have you solved it yet?
This has to do with the async calls made while indexing. The graphrag directory in the repo should already have the correct edits, but in case it doesn't, you will want to update the file /Users/anima/GraphRAG-Ollama-UI-main/graphrag/graphrag/llm/openai/openai_completion_llm.py to await the async calls, like so:
```python
# Copyright (c) 2024 Microsoft Corporation.
# Licensed under the MIT License

"""A text-completion based LLM."""

import logging
from typing import Any, Dict

from typing_extensions import Unpack

from graphrag.llm.base import BaseLLM
from graphrag.llm.types import (
    CompletionInput,
    CompletionOutput,
    LLMInput,
)

from .openai_configuration import OpenAIConfiguration
from .types import OpenAIClientTypes
from .utils import get_completion_llm_args

log = logging.getLogger(__name__)


class OpenAICompletionLLM(BaseLLM[CompletionInput, CompletionOutput]):
    """A text-completion based LLM."""

    _client: OpenAIClientTypes
    _configuration: OpenAIConfiguration

    def __init__(self, client: OpenAIClientTypes, configuration: OpenAIConfiguration):
        super().__init__()
        self._client = client
        self._configuration = configuration

    async def _execute_llm(
        self,
        input: CompletionInput,
        **kwargs: Dict[str, Any],
    ) -> CompletionOutput | None:
        args = get_completion_llm_args(
            kwargs.get("model_parameters"), self._configuration
        )
        try:
            # Drop any duplicate "model" key so it is not passed twice.
            args.pop("model", None)
            completion = await self._client.completions.create(
                model=self._configuration.model,
                prompt=input,
                **args,
            )
            return completion.choices[0].text
        except Exception as e:
            raise RuntimeError(f"Error calling OpenAI API: {e}") from e
```
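To see why the missing `await` matters, here is a minimal, self-contained sketch (not graphrag code; `create_completion` is a hypothetical stand-in for the client call): without `await`, calling an async method just returns a coroutine object, so downstream code that expects a response with `.choices[0].text` blows up.

```python
import asyncio


async def create_completion(prompt: str) -> str:
    # Hypothetical stand-in for self._client.completions.create;
    # the real call performs network I/O against the API endpoint.
    await asyncio.sleep(0)
    return f"completion for: {prompt}"


async def main() -> None:
    # Without await, we get a coroutine object, not the response.
    not_awaited = create_completion("hello")
    print(type(not_awaited).__name__)  # coroutine
    not_awaited.close()  # silence the "never awaited" warning

    # With await, we get the actual result, as in the patched method.
    awaited = await create_completion("hello")
    print(awaited)  # completion for: hello


asyncio.run(main())
```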
The main change is awaiting the call, i.e. `completion = await self._client.completions.create(...)`. Also, just in case, make sure that in your settings.yaml the llm `type` and `provider` are named correctly, like so:
```yaml
llm:
  api_base: http://localhost:11434/v1
  api_key: ${GRAPHRAG_API_KEY}
  concurrent_requests: 25
  max_retries: 3
  max_tokens: 1024
  model: mistral-nemo:12b-instruct-2407-fp16
  model_supports_json: true
  provider: openai_chat
  temperature: 0.5
  type: openai_chat
```
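If you want a quick sanity check that `type` and `provider` are set consistently, here is a small sketch (not part of graphrag; it hand-parses the flat `key: value` block above for illustration, and a real settings.yaml should be read with a proper YAML parser):

```python
# Trimmed copy of the llm block above (assumed layout: two-space indent,
# one "key: value" pair per line).
settings = """\
llm:
  api_base: http://localhost:11434/v1
  api_key: ${GRAPHRAG_API_KEY}
  model: mistral-nemo:12b-instruct-2407-fp16
  provider: openai_chat
  type: openai_chat
"""


def parse_flat_block(text: str) -> dict:
    """Parse a flat two-level 'key: value' layout into a dict."""
    values = {}
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.endswith(":") or ":" not in stripped:
            continue  # skip blanks and section headers like "llm:"
        key, _, value = stripped.partition(":")
        values[key.strip()] = value.strip()
    return values


llm = parse_flat_block(settings)
# Both fields should name the OpenAI-compatible chat endpoint.
assert llm["type"] == "openai_chat" and llm["provider"] == "openai_chat"
print("settings look consistent:", llm["model"])
```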
I am using a local machine with Ubuntu 22.04 and Python 3.10. Indexing leads to the following every time: