severian42 / GraphRAG-Local-UI

GraphRAG using Local LLMs - Features robust API and multiple apps for Indexing/Prompt Tuning/Query/Chat/Visualizing/Etc. This is meant to be the ultimate GraphRAG/KG local LLM app.

ERROR error extracting graph #53

Open psmukhopadhyay opened 1 month ago

psmukhopadhyay commented 1 month ago

I am using a local machine with Ubuntu 22.04 and Python 3.10. Indexing fails every time with the following error:

08:55:25,379 root ERROR error extracting graph
Traceback (most recent call last):
  File "/home/psm/.local/lib/python3.10/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/home/psm/.local/lib/python3.10/site-packages/httpx/_transports/default.py", line 373, in handle_async_request
    resp = await self._pool.handle_async_request(req)
  File "/home/psm/.local/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request
    raise exc from None
  File "/home/psm/.local/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request
    response = await connection.handle_async_request(
  File "/home/psm/.local/lib/python3.10/site-packages/httpcore/_async/connection.py", line 101, in handle_async_request
    return await self._connection.handle_async_request(request)
  File "/home/psm/.local/lib/python3.10/site-packages/httpcore/_async/http11.py", line 143, in handle_async_request
    raise exc
  File "/home/psm/.local/lib/python3.10/site-packages/httpcore/_async/http11.py", line 113, in handle_async_request
    ) = await self._receive_response_headers(**kwargs)
  File "/home/psm/.local/lib/python3.10/site-packages/httpcore/_async/http11.py", line 186, in _receive_response_headers
    event = await self._receive_event(timeout=timeout)
  File "/home/psm/.local/lib/python3.10/site-packages/httpcore/_async/http11.py", line 224, in _receive_event
    data = await self._network_stream.read(
  File "/home/psm/.local/lib/python3.10/site-packages/httpcore/_backends/anyio.py", line 32, in read
    with map_exceptions(exc_map):
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/psm/.local/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ReadTimeout

Please suggest how to overcome it. 
bestCMC commented 1 month ago

I have this problem too. Have you solved it yet?

severian42 commented 1 month ago

This has to do with the async calls during indexing. The GraphRAG dir in the repo should already have the correct edits, but just in case it doesn't, you will want to update the file graphrag/graphrag/llm/openai/openai_completion_llm.py to await the async calls like so:

# Copyright (c) 2024 Microsoft Corporation.
# Licensed under the MIT License

"""A text-completion based LLM."""

import logging
from typing import Any, Dict
from typing_extensions import Unpack

from graphrag.llm.base import BaseLLM
from graphrag.llm.types import (
    CompletionInput,
    CompletionOutput,
    LLMInput,
)

from .openai_configuration import OpenAIConfiguration
from .types import OpenAIClientTypes
from .utils import get_completion_llm_args

log = logging.getLogger(__name__)

class OpenAICompletionLLM(BaseLLM[CompletionInput, CompletionOutput]):
    """A text-completion based LLM."""

    _client: OpenAIClientTypes
    _configuration: OpenAIConfiguration

    def __init__(self, client: OpenAIClientTypes, configuration: OpenAIConfiguration):
        super().__init__()
        self._client = client
        self._configuration = configuration

    async def _execute_llm(
        self,
        input: CompletionInput,
        **kwargs: Dict[str, Any]
    ) -> CompletionOutput | None:
        args = get_completion_llm_args(
            kwargs.get("model_parameters"), self._configuration
        )
        try:
            # "model" is supplied explicitly below, so drop any duplicate from args
            args.pop('model', None)
            # The key fix: await the async client call instead of leaving an
            # un-awaited coroutine behind
            completion = await self._client.completions.create(
                model=self._configuration.model,
                prompt=input,
                **args
            )
            return completion.choices[0].text
        except Exception as e:
            raise RuntimeError(f"Error calling OpenAI API: {str(e)}") from e

The main change is the completion = await self._client.completions.create(...) call. Also, just in case, make sure in your settings.yaml that the llm type and provider are named correctly, like so:

llm:
  api_base: http://localhost:11434/v1
  api_key: ${GRAPHRAG_API_KEY}
  concurrent_requests: 25
  max_retries: 3
  max_tokens: 1024
  model: mistral-nemo:12b-instruct-2407-fp16
  model_supports_json: true
  provider: openai_chat
  temperature: 0.5
  type: openai_chat
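
If the ReadTimeout keeps appearing even after the await fix, the local model may simply be taking longer than the default HTTP read timeout. Raising the request timeout in the same llm block can help; this is a hedged example, assuming your graphrag version supports the request_timeout setting (300 seconds is just a placeholder):

llm:
  # ...same settings as above...
  request_timeout: 300.0  # seconds; raise this if the local model responds slowly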