karthik-codex / Autogen_GraphRAG_Ollama

Microsoft's GraphRAG + AutoGen + Ollama + Chainlit = Fully Local & Free Multi-Agent RAG Superbot

Error when running "# Create knowledge graph" #18

Open parimalbera7551 opened 3 months ago

parimalbera7551 commented 3 months ago

When I run python -m graphrag.index --root . I get:

🚀 create_base_extracted_entities entity_graph 0 <graphml xmlns="http://graphml.graphdrawing.or...
🚀 create_summarized_entities entity_graph 0 <graphml xmlns="http://graphml.graphdrawing.or...
❌ create_base_entity_graph None
⠋ GraphRAG Indexer
├── Loading Input (InputFileType.text) - 1 files loaded (14 filtered) ━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:00
├── create_base_text_units
├── create_base_extracted_entities
├── create_summarized_entities
└── create_base_entity_graph
❌ Errors occurred during the pipeline run, see logs for more details.

karthik-codex commented 3 months ago

Did you look into the log file? In some cases there might be a timeout when calling the LLM. Also, which LLM and embedding model are you using?

parimalbera7551 commented 3 months ago

The log file shows "Error Invoking LLM". I am using llama3.

Drenjy commented 3 months ago

Try changing llm.model in settings.yaml. I use mistral-nemo and it works. When I tried gemma 2, I got the same error.
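For reference, a minimal sketch of the llm block in settings.yaml with the model swapped out; the api_key, type, and api_base values below are assumptions based on a typical local Ollama setup and may differ in this repo's config:

llm:
  api_key: ${GRAPHRAG_API_KEY}          # placeholder; Ollama does not validate it
  type: openai_chat                     # OpenAI-compatible chat endpoint
  model: mistral-nemo                   # swap the model name here; gemma 2 gave the same error
  api_base: http://localhost:11434/v1   # assumed local Ollama endpoint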

parimalbera7551 commented 3 months ago

Can you share your settings.yaml file?

parimalbera7551 commented 2 months ago

Can you share the settings.yml file?

parimalbera7551 commented 2 months ago

Can you share it?

DmitryKey commented 1 month ago

To avoid creating a new thread - I have the same issue with lots of "Error Invoking LLM" messages in the log. Yet the indexer completes with all green check marks.

llm:
   model: mistral-nemo

embeddings:
   model: nomic_embed_text

In logs.json I see that all of these calls fail for one of two reasons:

  1. "Request timed out."
  2. "Error code: 500 - {'error': {'message': 'unexpected server status: llm server loading model', 'type': 'api_error', 'param': None, 'code': None}}"

My understanding was that the indexer calls local (Ollama) models - is that the case? What could cause the models to return these errors?
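For context: if api_base in settings.yaml points at the local Ollama server, the indexer is indeed calling local models, and the 500 "llm server loading model" typically means Ollama had not finished loading the model into memory when GraphRAG sent one of its concurrent requests. A hedged sketch of the keys worth checking - the values shown are assumptions, not this repo's exact defaults:

llm:
  api_base: http://localhost:11434/v1   # assumed OpenAI-compatible endpoint served by local Ollama
  concurrent_requests: 5                # assumption: fewer parallel calls while the model is still loading

The embeddings section has its own nested llm.api_base, which is worth verifying in the same way.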

parimalbera7551 commented 1 month ago

Tell me, what can I do?

sebastianfernandezgarcia commented 1 month ago

Changing LLM to Llama3.1/Llama3.2:1B or mistral-nemo, as well as increasing request_timeout, solves the errors for me.
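A sketch of those two changes in settings.yaml; request_timeout is a standard GraphRAG llm setting, but the value shown here is only an example:

llm:
  model: llama3.1          # or llama3.2:1b / mistral-nemo
  request_timeout: 600.0   # raised from the default to avoid "Request timed out."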

But when I run Chainlit and ask a question, I just get: """ Replying as User_Proxy. Provide feedback to chat_manager. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: """

parimalbera7551 commented 1 month ago

If you could share the file where the LLM type is set and the request timeout is increased, it would be helpful for me.

parimalbera7551 commented 1 week ago

Can you share this?
