parimalbera7551 opened 2 months ago
Did you look into the log file? In some cases there might be a timeout when calling the LLM. Also, what LLM and embedding model are you using?
The log file shows "Error Invoking LLM". I am using llama3.
Try changing llm.model in settings.yaml. I use mistral-nemo and it works. When I tried gemma 2, I got the same error.
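For reference, a minimal sketch of the relevant section of settings.yaml (surrounding keys and defaults vary by GraphRAG version, so treat this as illustrative rather than exact):

```yaml
llm:
  model: mistral-nemo  # swap in whichever Ollama model works for you
  api_base: http://localhost:11434/v1  # Ollama's OpenAI-compatible endpoint (assumed local setup)
```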
Can you share your settings.yaml file?
Can you share the settings.yaml file?
Can you share it?
To avoid creating a new thread - I have the same issue with lots of "Error Invoking LLM" messages in the log. Yet the indexer completes with all green check marks.
```yaml
llm:
  model: mistral-nemo
embeddings:
  model: nomic_embed_text
```
In the logs.json I see that all of these calls fail for one of two reasons:

- "Request timed out."
- "Error code: 500 - {'error': {'message': 'unexpected server status: llm server loading model', 'type': 'api_error', 'param': None, 'code': None}}"
My understanding was that the indexer calls local (Ollama) models - is this the case? What could cause the models to return such messages?
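A quick way to check whether the local Ollama server itself is healthy is to hit the same endpoints the indexer would. A minimal sketch, assuming a default Ollama install on port 11434 and the model from the config above:

```python
import json
import urllib.request

# List the models Ollama is currently serving (/api/tags is Ollama's model-listing endpoint).
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    print([m["name"] for m in json.load(resp)["models"]])

# Send one chat request through the OpenAI-compatible endpoint GraphRAG points at.
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps({
        "model": "mistral-nemo",  # whichever model settings.yaml names
        "messages": [{"role": "user", "content": "ping"}],
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=300) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```

If the first call after a cold start returns a 500 while the model is still loading into memory, that matches the "llm server loading model" error above; retrying once the model is resident usually succeeds.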
Tell me, what can I do?
Changing the LLM to Llama3.1/Llama3.2:1B or mistral-nemo, as well as increasing request_timeout, solves the errors for me.
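Concretely, the change is along these lines in settings.yaml (the timeout value is just an example; pick whatever gives your hardware enough headroom):

```yaml
llm:
  model: llama3.1  # or llama3.2:1b / mistral-nemo
  request_timeout: 300.0  # seconds; raise this so slow local generations don't hit "Request timed out."
```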
But when I run Chainlit and ask a question, the run stalls at:

"Replying as User_Proxy. Provide feedback to chat_manager. Press enter to skip and use auto-reply, or type 'exit' to end the conversation:"
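That prompt is AutoGen's UserProxyAgent pausing for human feedback, not an error. If you want the pipeline to run unattended, a minimal sketch of the usual workaround (assuming the agent is constructed somewhere in the app code) is:

```python
from autogen import UserProxyAgent

# "NEVER" makes the proxy auto-reply instead of blocking on console input;
# max_consecutive_auto_reply caps how many times that loop can repeat.
user_proxy = UserProxyAgent(
    name="User_Proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
    code_execution_config=False,  # pure chat flow, no local code execution
)
```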
Could you share the file where the llm model and the increased request_timeout are set? It would be very helpful for me.
When I run python -m graphrag.index --root . I get:

```
🚀 create_base_extracted_entities
entity_graph
0  <graphml xmlns="http://graphml.graphdrawing.or...
🚀 create_summarized_entities
entity_graph
0  <graphml xmlns="http://graphml.graphdrawing.or...
❌ create_base_entity_graph
None
⠋ GraphRAG Indexer
├── Loading Input (InputFileType.text) - 1 files loaded (14 filtered) 100% 0:00:00 0:00:00
├── create_base_text_units
├── create_base_extracted_entities
├── create_summarized_entities
└── create_base_entity_graph
❌ Errors occurred during the pipeline run, see logs for more details.
```
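The console only says that something failed; the actual reasons land in the run's log files. On GraphRAG versions from around this time they live under the output folder, along these lines (paths are version-dependent, so treat them as approximate):

```sh
# hypothetical paths - adjust to wherever your run writes its artifacts
grep -i "error invoking llm" output/*/reports/indexing-engine.log
head output/*/reports/logs.json
```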