Closed · 451222664 closed this issue 1 week ago
Hi! Can you please check in your cache files or output files whether the entity extraction was successful? Most errors at the clustering step trace back to faulty entity extractions, either zero extracted entities or malformed responses from the LLM.
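One quick way to do that check is to scan the raw LLM responses for tuple records. A minimal sketch, assuming the default GraphRAG-style extraction format where each entity is emitted as `("entity"<|>NAME<|>TYPE<|>DESCRIPTION)` and the reply ends with `<|COMPLETE|>`; the exact delimiters depend on your prompt template, so adjust the pattern if you have customized it:

```python
import re

# Assumption: entity records look like ("entity"<|>...); adjust the
# delimiter if your prompt template uses a different tuple format.
ENTITY_RECORD = re.compile(r'\("entity"<\|>')

def count_entity_records(response_text: str) -> int:
    """Count entity tuples in a raw extraction response."""
    return len(ENTITY_RECORD.findall(response_text))

def looks_successful(response_text: str) -> bool:
    """An extraction is suspect if it contains zero entity records."""
    return count_entity_records(response_text) > 0

# Example: a well-formed reply vs. a "chatty" one that will break clustering.
good = ('("entity"<|>ACME<|>ORGANIZATION<|>A company)##'
        '("entity"<|>BOB<|>PERSON<|>An employee)<|COMPLETE|>')
bad = "<|COMPLETE|>\nLet me know if you'd like to try another example!"
```

Running `looks_successful` over each cached response file will quickly surface the documents whose extraction came back empty.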
That means something is wrong with the result of the LLM processing, right?
"<|COMPLETE|>
Let me know if you'd like to try another example! I'm ready when you are."
This error is likely caused by your embedding model or LLM not loading correctly. You can refer to my configuration changes.
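For reference, a commonly shared shape for pointing GraphRAG at a local ollama server looks roughly like the fragment below. This is a hypothetical sketch only; key names vary across GraphRAG versions, so verify them against the settings.yaml your version generated:

```yaml
# Hypothetical settings.yaml fragment -- verify key names for your version.
llm:
  type: openai_chat
  api_base: http://localhost:11434/v1   # ollama's OpenAI-compatible endpoint
  model: llama3                         # example model name
  api_key: ollama                       # any non-empty string; ollama ignores it
embeddings:
  llm:
    type: openai_embedding
    api_base: http://localhost:11434/v1
    model: nomic-embed-text             # example embedding model
```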
Hi @451222664
I am also getting the same error!
I've pasted the logs below; it feels like an issue with ollama. Please confirm you are also getting the same logs.
Hi @451222664 From the response provided, yes, the LLM you're using is ignoring the format we are looking for in the output and is being more "chatty". I would suggest doing some prompt tuning to try to force the LLM into the format we need for parsing.
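Besides tuning the prompt text itself, one pragmatic workaround is to validate each reply and re-ask with a stricter instruction when the model gets chatty. A minimal retry sketch, not GraphRAG's own API: `call_llm` is a placeholder for whatever client you use, and the record delimiter is an assumption based on the default tuple format:

```python
import re

# Assumption: parseable replies contain tuple records like
# ("entity"<|>...) or ("relationship"<|>...).
RECORD = re.compile(r'\("(?:entity|relationship)"<\|>')

def extract_with_retries(call_llm, prompt: str, max_tries: int = 3) -> str:
    """Call `call_llm` until the reply contains at least one tuple record.

    `call_llm` is a hypothetical callable: (prompt: str) -> reply: str.
    """
    for _ in range(max_tries):
        reply = call_llm(prompt)
        if RECORD.search(reply):
            return reply
        # Tighten the instruction and try again.
        prompt += "\nReturn ONLY the delimited tuples, with no commentary."
    raise ValueError("LLM never produced parseable records")
```

This doesn't replace proper prompt tuning, but it turns silent clustering failures into an explicit error (or a recovered extraction) at the step that actually went wrong.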
It turns out ollama was not started properly; restarting the service fixed the issue.
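For anyone hitting the same thing: before re-running the pipeline, it's worth confirming that ollama is actually up and serving the model you configured. ollama's `GET /api/tags` endpoint returns the locally available models as JSON; a small sketch that parses that response body (the model names below are examples):

```python
import json

def model_available(tags_json: str, model: str) -> bool:
    """Check an ollama /api/tags response body for a model by name.

    `tags_json` is the raw body of GET http://localhost:11434/api/tags,
    e.g. fetched with urllib or curl. Matches on the name prefix so that
    "llama3" also matches "llama3:latest".
    """
    models = json.loads(tags_json).get("models", [])
    return any(m.get("name", "").startswith(model) for m in models)

# Example response body in the /api/tags shape.
sample = ('{"models": [{"name": "llama3:latest"},'
          ' {"name": "nomic-embed-text:latest"}]}')
```

If the request itself fails (connection refused), the service isn't running at all, which matches the symptom in this thread.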
Hi! We are consolidating alternate model issues here: https://github.com/microsoft/graphrag/issues/657
This is my configuration:
This is my error log:
This is my console log: