TheAiSingularity / graphrag-local-ollama

Local model support for Microsoft's GraphRAG using Ollama (llama3, mistral, gemma2, phi3) - LLM & embedding extraction
MIT License

Taking a lot of time #63

Closed · Ayush-Sharma410 closed this 3 weeks ago

Ayush-Sharma410 commented 3 weeks ago

I am using llama3.1 and nomic-embed-text, and indexing takes around 45 minutes. My input is a single text file of about 110 lines. But when I used an OpenAI model as described in the original Microsoft GraphRAG, building the pipeline took no more than 15 seconds. Why is there such a huge difference? Is there any way I can reduce the computation time?

[image: screenshot of the input text file]
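For reference, my settings.yaml follows this repo's README and points GraphRAG at a local Ollama server. Roughly like this, abridged; the exact endpoints and model tags are from my setup, so treat the values as assumptions rather than the canonical config:

```yaml
# settings.yaml (abridged) - routing graphrag's LLM and embedding
# calls to a local Ollama instance instead of the OpenAI API
llm:
  api_key: ${GRAPHRAG_API_KEY}         # not checked by Ollama, but graphrag expects it
  type: openai_chat
  model: llama3.1                      # local chat model used for entity extraction
  model_supports_json: true
  api_base: http://localhost:11434/v1  # Ollama's OpenAI-compatible endpoint

embeddings:
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding
    model: nomic-embed-text            # local embedding model
    api_base: http://localhost:11434/api
```

My guess is that the gap comes from per-request latency: GraphRAG issues many separate LLM calls per chunk during entity and relationship extraction, so local inference time multiplies quickly. Could that alone explain the 45 minutes, or is something misconfigured on my end?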