TheAiSingularity / graphrag-local-ollama

Local model support for Microsoft's GraphRAG using Ollama (llama3, mistral, gemma2, phi3) - LLM & embedding extraction
MIT License

Taking a lot of time #63

Closed · Ayush-Sharma410 closed 2 months ago

Ayush-Sharma410 commented 2 months ago

I am using LLaMA 3.1 and nomic-embed-text, and indexing takes around 45 minutes. I have only one text file as input, with around 110 lines. But when I used an OpenAI model as described in the original Microsoft GraphRAG, building the pipeline took no more than 15 seconds. Why is there such a huge difference? Is there any way I can reduce the computation time?

[screenshot: a sample of the input text file]
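For anyone trying to reproduce or diagnose this, a minimal sketch for timing a single chat completion against Ollama's OpenAI-compatible endpoint is below. The base URL, model name, and prompt are assumptions based on a default local Ollama install (`ollama pull llama3.1`, server on port 11434), not taken from my actual config:

```python
import time

from openai import OpenAI  # pip install openai

# Ollama exposes an OpenAI-compatible API on localhost:11434 by default.
# The api_key value is ignored by Ollama but required by the client.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

start = time.perf_counter()
response = client.chat.completions.create(
    model="llama3.1",  # assumes this model has already been pulled
    messages=[
        {
            "role": "user",
            "content": "Extract the entities in: 'Alice met Bob in Paris.'",
        }
    ],
)
elapsed = time.perf_counter() - start

print(f"single completion took {elapsed:.1f}s")
print(response.choices[0].message.content)
```

GraphRAG's indexing pipeline issues many such calls per text chunk (entity extraction, summarization, embedding), so multiplying the single-call latency by the number of calls gives a rough lower bound on total indexing time; on CPU-only hardware a single local-model completion can take tens of seconds, which alone could account for the 45-minute run versus a hosted OpenAI model.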