I am using Llama 3.1 and nomic-embed-text, and indexing takes around 45 minutes.
I have only one text file as input, with around 110 lines. But when I used the OpenAI models as described in the original Microsoft GraphRAG setup, building the pipeline took no more than 15 seconds. Why is there such a huge difference? Is there any way I can reduce the computation time?
This is a screenshot of what my text file looks like.
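To check whether the slowdown comes from local model latency rather than GraphRAG itself, here is a small timing sketch, assuming both models are served through Ollama's OpenAI-compatible endpoint on its default port (the base URL, placeholder API key, and test prompt are illustrative, not from my actual config):

```python
import time
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API; the api_key is a required
# placeholder that Ollama itself does not validate.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Time a single chat completion against the local Llama 3.1 model.
start = time.perf_counter()
client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Summarize this in one sentence: "
               "GraphRAG builds a knowledge graph from input text."}],
)
print(f"chat completion: {time.perf_counter() - start:.2f}s")

# Time a single embedding call against nomic-embed-text.
start = time.perf_counter()
client.embeddings.create(model="nomic-embed-text",
                         input="a single test sentence")
print(f"embedding: {time.perf_counter() - start:.2f}s")
```

If one completion already takes several seconds on this hardware, that would largely explain the 45 minutes: GraphRAG issues many LLM calls per chunk during entity and relationship extraction, and local inference is typically far slower per call than OpenAI's hosted models.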