karthik-codex / Autogen_GraphRAG_Ollama

Microsoft's GraphRAG + AutoGen + Ollama + Chainlit = Fully Local & Free Multi-Agent RAG Superbot

Why do you separately adopt mistral for GraphRAG inference and llama3 for AutoGen inference? #15

Open fishfree opened 1 month ago

fishfree commented 1 month ago

What are the benefits? My GPU/CPU resource is very low.

karthik-codex commented 4 weeks ago

The GraphRAG indexing needed at least a 32k context length, so I chose mistral. You can try the latest llama3.1, which I believe has a 128k context length. It should work for both GraphRAG and AutoGen and save you resources.
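
One practical note if you consolidate on a single llama3.1 model: Ollama serves models with a small default context window (2048 tokens), so the model's advertised 128k context is not used unless you raise `num_ctx` yourself. A minimal sketch, assuming the stock `llama3.1` tag from the Ollama library; the custom tag name `llama3.1-32k` is just an illustrative choice:

```
# Modelfile — raise the context window so GraphRAG indexing
# gets the ~32k tokens it needs (Ollama defaults to 2048)
FROM llama3.1
PARAMETER num_ctx 32768
```

Build the tag with `ollama create llama3.1-32k -f Modelfile`, then reference `llama3.1-32k` as the model name in both the GraphRAG settings and the AutoGen LLM config so a single local model serves both roles.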