TheAiSingularity / graphrag-local-ollama

Local model support for Microsoft's graphrag using ollama (llama3, mistral, gemma2, phi3) - LLM & embedding extraction
MIT License

no ollama found in openai_chat_llm.py #16

Open 652994331 opened 1 month ago

652994331 commented 1 month ago

Hi, thanks for sharing such great work. To my understanding, the LLMs are used to extract entities. I checked this code: https://github.com/TheAiSingularity/graphrag-local-ollama/blob/main/graphrag/llm/openai/openai_chat_llm.py. Although settings.yaml sets openai_chat as the type and a mistral model from ollama as the model, I didn't find any import ollama in openai_chat_llm.py. I am curious how you use an LLM from ollama for the question-answering and entity-extraction parts. Thanks for your help.
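(For context on the question's premise: Ollama exposes an OpenAI-compatible API, which is how an openai_chat-style wrapper can reach a local model without ever importing the ollama package. A minimal sketch, not the repo's actual code, assuming a default Ollama install on port 11434 and a locally pulled mistral model:

```python
# Sketch only: Ollama serves an OpenAI-compatible API under /v1, so the
# standard openai client can drive a local model. Port, model name, and
# prompt here are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # the client requires a key, but Ollama ignores it
)

response = client.chat.completions.create(
    model="mistral",  # any model pulled locally, e.g. via `ollama pull mistral`
    messages=[{"role": "user", "content": "Extract the entities in: Paris is the capital of France."}],
)
print(response.choices[0].message.content)
```
)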

yeahdongcn commented 1 month ago

Run pip install ollama in your conda env.
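(A minimal sketch of what the ollama package provides once installed; the model names below are assumptions, so substitute whatever you have pulled locally:

```python
import ollama

# Chat completion against a locally served model
reply = ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "List the named entities in: Paris is in France."}],
)
print(reply["message"]["content"])

# Embedding call of the kind used on the embedding-extraction side
emb = ollama.embeddings(model="nomic-embed-text", prompt="graph rag")
print(len(emb["embedding"]))
```

If the import still fails after installing, check that the conda env running graphrag is the same one you installed the package into.)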