run-llama / rags

Build ChatGPT over your data, all with natural language
MIT License

Implement using llama.cpp as the LLM model #22

Open adeelhasan19 opened 9 months ago

adeelhasan19 commented 9 months ago

I am trying to implement this using an open-source LLM with llama.cpp, but I am getting this error:

"ValueError: Must pass in vector index for CondensePlusContextChatEngine." I am also new to LlamaIndex. Can anyone help me with what exactly I need to configure in order to run RAGs?

jerryjliu commented 9 months ago

See our customization tutorial here (specifically the part about customizing LLMs): https://docs.llamaindex.ai/en/latest/getting_started/customization.html

Also see the LLMs module guide: https://docs.llamaindex.ai/en/latest/module_guides/models/llms.html
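As a concrete but hedged illustration of the customization those docs describe, here is a sketch of wiring llama.cpp in as the LLM using llama-index 0.9.x-style imports; the model path, data folder, and parameter values are placeholders, not rags defaults:

```python
# A sketch of pointing llama-index at a llama.cpp model instead of OpenAI.
# Assumes llama-index 0.9.x imports and a local GGUF model file.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import LlamaCPP
from llama_index.llms.llama_utils import completion_to_prompt, messages_to_prompt

llm = LlamaCPP(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local GGUF file
    temperature=0.1,
    max_new_tokens=512,
    context_window=3900,
    model_kwargs={"n_gpu_layers": 1},                    # set to 0 for CPU-only
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    verbose=True,
)

# Use a local embedding model as well, so nothing falls back to OpenAI.
service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

chat_engine = index.as_chat_engine(chat_mode="condense_plus_context")
print(chat_engine.chat("Summarize my documents."))
```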

cocoa126 commented 9 months ago

Try asking ChatGPT-4.

cocoa126 commented 9 months ago

> I am trying to implement this using an open-source LLM with llama.cpp, but I am getting this error:
>
> "ValueError: Must pass in vector index for CondensePlusContextChatEngine." I am also new to LlamaIndex. Can anyone help me with what exactly I need to configure in order to run RAGs?

Try asking GPT-4.

khalilxg commented 8 months ago

@adeelhasan19 did you manage to load a local LLM with llama.cpp successfully?