-
## Description
Is it possible to use Conversational Search (RAG) with a local LLM? The documentation suggests it is only possible with OpenAI and Cloudflare. I was wondering if any of the HuggingF…
-
### Problem & Motivation
There is a huge wave of interest in high-accuracy Q&A, such as via Retrieval-Augmented Generation (RAG). RAG accuracy is largely driven by how well vector search is abl…
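The dependence on vector search can be illustrated with a minimal retrieval sketch. This is a toy example in plain Python with made-up 3-dimensional embeddings; a real system would use a trained embedding model and an ANN index, but the ranking step is the same idea:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, doc_vecs, k=2):
    # Rank documents by similarity to the query and keep the best k.
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Hypothetical embeddings: doc_a and doc_c point roughly the same way
# as the query, doc_b points elsewhere.
docs = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.0],
    "doc_c": [0.8, 0.2, 0.1],
}
print(top_k([1.0, 0.0, 0.0], docs, k=2))  # ['doc_a', 'doc_c']
```

If retrieval at this step surfaces the wrong passages, no amount of prompting downstream can recover the missing facts, which is why embedding and search quality dominate RAG accuracy.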
-
How should ChatGPT etc. be applied in a Blockly context? I'm trying to get my head around what it means.
-
The `window.ai.rag` API enables web applications to perform Retrieval-Augmented Generation (RAG) directly in the browser. RAG combines the power of large language models with the ability to retrieve a…
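Independent of the specific `window.ai.rag` surface, the retrieve-then-generate flow it describes can be sketched as follows (in Python for illustration; `retrieve` and `build_prompt` are stand-in stubs for this sketch, not part of any browser API):

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    # Toy keyword-overlap retriever standing in for real vector search.
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query: str, passages: list[str]) -> str:
    # The "augmentation" step: retrieved passages are prepended to the
    # question as grounding context before the model generates an answer.
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = {
    "p1": "RAG retrieves documents before generation",
    "p2": "Blockly is a visual programming editor",
}
query = "what does RAG retrieve"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

The produced prompt would then be passed to whatever language model the API exposes; the value of RAG is that the model answers from the retrieved passages rather than from its parametric memory alone.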
-
When setting the model repo path as indicated in the documentation, like so:
LLMWareConfig().set_home(".\\llm_models\\")
And confirming that it has worked by:
LLMWareConfig().get_model_repo_pa…
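One common pitfall with a relative path like the one in the snippet above is that it resolves against the current working directory, so the repo location silently depends on where the script is launched from. A small sketch (plain `pathlib`, independent of llmware; the directory name is just an example) of normalizing the path before handing it to the config call:

```python
from pathlib import Path

def resolve_model_home(raw: str) -> Path:
    # Expand ~ and make the path absolute so the model repo location
    # does not depend on the current working directory.
    home = Path(raw).expanduser().resolve()
    home.mkdir(parents=True, exist_ok=True)  # create it if missing
    return home

home = resolve_model_home("./llm_models")
print(home.is_absolute())  # True
# e.g. pass str(home) to the set_home() call shown above
```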
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
Hello! I'm currently trying to use an open-source LLM for my RAG application using Llama…
-
source: https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
-
First, I'd like to express my appreciation for this excellent cookbook repository. It's an invaluable resource for demonstrating the effective integration of Qdrant with Ragas and language models, and…
donbr updated 2 weeks ago
-
In the demo llm-rag-chat-bot, the listed step 01 (data prep) does not have all the imports, hence it breaks (for example, `concurrent`). Also, there is a Data prep full under resources that doesn't have all the imp…