-
### Feature request
I am developing an open-source RAG pipeline, using Olmo 7b for the task at hand. However, I've encountered some GPU limitations, prompting me to implement Qu…
-
**Describe the solution you'd like**
Collecting data from a wide range of docs and giving relevant info to an LLM is one of the most common ways to use RAG.
There is one point which would improve the ex…
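To make the retrieve-then-answer flow above concrete, here is a minimal sketch of the retrieval step, assuming a toy in-memory corpus and simple token-overlap scoring (the function and document names are illustrative, not part of any library):

```python
def top_k(query, docs, k=2):
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

docs = [
    "RAG retrieves relevant passages before generation.",
    "The weather is sunny today.",
    "Retrieved context is passed to the LLM as a prompt.",
]

# Retrieve the most relevant docs and build a grounded prompt from them.
context = top_k("how does RAG pass context to the LLM", docs)
prompt = "Answer using:\n" + "\n".join(context)
```

A real pipeline would replace the overlap score with embedding similarity, but the shape — retrieve relevant text, then inject it into the prompt — is the same.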
-
### Summary
This issue is a list of enhancements aimed at improving tracing in the APM UI, particularly for RAG applications. The need for these improvements has emerged as the Security team has been…
-
If llama3 from ollama is running on http://8.140.18.**:28275, the following code from the 60th example runs fine.
```
from txtai.pipeline import LLM
llm = LLM("ollama/llama3", method="litellm", a…
```
-
## Summary
This template captures a few base requirements that must be met before filing a PR that contains a new blog post submission.
Please fill out this form in its…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
Hello everyone,
I've been implementing a RAG system using Llama-Index and open-source…
-
Hi,
Does anyone have an example of how to get the models + adapters running in a RAG pipeline using the LlamaIndex or Langchain framework? I want to try to use the [RAG Fusion retriever](https://do…
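RAG Fusion works by retrieving with several rewrites of the query and merging the ranked lists, typically via reciprocal rank fusion (RRF). A framework-free sketch of that fusion step, assuming each retriever returns an ordered list of document IDs (the IDs and `k=60` constant are illustrative):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists RAG-Fusion style: score(d) = sum over lists of 1/(k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Each inner list is a retriever's ranking for one query rewrite (toy IDs).
fused = reciprocal_rank_fusion([["d1", "d2", "d3"], ["d2", "d3", "d1"], ["d2", "d1"]])
```

In LlamaIndex or LangChain you would plug the per-rewrite retrieval results into this same fusion function; only the retrieval calls differ.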
-
Would love to see some kind of RAG and UI implemented. Thank you
-
## RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture
Link: https://arxiv.org/abs/2401.08406
Started reading it. In my understanding, PEFT is not for knowledge, but for format. I …
-
When running two pipelines and comparing their results, I would like to see the predicted answers of each pipeline run in the resulting pandas DataFrame.
Here is an example of how this …
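Assuming both runs are keyed by the same queries, the comparison rows could be assembled like this (column names such as `answer_pipeline_a` are illustrative; the resulting list of dicts can be passed directly to `pandas.DataFrame`):

```python
def compare_runs(queries, answers_a, answers_b, gold):
    """Pair each query with the predicted answers from both pipeline runs."""
    return [
        {
            "query": q,
            "answer_pipeline_a": answers_a[q],
            "answer_pipeline_b": answers_b[q],
            "gold_answer": gold[q],
        }
        for q in queries
    ]

# Toy predictions from two hypothetical pipeline runs.
queries = ["q1", "q2"]
answers_a = {"q1": "Paris", "q2": "42"}
answers_b = {"q1": "paris", "q2": "41"}
gold = {"q1": "Paris", "q2": "42"}
rows = compare_runs(queries, answers_a, answers_b, gold)
```

With one row per query and one column per pipeline, the side-by-side comparison asked for above falls out directly.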