-
### Feature Description
The RAG Evaluator Pack is extremely helpful, but it does not provide an option to use an embedding model of our choice; instead, it is hard-coded to use OpenAIEmbeddings. A wide…
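A hedged workaround sketch: in recent llama-index versions the global embedding model can be overridden via `Settings` before the pack is used. The imports and model name below are assumptions (llama-index >= 0.10 with the HuggingFace embeddings package), and whether the pack honors the override depends on its internals, which is exactly the hard-coding this request is about:

```python
# Sketch of the requested flexibility: a non-OpenAI embedding model set
# globally in llama-index. The model name is an arbitrary example; the pack
# may still construct OpenAIEmbeddings internally and ignore this setting.
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
```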
-
**Describe the bug**
I created a flow to do RAG using Azure AI Search, but I got the following error during an evaluation:
```
SystemError: Unexpected error occurred while executing the batch run. …
```
-
I have fine-tuned the model with a LoRA implementation, and after saving the model I only have the adapter_config.json file.
Is this the correct way to evaluate the fine-tuned model?
I got some results but…
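For reference, the usual way to evaluate a LoRA fine-tune is to load the adapter on top of its base model rather than loading the adapter directory alone. A minimal sketch assuming the Hugging Face transformers and peft libraries; the model name and adapter path are placeholders, not values from this issue:

```python
# Load a LoRA adapter onto its base model for evaluation.
# "base-model-name" and "path/to/adapter" are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base-model-name")
tokenizer = AutoTokenizer.from_pretrained("base-model-name")

# PeftModel reads adapter_config.json (and the adapter weights, which the
# save step should also have produced) from the adapter directory.
model = PeftModel.from_pretrained(base, "path/to/adapter")
model.eval()
```

If saving produced only adapter_config.json and no adapter weight file, the save step itself may have gone wrong, which would also explain odd evaluation results.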
-
Hi, curious if there are any plans to support evaluating context along with the question and reference answer?
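To make the request concrete, here is a purely hypothetical sketch of the input shape such an evaluation would need, a retrieved-context field alongside the existing question and reference answer; none of these names come from any existing API:

```python
# Hypothetical evaluation record for the requested feature; field names
# are illustrative only.
from dataclasses import dataclass

@dataclass
class EvalExample:
    question: str
    reference_answer: str
    contexts: list[str]  # retrieved passages to judge alongside the answer
```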
-
**Problem Description:**
There are cases where the AI critique evaluator provides a response that includes both a numerical rating and a textual explanation, such as "8 - The output is accurate, well-written, and co…
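One way to handle such mixed responses is to split the leading number from the trailing explanation before scoring. A minimal sketch; `parse_rating` is a hypothetical helper, not part of the evaluator:

```python
import re

def parse_rating(response: str) -> tuple[float | None, str]:
    """Split a critique like '8 - The output is accurate...' into a
    numeric rating and the trailing explanation."""
    match = re.match(r"\s*(\d+(?:\.\d+)?)\s*[-:.]*\s*(.*)", response, re.DOTALL)
    if match:
        return float(match.group(1)), match.group(2).strip()
    return None, response.strip()  # no leading number found

# parse_rating("8 - The output is accurate")  ->  (8.0, "The output is accurate")
```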
-
You will see the problem in the text below. This is with gpt-4o and version 0.5 of Agent Zero, but I have similar issues with other models.
User message ('e' to leave):
> Write a college level …
-
- [x] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question.
**Your Question**
There is a contradiction in the `evaluate` function's `is_async` parameter. By defa…
-
- [ ] [cohereai_classify table | CohereAI plugin | Steampipe Hub](https://hub.steampipe.io/plugins/mr-destructive/cohereai/tables/cohereai_classify)
-
It would be beneficial to have an evaluation metric that measures the improvement brought by RAG. This metric should do the following (a sketch of step 1 appears after the list):
1. Calculate the distance between the RAG-generated…
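The list above is cut off mid-step, but the core operation named in step 1, a distance between the RAG-generated answer and some comparison text, can be sketched as follows. The embedding backend (sentence-transformers) and the comparison target are assumptions, since the original step is truncated:

```python
# Embedding-space cosine distance between two texts; the model choice is
# an arbitrary example and the comparison target is assumed, not specified.
from numpy import dot
from numpy.linalg import norm
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def embedding_distance(text_a: str, text_b: str) -> float:
    a, b = model.encode([text_a, text_b])
    return 1.0 - float(dot(a, b) / (norm(a) * norm(b)))
```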
-
Currently, we do not have many ways to evaluate the accuracy of RAG responses, so we need to implement an evaluation framework to help us do this; a bare evaluation-loop skeleton is sketched after the references.
Reference:
* https://www.kaggle.com/code/a…
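As a starting point, such a framework typically reduces to a loop that runs the pipeline over labeled questions and aggregates a per-item score. A bare skeleton under assumed names; `run_rag` and `score_answer` are hypothetical stand-ins, not an existing API:

```python
from typing import Callable

# run_rag: the RAG pipeline under test, question -> generated answer.
# score_answer: the chosen metric, (prediction, reference) -> score.
def evaluate_rag(
    examples: list[dict],
    run_rag: Callable[[str], str],
    score_answer: Callable[[str, str], float],
) -> float:
    scores = [
        score_answer(run_rag(ex["question"]), ex["answer"])
        for ex in examples
    ]
    return sum(scores) / len(scores)  # mean score over the dataset
```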