-
### Feature Description
Most popular LLM APIs, such as OpenAI's, support candidate generation, i.e. generating n responses for the same prompt. This feature can be used in RAG, evaluations and mo…
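For illustration, a minimal sketch of how the `n` parameter requests multiple candidates from the OpenAI Chat Completions API. The model name and prompt are placeholders, and the actual network call is left commented out so the sketch stays self-contained:

```python
# Sketch: requesting n candidate generations for one prompt.
# The OpenAI Chat Completions API accepts an `n` parameter, so a
# single request returns several choices for the same prompt.
# Model name and prompt below are placeholders.

payload = {
    "model": "gpt-4o-mini",  # placeholder model
    "messages": [{"role": "user", "content": "Summarize RAG in one line."}],
    "n": 3,                  # ask for 3 candidate responses
    "temperature": 0.9,      # > 0 so the candidates actually differ
}

# With the official client this would be:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(**payload)
#   candidates = [c.message.content for c in resp.choices]

print(payload["n"])  # number of candidates requested -> 3
```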
-
- This issue focuses on the technical courses we take about LLMs; we'll put the paper part in
https://github.com/xp1632/DFKI_working_log/issues/70
---
1. **ChainForge** https://chainforge.ai/ …
-
**Describe the bug**
I want to use local LLMs to evaluate my RAG app. I have tried Ollama and HuggingFace models, but neither of them works.
Ragas version: 0.1.11
Python version: 3.11.3
**…
-
[ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question.
**Your Question**
What is the use of the docstore in TestsetGenerator? How i…
-
JudgeBench: A Benchmark for Evaluating LLM-based Judges
https://arxiv.org/abs/2410.12784
-
> Please provide us with the following information:
### This issue is for a: (mark with an `x`)
```
- [x] bug report -> pleas…
-
Hello authors, thanks again for the excellent work. Say I have a trained model checkpoint and want to load it as:
model = LlavaLlamaForCausalLM.from_pretrained("./checkpoints/llava-v1.5-vicuna-13b-v…
-
[ ] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug.
**Describe the bug**
Error message is "Invalid n value (currently only n = 1 is su…
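A common workaround when a backend only supports n = 1 is to issue the request n times and collect the results. A library-free sketch of that pattern, where `generate_once` is a hypothetical stand-in for the real single-response LLM call:

```python
# Workaround sketch: emulate n candidate generations against a
# backend that only supports n = 1 by calling it once per candidate.
# `generate_once` is a hypothetical stand-in for the real LLM call.

def generate_once(prompt: str, seed: int) -> str:
    # A real implementation would call the LLM here, varying the
    # sampling seed or temperature per call.
    return f"candidate-{seed} for: {prompt}"

def generate_n(prompt: str, n: int) -> list[str]:
    return [generate_once(prompt, seed=i) for i in range(n)]

candidates = generate_n("Why is the sky blue?", n=3)
print(len(candidates))  # 3
```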
-
Evaluation failed: 'CustomOllama' object has no attribute 'set_run_config'. What is the solution?
Ragas version: 0.1.7
**Code Examples**
```python
# Define a simple dataset using Pandas DataFrame
data…
```
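The error above typically means the custom LLM wrapper was not built on Ragas's LLM base class, which is what provides `set_run_config` (in Ragas this comes from inheriting `ragas.llms.BaseRagasLLM`). A minimal, library-free sketch of the duck-typing involved; all names other than `set_run_config` are hypothetical:

```python
# Sketch of why the AttributeError occurs and the shape of the fix.
# Ragas calls `llm.set_run_config(run_config)` on the evaluator LLM,
# so a custom wrapper must expose that method. Class and method
# names here, other than `set_run_config`, are hypothetical.

class BrokenOllama:
    """Custom wrapper missing the method -> AttributeError at eval time."""
    def generate(self, prompt: str) -> str:
        return "..."

class CustomOllama:
    """Wrapper that provides set_run_config, as Ragas expects."""
    def __init__(self):
        self.run_config = None

    def set_run_config(self, run_config) -> None:
        # Ragas passes timeout/retry settings through this hook.
        self.run_config = run_config

    def generate(self, prompt: str) -> str:
        return "..."

print(hasattr(BrokenOllama(), "set_run_config"))  # False
print(hasattr(CustomOllama(), "set_run_config"))  # True
```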
-
**Refs**
* [Handle large context windows using Ollama's LLMs for evaluation purpose · Issue #1120 · explodinggradients/ragas](https://github.com/explodinggradients/ragas/issues/1120)
**Feats**
* check how g…