-
When running two pipelines and comparing their results, I would like to see each pipeline run's predicted answers in the resulting pandas DataFrame.
Here is an example of how this …
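A minimal sketch of the desired outcome, assuming hypothetical column names ("question", "answer") rather than the library's actual result schema: merge the two runs on the question so each row shows both predictions side by side.

```python
import pandas as pd

# Hypothetical result frames from two pipeline runs; the column names are
# assumptions for illustration, not the evaluation library's real schema.
run_a = pd.DataFrame({"question": ["Q1", "Q2"], "answer": ["A1", "A2"]})
run_b = pd.DataFrame({"question": ["Q1", "Q2"], "answer": ["B1", "B2"]})

# Merge on the question; suffixes disambiguate the overlapping "answer" column.
combined = run_a.merge(run_b, on="question", suffixes=("_pipeline_a", "_pipeline_b"))
print(combined.columns.tolist())
# ['question', 'answer_pipeline_a', 'answer_pipeline_b']
```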
-
- **Package Name**: azure-ai-generative
- **Package Version**: 1.0.0b2
- **Operating System**: Mac M1
- **Python Version**: 3.11
**Describe the bug**
When calling evaluate(), we see this in t…
-
[X] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug.
**Describe the bug**
ValueError: Unknown format code 'f' for object of type 'str'
…
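This error comes from Python's format-spec machinery, independent of ragas: the `f` format code is only valid for numbers, so it typically means a metric value arrived as a `str` instead of a `float`. A minimal reproduction and workaround:

```python
# Applying a float format code to a str raises exactly this ValueError.
try:
    "{:.2f}".format("0.85")        # value is a str, not a float
except ValueError as e:
    print(e)                       # Unknown format code 'f' for object of type 'str'

# Casting to float first avoids it.
print("{:.2f}".format(float("0.85")))  # 0.85
```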
-
### What is the issue?
I am doing some benchmarks on RAG using the llama3:7b model on Ollama.
I first ask a question directly to the model, then ask the same question while providing context from relevant docu…
-
Hi, this is Deepak Dev. We have a RAG model, and I wanted to know how I can use an RL prompt with it. One more thing I wanted to know: is it only for evaluation purposes, or can we train our own model f…
-
**Describe the Feature**
Per the deprecation message I receive in v0.1.7:
```
passing column names as 'ground_truths' is deprecated and will be removed in the next version,
please use 'ground_…
```
-
May I ask when this part of the code is planned to be updated? I would really like to try it on biology problems.
-
- Grok-1: https://github.com/xai-org/grok-1
- Mistral 7B base v0.2: https://twitter.com/MistralAILabs/status/1771670765521281370
- NVIDIA reveals Blackwell: https://nvidianews.nvidia.com/news/nvidia-b…
-
Running the app takes some time to load the model into memory, and since we're using a quantized version, llm.to('cuda') is not used.
The answers from the RAG are pretty decent given that t…
-
### 🚀 The feature, motivation and pitch
Anthropic directly states that their models prefer long context (as in typical RAG applications) to be inserted inside XML tags. Some claim OpenAI's…
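A minimal sketch of that tagging idea: wrap each retrieved passage in XML tags before inserting it into the prompt. The tag names (`<context>`, `<document>`) are illustrative choices, not a format mandated by any provider.

```python
def wrap_context(passages: list[str]) -> str:
    """Wrap retrieved passages in XML tags for insertion into a long prompt."""
    docs = "\n".join(f"<document>{p}</document>" for p in passages)
    return f"<context>\n{docs}\n</context>"

# Usage: the tagged context block goes before the question in the final prompt.
prompt = (
    wrap_context(["Paris is the capital of France."])
    + "\n\nQuestion: What is the capital of France?"
)
```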