-
### What happened?
It seems llama-cli version 3952 doesn't generate any text response when the `--log-disable` parameter is set.
llama-cli version 3541 returns a text response regardless of the t…
-
While running this example:
```
$ cd TensorRT-Model-Optimizer/llm_ptq
$ scripts/huggingface_example.sh --type llama --model $model --quant fp8 --tp 2
```
there was a non-fatal failure:
```
[8ad0971d…
-
[ ] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug.
**Describe the bug**
I am using langchain for my agent. I have been able to implem…
-
**Is your feature request related to a problem? Please describe.**
Sometimes prompts and inputs result in unpredictable LLM behaviour, especially at higher temperatures. This means that both the LLM …
-
**Describe the issue**
When we try to launch LLM generations on APIBench and the generation is interrupted midway, the partial file is currently deleted and generation restarts from scratch. What we sh…
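A minimal sketch of the resume behaviour described above, assuming generations are appended to a JSONL file keyed by a question id (the file name, field names, and `generate_fn` are hypothetical, not APIBench's actual layout):
```
import json
import os

def load_completed_ids(partial_path: str) -> set:
    """Collect ids already present in a partial JSONL output; a half-written
    trailing line is skipped and simply regenerated later."""
    done = set()
    if not os.path.exists(partial_path):
        return done
    with open(partial_path) as f:
        for line in f:
            try:
                done.add(json.loads(line)["question_id"])
            except (json.JSONDecodeError, KeyError):
                continue
    return done

def generate_all(examples, generate_fn, partial_path="apibench_partial.jsonl"):
    """Append new generations, skipping examples that already succeeded."""
    done = load_completed_ids(partial_path)
    with open(partial_path, "a") as f:
        for ex in examples:
            if ex["question_id"] in done:
                continue
            record = {"question_id": ex["question_id"], "output": generate_fn(ex["prompt"])}
            f.write(json.dumps(record) + "\n")
            f.flush()  # keep the partial file usable if interrupted again
```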
-
Title.
Idea taken from:
Escalating Search Methods depending on likelihood of successful retrieval + Eval of returned information for 'reliability'
https://pub.towardsai.net/reliable-agentic-rag-…
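A rough sketch of the escalation loop the linked article suggests, assuming an ordered list of retrievers (cheapest first) and an LLM-based reliability grader; all names here are illustrative, not existing ragas APIs:
```
from typing import Callable, List

def escalating_retrieve(
    query: str,
    retrievers: List[Callable[[str], List[str]]],           # ordered cheapest -> most expensive
    grade_reliability: Callable[[str, List[str]], float],   # e.g. an LLM judge returning 0..1
    threshold: float = 0.7,
) -> List[str]:
    """Try each retrieval tier in turn and stop as soon as the results look reliable enough."""
    best_docs: List[str] = []
    best_score = 0.0
    for retrieve in retrievers:
        docs = retrieve(query)
        score = grade_reliability(query, docs)
        if score >= threshold:
            return docs
        if score > best_score:
            best_docs, best_score = docs, score
    # No tier cleared the bar; fall back to the best-scoring results seen.
    return best_docs
```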
-
We currently have *no specialized solution whatsoever* for running LLM evals in the background.
After receiving the request, we simply start to iterate over the examples to annotate on the backend. …
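For contrast, a minimal background-execution sketch using a thread pool and an in-memory job registry; the names (`annotate_example`, `start_eval_job`) and structure are illustrative, not the project's actual backend:
```
import uuid
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=4)
_jobs = {}  # job_id -> {"status": ..., "results": [...]}

def annotate_example(example):
    """Placeholder for the per-example LLM eval call."""
    raise NotImplementedError

def start_eval_job(examples) -> str:
    """Kick off an eval run in the background and return a job id the client can poll."""
    job_id = str(uuid.uuid4())
    _jobs[job_id] = {"status": "running", "results": []}

    def run():
        try:
            for ex in examples:
                _jobs[job_id]["results"].append(annotate_example(ex))
            _jobs[job_id]["status"] = "done"
        except Exception as err:
            _jobs[job_id].update(status="failed", error=str(err))

    _executor.submit(run)
    return job_id

def get_eval_job(job_id: str) -> dict:
    """Poll a job's status and whatever results have accumulated so far."""
    return _jobs[job_id]
```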
-
I have downloaded the model weights on my computer, but I don't know how to use local LLMs and embeddings with ragas.
Here is my code, but it didn't work:
```
import typing as t
import asyncio
from ty…
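# --- Separate sketch (not a completion of the code above): one way to wire a
# --- local LLM and local embeddings into ragas via its LangChain wrappers.
# --- The Ollama backend and the model names are assumptions for illustration only.
from ragas import evaluate
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings

local_llm = LangchainLLMWrapper(ChatOllama(model="llama3"))                          # locally served chat model
local_emb = LangchainEmbeddingsWrapper(OllamaEmbeddings(model="nomic-embed-text"))   # local embedding model

# Pass the wrappers to evaluate() so no hosted API is needed, e.g.:
# result = evaluate(dataset, metrics=[faithfulness], llm=local_llm, embeddings=local_emb)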
-
- [ ] [[2303.16634] G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment](https://arxiv.org/abs/2303.16634)
# [2303.16634] G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment
…
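For reference, a tiny sketch of the probability-weighted scoring trick G-Eval describes: instead of keeping only the single rating token the judge emits, weight each possible rating by the probability the judge model assigns to it (read from its logprobs) and take the expectation. The input format below is an assumption for illustration:
```
def g_eval_score(score_token_probs):
    """Expected rating over the judge's score tokens, e.g. keys 1..5 mapped to
    the probability the model assigns to emitting that rating."""
    total = sum(score_token_probs.values())
    return sum(score * p for score, p in score_token_probs.items()) / total

# Probabilities read off the judge's logprobs for the tokens "1".."5".
print(g_eval_score({1: 0.02, 2: 0.08, 3: 0.15, 4: 0.45, 5: 0.30}))  # -> 3.93
```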
-
Here is my code:
```
import typing as t
import asyncio
from typing import List
from datasets import load_dataset, load_from_disk
from ragas.metrics import faithfulness, context_recall, context_p…