-
# My code is:
from datasets import Dataset
from ragas.llms import LangchainLLMWrapper
from langchain_community.embeddings import SparkLLMTextEmbeddings, HuggingFaceEmbeddings
#from langchain_commu…
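A minimal sketch of how the Hugging Face embeddings can be handed to ragas, assuming ragas 0.1.x-style wrappers; the model name below is a placeholder, not the one from the original snippet:
```
from langchain_community.embeddings import HuggingFaceEmbeddings
from ragas.embeddings import LangchainEmbeddingsWrapper

# Placeholder sentence-transformers model; substitute the model from the original setup.
hf_embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# ragas expects its own embeddings interface, so wrap the LangChain object
# and pass it as evaluate(..., embeddings=ragas_embeddings).
ragas_embeddings = LangchainEmbeddingsWrapper(hf_embeddings)
```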
-
[ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question.
**Facing error when using LangChain-wrapped Hugging Face models**
I am …
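Since the rest of the report is cut off, here is only a hedged sketch of one common way a local Hugging Face model is wrapped behind LangChain for ragas; the model id and task are placeholder assumptions, not details from the report:
```
from langchain_community.llms import HuggingFacePipeline
from ragas.llms import LangchainLLMWrapper

# Placeholder model id and task; swap in the model from the original setup.
hf_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 256},
)

# ragas metrics expect a ragas LLM object, so wrap the LangChain LLM
# and pass it as evaluate(..., llm=ragas_llm).
ragas_llm = LangchainLLMWrapper(hf_llm)
```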
-
[ ] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug.
**Describe the bug**
I am using LangChain for my agent. I have been able to implem…
-
My code:
```
import typing as t
import asyncio
from typing import List
from datasets import load_dataset, load_from_disk
from ragas.metrics import faithfulness, context_recall, context_precisi…
-
[ ] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug.
Hi experts,
I use answer_correctness as the metric, but it failed due to **A…
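For anyone hitting the same thing: answer_correctness needs both a judge LLM and embeddings (it combines a factuality check with semantic similarity), so a minimal, hedged sketch of a call that at least runs is shown below; the data, the ragas 0.1.x-style column names, and the default OpenAI backend are all assumptions, not details from this report:
```
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_correctness

# Toy row; answer_correctness compares "answer" against "ground_truth".
ds = Dataset.from_dict({
    "question": ["When was the ragas project started?"],
    "answer": ["I am not sure."],
    "contexts": [["ragas is an open-source RAG evaluation library."]],
    "ground_truth": ["The question cannot be answered from the given context."],
})

# By default evaluate() uses the configured OpenAI models; pass llm=/embeddings=
# explicitly to use a wrapped local model instead.
result = evaluate(ds, metrics=[answer_correctness])
print(result)
```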
-
When I run `sh train.sh pre_train.py` on 8 GPUs, the following appears:
Saving model checkpoint to ./model_save/pre/tmp-checkpoint-50
Configuration saved in ./model_save/pre/tmp-checkpoint-50/config.json
Configuration saved …
-
Hey @SuvodipDey,
How should I generate the input_file.json to evaluate my model? I want to use a direct Hugging Face path like `meta-llama/Llama-3.1-8B`.
Maybe @yxc-cyber has the scripts for g…
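For the model-loading half of the question, a minimal sketch of using the direct Hugging Face path with transformers; the input_file.json schema is not shown anywhere above, so it is left out here:
```
from transformers import AutoModelForCausalLM, AutoTokenizer

# Direct Hugging Face hub path, as mentioned above.
model_id = "meta-llama/Llama-3.1-8B"

# Loading by hub path; gated models like Llama 3.1 also require `huggingface-cli login`,
# and device_map="auto" needs the accelerate package installed.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```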
-
Here is my code:
```
import typing as t
import asyncio
from typing import List
from datasets import load_dataset, load_from_disk
from ragas.metrics import faithfulness, context_recall, context_p…
-
Estimate key LLM metrics (see the sketch after this list):
- Overall quality score, accuracy
- Hallucination rate (hallucination detection)
- Relevancy
- Coherence
- Responsible AI violations
- Safety
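A hedged sketch of how part of this list maps onto off-the-shelf ragas metrics: faithfulness as a hallucination signal, answer_relevancy for relevancy, answer_correctness for accuracy. Coherence, responsible-AI violations, and safety are not covered by these three and would need separate checks; the example row is invented:
```
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, answer_correctness

# Toy example row; a real evaluation would iterate over the full test set.
ds = Dataset.from_dict({
    "question": ["What does the warranty cover?"],
    "answer": ["The warranty covers manufacturing defects for two years."],
    "contexts": [["The warranty covers manufacturing defects for a period of two years."]],
    "ground_truth": ["Manufacturing defects are covered for two years."],
})

# Requires a configured judge LLM and embeddings (OpenAI by default,
# or pass llm=/embeddings= explicitly).
scores = evaluate(
    ds,
    metrics=[
        faithfulness,        # low faithfulness ~ hallucination signal
        answer_relevancy,    # relevancy of the answer to the question
        answer_correctness,  # accuracy against the reference answer
    ],
)
print(scores)
```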
-
# Project Overview
This project focuses on designing a system that uses Retrieval-Augmented Generation (RAG) to create personalized summaries of memoirs and life stories. The system will generate e…
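A very rough sketch of the retrieve-then-summarize flow described above, under assumed tooling (sentence-transformers for retrieval, a plain prompt string standing in for the generation step); none of these choices come from the overview itself:
```
import numpy as np
from sentence_transformers import SentenceTransformer

# Memoir passages stand in for the indexed life-story corpus.
passages = [
    "In 1968 the family moved to the coast and opened a small bakery.",
    "Grandfather served as a radio operator during his navy years.",
    "Every summer was spent at the lake cabin with cousins.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
passage_vecs = encoder.encode(passages, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list:
    """Return the k passages most similar to the query (cosine similarity)."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = passage_vecs @ q
    top = np.argsort(scores)[::-1][:k]
    return [passages[i] for i in top]

# The retrieved passages would then go into an LLM prompt that produces the
# personalized summary; the generation step is out of scope for this sketch.
context = retrieve("What did grandfather do during the war?")
prompt = "Summarize these memoir excerpts for the reader:\n" + "\n".join(context)
print(prompt)
```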