Open Manel-Hik opened 3 weeks ago
Hey @Manel-Hik, the issue is that you are wrapping the LLMs twice. `TestsetGenerator.from_langchain` already wraps LangChain models internally, so pass the plain LangChain objects directly:
```python
from langchain_huggingface import HuggingFaceEndpointEmbeddings
from langchain_openai import OpenAI

from ragas.metrics import (
    context_precision,
    faithfulness,
    context_recall,
)
from ragas.metrics.critique import harmfulness
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context

embedding_url = "my_embedding_url"
embeddings = HuggingFaceEndpointEmbeddings(
    model=embedding_url,
    huggingfacehub_api_token="my_token",
)

# TGI exposes an OpenAI-compatible API, so the LangChain OpenAI client
# can point straight at the hosted endpoint
hf_model_url = "my_model_url"
llm = OpenAI(
    base_url=hf_model_url,
    api_key="nokey",
    top_p=0.9,
)

# ragas expects a "filename" key in each document's metadata
for document in docs:
    document.metadata["filename"] = document.metadata["source"]

generator = TestsetGenerator.from_langchain(
    generator_llm=llm,
    critic_llm=llm,
    embeddings=embeddings,
)

testset = generator.generate_with_langchain_docs(
    docs[:9],
    test_size=10,
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},
    raise_exceptions=False,
)
```
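As a side note on the `distributions` argument: the shares must sum to 1.0, and each evolution type receives roughly its share of `test_size`. The sketch below illustrates that split in plain Python; it is an illustration only, not ragas's internal allocation code, and the string keys stand in for the `simple`/`reasoning`/`multi_context` evolution objects.

```python
# Rough sketch of how test_size is divided across evolution types.
# Illustrative only -- NOT ragas's internal logic.
test_size = 10
distributions = {"simple": 0.5, "reasoning": 0.25, "multi_context": 0.25}

# The shares must sum to 1.0, or ragas will reject the distribution.
assert abs(sum(distributions.values()) - 1.0) < 1e-9

counts = {name: int(test_size * share) for name, share in distributions.items()}
print(counts)  # {'simple': 5, 'reasoning': 2, 'multi_context': 2}
```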
Can you try this?
- [ ] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug: I'm trying to generate a test set from a local PDF document, using my llama3 model from HF (already hosted through TGI) and an embedding model also from HF.
Code to Reproduce
Error trace
Ragas version: 0.1.9, Python version: 3.12.2, langchain-core: 0.2.5
Could you help me with this, please?
Thanks in advance.