Senthselvi opened this issue 2 months ago
Hey @Senthselvi! I'm here to help you with any issues you have. I can help you solve bugs, answer questions, and even become a contributor. Let me know how I can assist you!
The correct code for importing and using the evaluate function from ragas.integrations.llama_index is:
from ragas.integrations.llama_index import evaluate
from llama_index.embeddings.openai import OpenAIEmbedding  # needed below (llama-index >= 0.10 import path)

# Example usage
result = evaluate(
    query_engine=query_engine,
    metrics=metrics,
    dataset=ds_dict,
    llm=evaluator_llm,
    embeddings=OpenAIEmbedding(),
)
This code snippet imports the evaluate function and demonstrates how to call it with the necessary parameters: query_engine, metrics, dataset, llm, and embeddings [1][2].
To continue talking to Dosu, mention @dosu.
What is the correct code?
You can import evaluate directly from the ragas library:
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

result = evaluate(
    dataset,
    metrics=[faithfulness, answer_relevancy],
    llm=llm_model,
    embeddings=embeddings,
    raise_exceptions=False,
)
result.to_pandas()
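For reference, the dataset argument here is a Hugging Face Dataset with question, answer, contexts, and ground_truth columns; a minimal sketch (the sample rows are illustrative, not from the thread):

from datasets import Dataset

dataset = Dataset.from_dict({
    "question": ["What is the capital of India?"],
    "answer": ["Delhi"],
    "contexts": [["Delhi is the capital of India."]],  # list of context lists
    "ground_truth": ["Delhi"],
})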
I am getting the below error, please help:

  in embedding_factory
    openai_embeddings = OpenAIEmbeddings()
    ^^^^^^^^^^^^^^^^^^
  File "C:\TestingBot\Scripts\myenv\Lib\site-packages\ragas\embeddings\base.py", line 38, in __init__
    super(BaseOpenAIEmbeddings, self).__init__(openai_api_key=openai_api_key)
  File "C:\TestingBot\Scripts\myenv\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for OpenAIEmbeddings
openai_api_key
  extra fields not permitted (type=value_error.extra)
You are probably missing the OpenAI API key declaration:

import os
os.environ["OPENAI_API_KEY"] = "your-openai-key"

You can refer to this: https://github.com/rajshah4/LLM-Evaluation/blob/main/ragas_quickstart.ipynb
resolved, thanks
Can you share code that uses a base URL and bearer token instead of OPENAI_API_KEY?
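One possible approach (a sketch, assuming an OpenAI-compatible gateway and the langchain-openai package; the endpoint URL, model names, and token below are placeholders):

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper

# The api_key is sent as an "Authorization: Bearer <token>" header,
# so a gateway bearer token can go here instead of an OpenAI key.
llm = ChatOpenAI(
    model="gpt-4o-mini",                             # placeholder model
    base_url="https://your-gateway.example.com/v1",  # placeholder endpoint
    api_key="your-bearer-token",
)
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-small",
    base_url="https://your-gateway.example.com/v1",
    api_key="your-bearer-token",
)

# Wrap for ragas and pass as llm=/embeddings= to evaluate().
evaluator_llm = LangchainLLMWrapper(llm)
evaluator_embeddings = LangchainEmbeddingsWrapper(embeddings)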
What are the correct versions of Python, ragas, and llama-index?
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision

This throws an error in the editor; no results.
Does ragas evaluate custom Ollama models?
Hey, this code throws the following error:

Dataset preview: Dataset({
    features: ['question', 'answer', 'ground_truth', 'contexts'],
    num_rows: 2
})
Dataset after adding ground_truth: Dataset({
    features: ['question', 'answer', 'ground_truth', 'contexts'],
    num_rows: 2
})
Evaluating:   0%| | 0/4 [00:00<?, ?it/s]
Exception raised in Job[0]: TypeError(Object of type PromptValue is not JSON serializable)
An error occurred: Object of type PromptValue is not JSON serializable
Exception raised in Job[2]: TypeError(Object of type PromptValue is not JSON serializable)

I am using ragas with a custom LLM and embeddings. Can you help?
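A common cause (an assumption here, since the custom LLM wrapper isn't shown): ragas hands the prompt to the LLM as a PromptValue object, and dropping that object straight into a JSON request body fails to serialize. Converting it to plain text first, along the lines of this sketch, avoids the error:

# Hypothetical helper for a custom LLM wrapper: convert the prompt
# ragas passes in to a plain string before building the JSON payload.
def prompt_to_text(prompt) -> str:
    # Both ragas's and langchain's PromptValue expose to_string();
    # fall back to str() for raw strings.
    if hasattr(prompt, "to_string"):
        return prompt.to_string()
    return str(prompt)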
# Requires: from typing import List, and import httpx
def embed(self, prompts: List[str]) -> List[List[float]]:
    embeddings = []
    for prompt in prompts:
        payload = {"model": self.model_name, "prompt": prompt}
        embed_url = f"{self.base_url}/api/embeddings"
        try:
            with self._get_client() as client:
                response = client.post(embed_url, json=payload)
                response.raise_for_status()
                response_json = response.json()
                # Accept either key, since some servers return
                # "embedding" and others "embeddings".
                if "embedding" in response_json or "embeddings" in response_json:
                    embedding = response_json.get(
                        "embedding", response_json.get("embeddings", [])
                    )
                    embeddings.append(embedding)
                else:
                    print("Unexpected response format:", response_json)
                    embeddings.append([])
        except httpx.RequestError as e:
            print(f"Request Error: {e}")
            embeddings.append([])
        except httpx.HTTPStatusError as e:
            print(f"HTTP Status Error: {e}")
            embeddings.append([])
    return embeddings
Any help?
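If the underlying goal is to run ragas against Ollama (per the earlier question in this thread), one route worth noting (a sketch, assuming the langchain-community package and a local Ollama server with these models pulled) is to wrap langchain's Ollama classes instead of hand-rolling an HTTP client:

from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper

# Wrap the Ollama chat model and embeddings so ragas can drive them;
# the model names are placeholders for whatever is pulled locally.
llm = LangchainLLMWrapper(ChatOllama(model="llama3"))
embedding_model = LangchainEmbeddingsWrapper(OllamaEmbeddings(model="nomic-embed-text"))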
import numpy as np
from uuid import uuid4
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, context_precision

data = {
    "question": ["What is the capital of India?", "What is the largest ocean on Earth?"],
    "answer": ["Delhi", "Pacific Ocean"],
    "ground_truth": ["Delhi", "Pacific Ocean"],  # flattened to strings
    "contexts": [["Delhi"], ["Earth"]],  # contexts must be a list of lists of strings
    "faithfulness": [np.nan, np.nan],
    "answer_relevancy": [np.nan, np.nan],
}

def evaluate_chat_performance(data, llm, embedding_model):
    dataset = Dataset.from_dict(data)
    print("Dataset preview:", dataset)

    # Note: as written this branch can never run, since it reads
    # dataset["ground_truth"] only when that column is missing.
    if "ground_truth" not in dataset.column_names:
        dataset = dataset.add_column(
            name="ground_truth",
            column=dataset["ground_truth"],
            new_fingerprint=str(uuid4()),
        )
    print("Dataset after adding ground_truth:", dataset)

    metrics = [faithfulness, context_precision]
    try:
        results = evaluate(
            dataset=dataset,
            metrics=metrics,
            llm=llm,
            embeddings=embedding_model,
            raise_exceptions=False,
        )
    except Exception as e:
        print("Evaluation failed:", e)
        return

    print(results.to_pandas())
    results.to_pandas().to_csv(r'C:\TestingBot\Scripts\V1\myfile.csv', sep=',')
    return results

# Call the function
print(evaluate_chat_performance(data, llm, embedding_model))
Guardrails validation failed on user request:

1 validation error for CallInputs
llm_output
  Input should be a valid string [type=string_type, input_value={'prompt_str': 'Given a q...nces: ""\nanalysis: \n'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.7/v/string_type

What is the solution?
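This error suggests a dict (ragas's prompt payload, with its prompt_str field) is being passed where Guardrails expects the LLM output as a string. A hedged sketch of the shape of the fix, assuming the guardrails-ai Guard.parse API and a placeholder rail spec:

import json
from guardrails import Guard

guard = Guard.from_rail("spec.rail")  # hypothetical rail spec

def validate_output(raw_output):
    # Guardrails expects llm_output as a str; serialize dicts first
    # instead of passing them through (the cause of the CallInputs error).
    if isinstance(raw_output, dict):
        raw_output = json.dumps(raw_output)
    return guard.parse(llm_output=raw_output)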
What is the correct code?