explodinggradients / ragas

Supercharge Your LLM Application Evaluations 🚀
https://docs.ragas.io
Apache License 2.0

Ragas using llamaindex #1245

Open Senthselvi opened 2 months ago

Senthselvi commented 2 months ago

Evaluation failed: 'CustomOllama' object has no attribute 'set_run_config'. What is the solution?

Ragas Version: 0.1.7

Code Examples

Define a simple dataset using Pandas DataFrame

import pandas as pd
from uuid import uuid4

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, context_precision, faithfulness

data = {
    'question': [
        'When was the first super bowl?',
        'Who won the most super bowls?',
    ],
    'answer': [
        'The first superbowl was held on Jan 15, 1967',
        'The most super bowls have been won by The New England Patriots',
    ],
    'contexts': [
        ['The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,'],
        ['The Green Bay Packers...Green Bay, Wisconsin.', 'The Packers compete...Football Conference'],
    ],
    'ground_truth': [
        'The first superbowl was held on January 15, 1967',
        'The New England Patriots have won the Super Bowl a record six times',
    ],
}

def evaluate_chat_performance(data, llm, embedding_model):
    dataset = Dataset.from_dict(data)
    print("Dataset preview:", dataset)

    # Keep a DataFrame copy of the raw data
    df = pd.DataFrame(data)

    if "ground_truth" not in dataset.column_names:
        dataset = dataset.add_column(
            name="ground_truth",
            column=data["ground_truth"],  # take the column from the source dict
            new_fingerprint=str(uuid4()),
        )

    print("Dataset after adding ground_truth:", dataset)

    metrics = [
        faithfulness, context_precision, answer_relevancy,
        # Using AnswerRelevancy instance
    ]

    try:
        results = evaluate(
            dataset=dataset,
            metrics=metrics,
            llm=llm,
            embeddings=embedding_model,
            raise_exceptions=False,
        )
    except Exception as e:
        print("Evaluation failed:", e)
        return

    print(results.to_pandas())
    results.to_pandas().to_csv(r'C:\TestingBot\Scripts\V1\myfile.csv', sep=',')
    return results
dosubot[bot] commented 2 months ago

To resolve the 'CustomOllama' object has no attribute 'set_run_config' error in Ragas version 0.1.7, you need to ensure that the CustomOllama class has a set_run_config method implemented. Here is an example of how you can add this method to your CustomOllama class:

from ragas.llms import BaseRagasLLM
from ragas.run_config import RunConfig

class CustomOllama(BaseRagasLLM):
    # Your existing implementation

    def set_run_config(self, run_config: RunConfig):
        # Store the run configuration so ragas can apply timeouts/retries
        self.run_config = run_config

Make sure to replace BaseRagasLLM with the appropriate base class if CustomOllama does not directly inherit from it. This should resolve the attribute error by ensuring that CustomOllama has the set_run_config method [1].
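
Note that a wrapper usually has to satisfy the whole BaseRagasLLM interface, not just set_run_config. Below is a minimal sketch of a full wrapper, assuming the ragas 0.1.x interface; the _call_ollama helper and the constructor arguments are hypothetical placeholders for your existing HTTP client code, not ragas APIs.

import typing as t

from langchain_core.outputs import Generation, LLMResult
from ragas.llms import BaseRagasLLM
from ragas.llms.prompt import PromptValue
from ragas.run_config import RunConfig


class CustomOllama(BaseRagasLLM):
    def __init__(self, model: str, base_url: str, **client_kwargs):
        self.model = model
        self.base_url = base_url
        self.client_kwargs = client_kwargs
        self.run_config = RunConfig()  # default until ragas calls set_run_config

    def set_run_config(self, run_config: RunConfig):
        self.run_config = run_config

    def generate_text(
        self,
        prompt: PromptValue,
        n: int = 1,
        temperature: float = 1e-8,
        stop: t.Optional[t.List[str]] = None,
        callbacks=None,
    ) -> LLMResult:
        # _call_ollama is a hypothetical helper wrapping your Ollama HTTP call
        text = self._call_ollama(prompt.to_string(), temperature=temperature, stop=stop)
        return LLMResult(generations=[[Generation(text=text)]])

    async def agenerate_text(
        self,
        prompt: PromptValue,
        n: int = 1,
        temperature: float = 1e-8,
        stop: t.Optional[t.List[str]] = None,
        callbacks=None,
    ) -> LLMResult:
        # Simple synchronous fallback; a real implementation would use an async client
        return self.generate_text(prompt, n, temperature, stop, callbacks)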

To continue talking to Dosu, mention @dosu.

Senthselvi commented 2 months ago

Now I get this error: TypeError: CustomOllama.generate_text() missing 1 required positional argument: 'messages'

dosubot[bot] commented 2 months ago

To resolve the TypeError: CustomOllama.generate_text() missing 1 required positional argument: 'messages', you need to make sure the call matches the method's signature. The generate_text method in the BaseRagasLLM class and its subclasses expects a prompt argument of type PromptValue, not a messages argument.

Here is an example of how you should call the generate_text method:

from ragas.llms.prompt import PromptValue

# Assuming you have an instance of a class that inherits from BaseRagasLLM
llm_instance = CustomOllama()

# Create a PromptValue instance (in ragas 0.1.x the field is prompt_str,
# so keyword construction is safer than a positional argument)
prompt_value = PromptValue(prompt_str="Your prompt text here")

# Call the generate_text method with the required argument
result = llm_instance.generate_text(prompt=prompt_value)

Make sure that prompt_value is an instance of PromptValue and that you pass it as the prompt argument to the generate_text method [1].
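
In practice this TypeError often means the custom generate_text was declared with a messages parameter that ragas never supplies; matching the base-class parameter names avoids it. A sketch of the expected signature, assuming ragas 0.1.x:

def generate_text(self, prompt, n=1, temperature=1e-8, stop=None, callbacks=None):
    # ragas passes a PromptValue as 'prompt'; there is no 'messages' argument
    ...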

To continue talking to Dosu, mention @dosu.

Senthselvi commented 1 month ago

TypeError: CustomOllama.generate() got an unexpected keyword argument 'n'
TypeError: CustomOllama.generate() got an unexpected keyword argument 'callbacks'

Define your custom models

llm = CustomOllama(
    model="llama3.1:8b",
    request_timeout=30.0,
    base_url=base_url,
    headers={"Authorization": f"Bearer {bearer_token}"},
)

embed_model = CustomEmbedding(
    model="mxbai-embed-large:latest",
    request_timeout=30.0,
    base_url=base_url,
    headers={"Authorization": f"Bearer {bearer_token}"},
)

data_samples = {
    'question': [
        'When was the first super bowl?',
        'Who won the most super bowls?',
    ],
    'answer': [
        'The first superbowl was held on Jan 15, 1967',
        'The most super bowls have been won by The New England Patriots',
    ],
    'contexts': [
        ['The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,'],
        ['The Green Bay Packers...Green Bay, Wisconsin.', 'The Packers compete...Football Conference'],
    ],
    'ground_truth': [
        'The first superbowl was held on January 15, 1967',
        'The New England Patriots have won the Super Bowl a record six times',
    ],
}

dataset = Dataset.from_dict(data_samples)


result = evaluate(
    dataset,
    metrics=[context_recall, context_precision, faithfulness],
    llm=llm,
    embeddings=embed_model,
    raise_exceptions=True,
)

df = result.to_pandas()
print(df.head())
df.to_csv("result.csv", index=False)

Senthselvi commented 1 month ago

What is the solution to this error: TypeError: CustomOllama.generate() got an unexpected keyword argument 'callbacks'?

dosubot[bot] commented 1 month ago

To resolve the TypeError: CustomOllama.generate() got an unexpected keyword argument 'callbacks' error, you need to ensure that the generate method in the CustomOllama class accepts the callbacks keyword argument.

Here is an example of how you might modify the CustomOllama class:

class CustomOllama(BaseRagasLLM):
    async def generate(self, prompt_value, n=1, temperature=None, stop=None, callbacks=None, is_async=True):
        # Your implementation here
        pass

Ensure that the generate method in CustomOllama matches the signature expected by the BaseRagasLLM class and its subclasses. This includes parameters like prompt_value, n, temperature, stop, callbacks, and is_async [1][2].
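
If tracking the exact keyword list across ragas versions feels fragile, one option is to accept extra keywords defensively. This is a sketch of that judgment call, not the documented signature:

class CustomOllama(BaseRagasLLM):
    async def generate(self, prompt, n=1, temperature=1e-8, stop=None,
                       callbacks=None, is_async=True, **kwargs):
        # **kwargs absorbs any additional keyword arguments ragas may pass,
        # so small signature changes between versions do not raise TypeError
        ...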

To continue talking to Dosu, mention @dosu.

jjmachan commented 1 month ago

@Senthselvi how did you define the llm and embedding originally? I think this could be a problem with wrapping the LLM.
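
For context, ragas ships wrappers for LangChain models, so an alternative to a hand-rolled CustomOllama is to wrap the LangChain Ollama integrations. A sketch, assuming langchain_community is installed and reusing the model names from earlier in the thread:

from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.llms import LangchainLLMWrapper

# Wrap the LangChain Ollama clients so they expose the interface ragas expects
llm = LangchainLLMWrapper(ChatOllama(model="llama3.1:8b", base_url=base_url))
embed_model = LangchainEmbeddingsWrapper(
    OllamaEmbeddings(model="mxbai-embed-large:latest", base_url=base_url)
)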