[Open] PavanMahajan777 opened 2 months ago
I also got NaN values when using the GPT metrics. Try switching from those metrics to the local custom metrics: go into your `example_config.json` file and replace `requested_metrics` with:

`"requested_metrics": ["groundedness", "relevance", "coherence", "answer_length", "latency"]`

This solution worked for me.
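For reference, the edit sketched above would sit inside `example_config.json` roughly like this (any other keys in that file are untouched; only the `requested_metrics` value comes from the comment above):

```json
{
  "requested_metrics": [
    "groundedness",
    "relevance",
    "coherence",
    "answer_length",
    "latency"
  ]
}
```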
```python
question = "What is the capital of France?"
context = "France is in Europe and its capital is Paris."
answer = "Paris is the capital of France."
truth = "Paris"

results = evaluate(
    target=wrap_target,
    data=testdata,
    task_type="qa",
    metrics_list=[
        "gpt_groundedness", "gpt_relevance", "gpt_coherence",
        "gpt_fluency", "gpt_similarity",
        "hate_unfairness", "sexual", "violence", "self_harm",
    ],
    model_config=model_config,
    data_mapping={
        "question": "question",
        "context": "context",
        "answer": "answer",
    },
    tracking=False,
    output_path="./",
)
```
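The snippet above references `wrap_target`, `testdata`, and `model_config`, which are defined elsewhere in the reporter's setup. As a rough sketch of the target contract, here is a minimal stand-in for `wrap_target` — the function name, signature, and return shape are assumptions for illustration, and the canned answer replaces a real model call:

```python
# Minimal stand-in for the `wrap_target` callable passed to evaluate().
# A real target would invoke a deployed model; here a hard-coded answer
# is returned so the output shape can be inspected locally.
def wrap_target(question: str, context: str = "") -> dict:
    # NOTE: hard-coded response purely for illustration.
    answer = "Paris is the capital of France."
    return {"question": question, "context": context, "answer": answer}

if __name__ == "__main__":
    out = wrap_target(
        "What is the capital of France?",
        context="France is in Europe and its capital is Paris.",
    )
    print(out["answer"])
```

The key point is that the target returns a dict whose keys line up with the `data_mapping` entries (`question`, `context`, `answer`) used in the `evaluate()` call.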
This issue is for a: (mark with an `x`)

Minimal steps to reproduce
Any log messages given by the failure
Expected/desired behavior
OS and Version?
Versions
Mention any other details that might be useful