Hi team, thanks for open-sourcing this awesome tool. I am new to it and have a few questions about LLM evaluation.
It seems `evaluate` already provides some evaluators (some libraries call them "tasks", I think). Can we use these evaluators for LLM evaluation?
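For context, here is roughly how I understand the existing evaluators are used (a minimal sketch with the question-answering evaluator; the model name and dataset split are just placeholders I picked, not a recommendation from the docs):

```python
from datasets import load_dataset
from evaluate import evaluator

# Existing task evaluator, using QA as an example
qa_evaluator = evaluator("question-answering")
data = load_dataset("squad", split="validation[:100]")  # small slice for a quick check

results = qa_evaluator.compute(
    model_or_pipeline="distilbert-base-cased-distilled-squad",  # placeholder model
    data=data,
    metric="squad",
)
print(results)
```

My question is whether this same pattern is the intended way to evaluate LLMs, or whether something else is recommended.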
I feel different tasks require different datasets, and for LLM evaluation there are popular datasets like MMLU. What I am really asking is: is there a tested pairing? For example, for QA, can I use dataset1 and dataset2 with metric1 and metric2, etc.? A sketch of what I mean is below.
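To make the question concrete, here is the kind of pairing I have in mind (a sketch of MMLU paired with the `accuracy` metric; the dataset name, subset, and the `predict_choice` helper are my own assumptions, not something I know the library recommends):

```python
import evaluate
from datasets import load_dataset

# Assumed pairing: MMLU (multiple choice) scored with the "accuracy" metric
mmlu = load_dataset("cais/mmlu", "abstract_algebra", split="test")
accuracy = evaluate.load("accuracy")

def predict_choice(question, choices):
    # Hypothetical placeholder for however the LLM picks an answer index
    return 0

preds = [predict_choice(ex["question"], ex["choices"]) for ex in mmlu]
refs = [ex["answer"] for ex in mmlu]
print(accuracy.compute(predictions=preds, references=refs))
```

Is there a list of such dataset/metric pairings that have already been tested for LLM evaluation?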