Closed by NimaBoscarino 2 years ago
Thanks for reporting! The `lewtun/autoevaluate__xxx` datasets were dummy ones from early testing and have now been removed. Closing this since the issue seems to be fixed, but feel free to re-open if you still have trouble :)
Hello! I am having the same issue when evaluating my model (AntoineBlanot/roberta-large-squadv2) on the squad_v2 dataset with the model-evaluator. How can this be fixed?
I’m trying to follow the original announcement blog post, but when I try to run an evaluation on a model I get the error in the screenshot. With some snooping, I see that during the filtering process `get_evaluation_infos` fetches all the `autoevaluate` datasets, and then crashes when the first one’s (`lewtun/autoevaluate__imdb`) `cardData` doesn’t have an `eval_info` property.
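For illustration, here is a minimal sketch of that failure mode and a defensive fix. The function name `get_evaluation_infos` and the `cardData` / `eval_info` keys come from the report above; everything else (the data shapes, the filtering logic) is an assumption for the sketch, not the actual model-evaluator source.

```python
# Hypothetical reconstruction of the crash: one dataset's cardData
# lacks an eval_info key, so unguarded access would raise KeyError.
datasets = [
    {"id": "lewtun/autoevaluate__imdb", "cardData": {}},  # dummy dataset, no eval_info
    {"id": "autoevaluate/example", "cardData": {"eval_info": {"task": "question-answering"}}},
]

def get_evaluation_infos(datasets):
    """Collect eval_info blocks, skipping datasets that lack one."""
    infos = []
    for ds in datasets:
        card = ds.get("cardData") or {}
        eval_info = card.get("eval_info")  # returns None instead of raising KeyError
        if eval_info is None:
            continue  # skip dummy/test datasets without evaluation metadata
        infos.append((ds["id"], eval_info))
    return infos

print(get_evaluation_infos(datasets))
```

With a guard like this, the dummy `lewtun/autoevaluate__*` datasets would simply be filtered out instead of crashing the whole evaluation run.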