Open isdeniz opened 2 months ago
I get the error below when trying to calculate performance metrics for a multi-class classification model.
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
There are 3 classes in the train and test sets. Here are the related code snippets:
from functools import partial

import sklearn.metrics

metrics_recom = {
    "accuracy": partial(calc, func=sklearn.metrics.accuracy_score),
    "p_micro": partial(calc, func=sklearn.metrics.precision_score, average='micro'),
    "p_macro": partial(calc, func=sklearn.metrics.precision_score, average='macro'),
    "p_w": partial(calc, func=sklearn.metrics.precision_score, average='weighted'),
    "r_micro": partial(calc, func=sklearn.metrics.recall_score, average='micro'),
    "r_macro": partial(calc, func=sklearn.metrics.recall_score, average='macro'),
    "r_w": partial(calc, func=sklearn.metrics.recall_score, average='weighted'),
    "f_micro": partial(calc, func=sklearn.metrics.f1_score, average='micro'),
    "f_macro": partial(calc, func=sklearn.metrics.f1_score, average='macro'),
    "f_w": partial(calc, func=sklearn.metrics.f1_score, average='weighted'),
    "classificationReport": partial(calc, func=sklearn.metrics.classification_report, output_dict=True),
}
results, model_outputs, wrong_pred = model.eval_model(test, verbose=True, **metrics_recom)
I am able to fine-tune the model and get predictions without any problem.
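For context, the error can be reproduced with scikit-learn alone, independent of simpletransformers. Since every metric in `metrics_recom` above already sets `average`, the `ValueError` presumably comes from a metric that `eval_model` computes internally with the default `average='binary'`. A minimal sketch with made-up three-class labels:

```python
# Minimal reproduction of the ValueError, independent of simpletransformers.
# y_true / y_pred are hypothetical three-class labels; sklearn raises the same
# error whenever the default average='binary' meets a multiclass target.
import sklearn.metrics

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 0]

try:
    sklearn.metrics.f1_score(y_true, y_pred)  # average='binary' by default
except ValueError as e:
    print(e)  # prints the same "Target is multiclass but average='binary'..." message

# Any of the multiclass-aware settings works fine:
print(sklearn.metrics.f1_score(y_true, y_pred, average='micro'))
print(sklearn.metrics.f1_score(y_true, y_pred, average='macro'))
print(sklearn.metrics.f1_score(y_true, y_pred, average='weighted'))
```

This suggests the custom metrics passed via `**metrics_recom` are not the problem, and points to the library version as the likely culprit (see the downgrade suggestion below).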
I faced the exact same problem, and downgrading to 0.64.3 fixed it. Can you try it this way?
pip install simpletransformers==0.64.3