sbintuitions / JMTEB

The evaluation scripts of JMTEB (Japanese Massive Text Embedding Benchmark)

[Question] Imbalanced data for classifiers in classification tasks #79

Open omihub777 opened 1 month ago

omihub777 commented 1 month ago

Thank you for all your hard work.

I've noticed that in your implementation of the classifiers in ClassificationEvaluator, classifiers such as LogisticRegression and kNN appear to be trained on the entire training dataset, even for extremely imbalanced data such as the amazon_counterfactual dataset, where about 90% of the labels are 0 (stats-ja). In the original MTEB, this issue is addressed by undersampling the training dataset to a balanced distribution before fitting LogisticRegression.
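For reference, a minimal sketch of the kind of class-balanced undersampling I mean (the function name and usage below are illustrative, not MTEB's or JMTEB's actual code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def undersample_to_balance(X, y, seed=42):
    """Downsample every class to the size of the smallest class.

    Illustrative sketch only; assumes X is a numpy array of embeddings
    and y an array-like of labels.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    n_per_class = counts.min()
    keep = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_per_class, replace=False)
        for c in classes
    ])
    rng.shuffle(keep)
    return X[keep], y[keep]

# Hypothetical usage on an imbalanced train split:
# X_bal, y_bal = undersample_to_balance(X_train, y_train)
# clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
```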

Could you elaborate on your design choice for training on the entire dataset? Are there specific reasons for this approach? If I am missing something, feel free to correct me. Thank you!

lsz05 commented 3 days ago

Thank you for your question.

Simply put, we don't have strong evidence that we should (or should not) undersample, so we chose the simpler option: do nothing about the label balance. Running without undersampling didn't cause any critical problems, so we didn't consider undersampling further.

Another point is that undersampling would greatly reduce the amount of training data. Since this step trains a statistical classifier, we prioritized keeping the full dataset over mitigating the risk of imbalanced labels.
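To make the trade-off concrete, a rough back-of-the-envelope illustration (the 90/10 split follows the amazon_counterfactual figure mentioned above; the absolute numbers are hypothetical):

```python
from collections import Counter

# Hypothetical label distribution resembling a ~90/10 imbalance.
labels = [0] * 9000 + [1] * 1000
counts = Counter(labels)

# Balanced undersampling keeps only min-class-size examples per class.
kept = min(counts.values()) * len(counts)
print(f"original: {len(labels)}, after undersampling: {kept}")
# -> original: 10000, after undersampling: 2000 (80% of the data discarded)
```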

If you have evidence that adjusting the label balance is necessary, could you share it with us?