kaidic / LDAM-DRW

[NeurIPS 2019] Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss
https://arxiv.org/pdf/1906.07413.pdf
MIT License

What if both the training set and the test set are imbalanced in the same way? #10

Open SeunghyunSEO opened 4 years ago

SeunghyunSEO commented 4 years ago

Hi kaidic,

First of all, thanks for sharing your code and paper.

I read the paper and ran the code, and I was impressed.

As I read it, the paper discusses two cases:

    1. The training set is imbalanced, the test set is not, and their distributions are different.
    2. The training set is also imbalanced, and the two distributions are different.

But as you know, real data is unfortunately even more imbalanced and challenging:

  • My data is imbalanced not only in the training set but in the test set too; both sets are imbalanced and follow the same distribution.
  • Some classes in my dataset even have only a single instance per class (extremely few).

So here is my question: do you think the LDAM-DRW loss would also work on such a dataset? I am running experiments varying the betas, the margins delta_j (m_list), and so on.
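For context, here is a rough NumPy sketch of how I understand the margins and the DRW class weights are computed in this repo (the margin m_j scaled by n_j^(-1/4), and the effective-number weighting from Cui et al. 2019). The class counts below are made up, with one single-instance class to mimic my data:

```python
import numpy as np

# Hypothetical class counts; the last class has only 1 instance (my extreme case).
cls_num_list = [5000, 500, 50, 1]

# LDAM margins: m_j proportional to n_j^(-1/4), rescaled so the largest margin is max_m.
max_m = 0.5
m_list = 1.0 / np.sqrt(np.sqrt(cls_num_list))
m_list = m_list * (max_m / np.max(m_list))
print("margins:", m_list)  # the rarest class gets the largest margin, max_m = 0.5

# DRW re-weighting via the "effective number" of samples:
# w_j = (1 - beta) / (1 - beta^n_j), then normalized to sum to the number of classes.
beta = 0.9999
effective_num = 1.0 - np.power(beta, cls_num_list)
per_cls_weights = (1.0 - beta) / effective_num
per_cls_weights = per_cls_weights / np.sum(per_cls_weights) * len(cls_num_list)
print("weights:", per_cls_weights)  # the single-instance class dominates the weights
```

With n_j = 1, 1 - beta^1 = 1 - beta, so that class gets weight 1 before normalization no matter what beta is, which is part of why I am unsure how to tune beta here.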

I would very much appreciate an answer! Thanks so much, kaidic :)

zzw-zwzhang commented 4 years ago

I am also wondering about an imbalanced test set: does LDAM-DRW still work effectively when it is evaluated on an imbalanced test set?

Raf-Chen commented 1 year ago

My data distribution is similar to yours! Have you found any hyperparameter settings that work? Thanks a lot! :)