SankhaSubhra / GAMO

Generative Adversarial Minority Oversampling
GNU General Public License v3.0

Baseline+CN #5

Open JingLiJJ opened 4 years ago

JingLiJJ commented 4 years ago

Thanks for your effort.

Could you please tell me the training strategy of baseline+CN on Fashion-MNIST? I assumed that baseline+CN follows end-to-end training (backbone network F + classifier M). However, I obtained an accuracy higher than the results in the paper, and it is even very close to the results of GAMO. I look forward to your reply.
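For concreteness, the end-to-end setup I am referring to looks roughly like the sketch below (PyTorch). The layer sizes, hyperparameters, and the use of the full balanced Fashion-MNIST training set here are placeholders of my own, not the exact architecture or imbalanced split from the paper or this repo:

```python
# Minimal end-to-end baseline+CN sketch: backbone F + classifier M trained
# jointly with cross-entropy loss. Architecture details are illustrative
# assumptions, not the exact configuration used in the GAMO paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class Backbone(nn.Module):           # F: image -> feature vector
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(64 * 7 * 7, feat_dim), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):         # M: feature vector -> class logits
    def __init__(self, feat_dim=64, n_classes=10):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_classes)
    def forward(self, z):
        return self.fc(z)

def train_baseline(epochs=10, batch_size=128, lr=1e-3, device="cpu"):
    # Full (balanced) Fashion-MNIST for brevity; the paper's experiments
    # would use an imbalanced subset instead.
    loader = DataLoader(
        datasets.FashionMNIST("./data", train=True, download=True,
                              transform=transforms.ToTensor()),
        batch_size=batch_size, shuffle=True)

    F, M = Backbone().to(device), Classifier().to(device)
    opt = torch.optim.Adam(list(F.parameters()) + list(M.parameters()), lr=lr)
    ce = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = ce(M(F(x)), y)    # single joint (end-to-end) objective
            opt.zero_grad()
            loss.backward()
            opt.step()
    return F, M
```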

TanmDL commented 3 years ago

Exactly. With the same architecture as mentioned in the paper, I got a model accuracy of 83.6 on the Fashion-MNIST dataset, better than the reported number. I did not even apply feature-space augmentation; a simple backbone network plus classifier M trained with cross-entropy loss gives better accuracy than the paper claims.

KAISER1997 commented 3 years ago

@TanmDL I am facing the same issue. Were you able to find a fix for this?

Shounak-D commented 3 years ago

@JingLiJJ, @TanmDL, @KAISER1997 There are a few reasons I can think of:

  1. The shuffling of the dataset and/or classes is different, resulting in a different imbalanced distribution than the one used for the experiments in the paper.
  2. The batch sizes are different from those used in the paper.
  3. This is less likely, but you may be measuring the overall accuracy instead of the ACSA (the average class-specific accuracy, i.e. the mean of the per-class accuracies, which is what the paper reports); see the short sketch at the end of this comment.

Unless it is the last point, I suspect that GAMO should still compare favorably to the baseline+CN numbers that you get in your setup.
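To make point 3 concrete, here is a small sketch of the difference between overall accuracy and ACSA. The label arrays are made up purely for illustration, not results from the paper or this repo:

```python
# Overall accuracy vs. ACSA (average class-specific accuracy, i.e. the mean
# of the per-class recalls). On imbalanced data the two can differ sharply.
import numpy as np

def overall_accuracy(y_true, y_pred):
    return np.mean(y_true == y_pred)

def acsa(y_true, y_pred):
    classes = np.unique(y_true)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return np.mean(per_class)

# Toy imbalanced example: 90 majority-class samples, 10 minority-class samples.
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.array([0] * 90 + [0] * 8 + [1] * 2)   # minority class mostly missed

print(overall_accuracy(y_true, y_pred))  # 0.92 -- looks high
print(acsa(y_true, y_pred))              # 0.60 -- exposes the minority-class errors
```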