JingLiJJ opened this issue 4 years ago
Exactly. With the same architecture as described in the paper, I obtained a better model accuracy of 83.6 on the Fashion-MNIST dataset, without even applying feature-space augmentation. A simple backbone network and classifier M trained with CE loss gives better accuracy than the paper claims.
@TanmDL I am facing the same issue. Were you able to find a fix for this?
@JingLiJJ, @TanmDL, @KAISER1997 There are a few reasons I can think of:
Unless it is the latter point, I suspect that GAMO should still compare favorably to the baseline+CN numbers that you get in your setup.
Thanks for your effort.
Could you please tell me the training strategy of baseline+CN on Fashion-MNIST? I assumed that baseline+CN follows end-to-end training (backbone network F + classifier M). However, I obtained an accuracy higher than the results in the paper, and it is even very close to the results of GAMO. Looking forward to your reply.
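For concreteness, here is a minimal sketch of what I mean by end-to-end training of a backbone F plus classifier M with cross-entropy loss. This is only an illustration on toy 2-class data with a tiny numpy MLP, not the actual GAMO codebase or its Fashion-MNIST architecture; all layer sizes and the data generation are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: two easily separable classes in 20-D.
X = np.concatenate([rng.normal(0.0, 1.0, (100, 20)),
                    rng.normal(1.5, 1.0, (100, 20))])
y = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])

# Backbone F: one linear layer + ReLU. Classifier M: linear layer + softmax.
W_f = rng.normal(0, 0.1, (20, 16)); b_f = np.zeros(16)
W_m = rng.normal(0, 0.1, (16, 2));  b_m = np.zeros(2)

def forward(X):
    h = np.maximum(X @ W_f + b_f, 0.0)            # backbone features F(x)
    logits = h @ W_m + b_m                        # classifier M
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
    return h, p

lr = 0.1
for step in range(200):
    h, p = forward(X)
    loss = -np.log(p[np.arange(len(y)), y]).mean()  # cross-entropy loss
    # Gradient of CE w.r.t. logits is (p - onehot) / N; backprop end-to-end
    # through M into F, so both are updated jointly.
    g = p.copy(); g[np.arange(len(y)), y] -= 1.0; g /= len(y)
    gW_m = h.T @ g; gb_m = g.sum(0)
    gh = g @ W_m.T; gh[h <= 0] = 0.0                # ReLU gradient
    gW_f = X.T @ gh; gb_f = gh.sum(0)
    W_m -= lr * gW_m; b_m -= lr * gb_m
    W_f -= lr * gW_f; b_f -= lr * gb_f

_, p = forward(X)
acc = (p.argmax(1) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

The point of the sketch is that F and M are optimized jointly by a single CE objective, which is the "baseline" setup being compared against GAMO above.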