Hi, Shicheng. I downloaded your code yesterday and found that the DANN, trained on both labeled MNIST and unlabeled MNIST-M, achieved the following accuracies on the test sets:
epoch: 99, accuracy of the mnist dataset: 0.987900
epoch: 99, accuracy of the mnist_m dataset: 0.907121
As reported in the original paper, the accuracy when transferring from MNIST to MNIST-M was 0.7666, while here I got 0.907121. Do you know the reason behind this? I checked the neural network architecture, and it is almost the same as in the paper. Thanks for your help in advance.
I'm not Shicheng. The only difference between the original paper and my implementation is the optimizer; I'm not sure whether there are other reasons that could lead to the mismatch.
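For reference, the paper (Ganin et al., 2016) trains with momentum SGD and anneals the learning rate as mu_p = mu_0 / (1 + alpha * p)^beta, with mu_0 = 0.01, alpha = 10, beta = 0.75, momentum 0.9, where p is the fraction of training completed. Below is a minimal sketch of that schedule next to a fixed-rate Adam as an illustrative alternative; the model and step count are placeholders, and which optimizer this repo actually uses is an assumption here, not something confirmed above:

```python
import torch.nn as nn
import torch.optim as optim

def paper_lr(p, mu_0=0.01, alpha=10.0, beta=0.75):
    # Learning-rate schedule from the DANN paper:
    # mu_p = mu_0 / (1 + alpha * p)^beta, with p in [0, 1]
    # the fraction of training completed.
    return mu_0 / (1.0 + alpha * p) ** beta

model = nn.Linear(784, 10)  # placeholder for the DANN feature extractor + heads
total_steps = 1000          # placeholder step count

# Paper-style optimizer: annealed SGD with momentum 0.9.
optimizer = optim.SGD(model.parameters(), lr=paper_lr(0.0), momentum=0.9)
for step in range(total_steps):
    p = step / total_steps
    for group in optimizer.param_groups:
        group["lr"] = paper_lr(p)  # anneal the learning rate each step
    # ... forward pass, backward pass, optimizer.step() would go here ...

# Versus a fixed-rate Adam, one common alternative (whether this is the
# optimizer the repo uses is an assumption for illustration):
# optimizer = optim.Adam(model.parameters(), lr=1e-3)
```

Swapping the annealed SGD for a fixed-rate adaptive optimizer changes the optimization dynamics quite a bit, so it is a plausible source of the accuracy gap.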