yuhui-zh15 opened 2 years ago
Thank you very much, this is a big help. I ran a test, and everything works correctly in this version:
https://github.com/yuhui-zh15/pytorch-adda
```
Epoch [1999/2000] Step [100/149]:d_loss=0.24804 g_loss=4.08300 acc=0.89000
Epoch [2000/2000] Step [100/149]:d_loss=0.24628 g_loss=4.73108 acc=0.89000
=== Evaluating classifier for encoded target domain ===
>>> source only <<<
Avg Loss = 1.1622806254186129, Avg Accuracy = 84.408605%
>>> domain adaption <<<
Avg Loss = 0.4525440482655924, Avg Accuracy = 97.634411%
```
I think the low adaptation accuracy is caused by the target encoder swapping the class labels. This makes sense because the task is unsupervised: the target encoder never sees the class labels.
I've used this code on 2D data: https://github.com/mashaan14/ADDA-toy
You can see in the attached image that the target encoder separates the classes well, but the class labels were swapped.
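One way to check whether low accuracy is just a label swap rather than a genuine separation failure is to find the label permutation that best aligns predictions with ground truth. Below is a minimal, hedged sketch of that diagnostic (the function name `best_label_mapping` is my own, not part of either repo); it uses a greedy pairing of the largest co-occurrence counts, which is enough to detect a simple swap. For a provably optimal assignment you would use the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`.

```python
import numpy as np

def best_label_mapping(preds, labels, n_classes):
    """Diagnose label permutation: map each predicted class to a true
    class so that total agreement is (greedily) maximized, then report
    the accuracy under that remapping."""
    # Co-occurrence counts: counts[p, t] = how often predicted class p
    # appears together with true class t.
    counts = np.zeros((n_classes, n_classes), dtype=np.int64)
    for p, t in zip(preds, labels):
        counts[p, t] += 1
    # Greedily assign the highest-count (pred, true) pairs first.
    mapping, used = {}, set()
    for idx in np.argsort(counts, axis=None)[::-1]:
        p, t = divmod(int(idx), n_classes)
        if p not in mapping and t not in used:
            mapping[p] = t
            used.add(t)
    remapped = np.array([mapping[p] for p in preds])
    acc = float((remapped == np.asarray(labels)).mean())
    return mapping, acc
```

If the remapped accuracy is high while the raw accuracy is low, the encoder separated the classes fine and only the label indices were permuted, which matches what the attached 2D plot shows.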
Thanks for contributing this repo, which is really nice to learn domain adaptation.
Just made some minimal code changes to support the latest PyTorch (>= 1.0) and Python (>= 3.6) (https://github.com/corenel/pytorch-adda/pull/29/commits/0f98f5de673d0842f484a6460f2367131c243aad).
Fixed the low adaptation accuracy (10%-15%) mentioned in #27 #26 #22 #15 #10 #8 #7 #1. The bug is caused by inconsistent normalization of MNIST and USPS: the data loader scales all MNIST images to [0, 1] but leaves all USPS images in [0, 255]. Scaling the latter to [0, 1] as well restores normal performance (https://github.com/corenel/pytorch-adda/pull/29/commits/13a295ab5c94a572854cdf0ffa5e63c65e209777):
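The idea of the fix can be sketched as follows; this is not the actual patched loader from the PR, just a minimal illustration (the helper name `to_unit_range` is my own) of putting both domains on the same pixel scale before they reach the encoders:

```python
import numpy as np

def to_unit_range(images):
    """Bring a batch of images to the [0, 1] range.

    Sketch of the normalization fix: MNIST arrives already in [0, 1]
    while USPS arrives in [0, 255]; feeding both domains to the
    encoders on different scales breaks adversarial adaptation.
    """
    images = np.asarray(images, dtype=np.float32)
    if images.max() > 1.0:  # heuristic: looks like 0-255 pixel data
        images = images / 255.0
    return images
```

The key point is simply that source and target batches must share one normalization convention; any mismatch gives the discriminator a trivial cue to tell the domains apart, so the adversarial signal never forces the target encoder to align with the source features.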