Closed: deep0learning closed this issue 5 years ago
In my code, the loss named l1_loss is the discrepancy loss.
Can you explain why we need this loss, given that all the domains have the same number of classes, i.e., 31 for the Office-31 dataset?
The classifiers are trained on different source domains, so they may disagree in their predictions for target samples, especially those near class boundaries. Intuitively, the same target sample should receive the same prediction from the different classifiers. (This is described in the paper.)
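For anyone following along, here is a minimal sketch of what an L1 discrepancy loss of this kind typically looks like; the function and variable names below are hypothetical illustrations, not taken from this repo's code. It penalizes the L1 distance between the probability outputs of two classifiers on the same target batch:

```python
import torch
import torch.nn.functional as F

def discrepancy_l1(logits_c1: torch.Tensor, logits_c2: torch.Tensor) -> torch.Tensor:
    """L1 discrepancy between the probability outputs of two classifiers
    on the same batch of target samples. A hypothetical sketch, not the
    repo's exact implementation."""
    probs_c1 = F.softmax(logits_c1, dim=1)
    probs_c2 = F.softmax(logits_c2, dim=1)
    # Mean absolute difference over classes and batch: small when the two
    # classifiers agree on the target samples, large for samples near class
    # boundaries where they disagree.
    return torch.mean(torch.abs(probs_c1 - probs_c2))

# Usage sketch: a target batch passes through a shared feature extractor,
# then through two source-specific classifier heads (31 classes each for
# Office-31); the loss pulls their predictions toward agreement.
# l1_loss = discrepancy_l1(classifier1(target_feats), classifier2(target_feats))
```

Minimizing this term drives the classifiers toward consistent predictions on target data, which is the alignment intuition described above.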
In your paper, you mention the discrepancy loss for aligning the classifiers, but I could not find that loss in your code. Did I miss it? Can you please explain?
Thank you in advance.