Closed: xiaoyong closed this issue 5 years ago.

Hi,

After pseudo-labels for the unlabeled target images are generated, you re-train the baseline source model jointly on the combined set of source and target images. However, the source images might not always be available. Did you try re-training on the pseudo-labeled target images only? What would the expected performance be?
@xiaoyong That's an interesting aspect. However, we did not do this -- we assumed that batches of perfectly-labeled source data and pseudo-labeled target data are available at training time. Because the pseudo-labeled target images will contain some mistakes/label noise, we expect some loss in performance if we do not also use batches of perfectly-labeled source data during training.
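For concreteness, here is a minimal sketch of what one such joint training step could look like in PyTorch. The function name, batch format, and the weighting between the two losses are illustrative assumptions, not our actual implementation:

```python
import torch.nn.functional as F

def joint_training_step(model, optimizer, source_batch, target_batch, target_weight=1.0):
    """One update on a perfectly-labeled source batch plus a pseudo-labeled target batch."""
    src_x, src_y = source_batch         # ground-truth source labels
    tgt_x, tgt_pseudo_y = target_batch  # pseudo-labels, which may be noisy

    optimizer.zero_grad()
    src_loss = F.cross_entropy(model(src_x), src_y)
    tgt_loss = F.cross_entropy(model(tgt_x), tgt_pseudo_y)
    # The clean source term anchors training against pseudo-label noise.
    loss = src_loss + target_weight * tgt_loss
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage, e.g. inside the epoch loop:
# for source_batch, target_batch in zip(source_loader, target_loader):
#     joint_training_step(model, optimizer, source_batch, target_batch)
```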
To deal with the situation where source images are not available, there is a cool and very straightforward technique from Derek Hoiem's group called "Learning without Forgetting", which you might want to have a look at: http://zli115.web.engr.illinois.edu/learning-without-forgetting/
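A rough sketch of an update in that spirit, assuming a frozen snapshot of the original source model provides soft targets for a distillation term that discourages drifting away from the source model while fitting the noisy pseudo-labels (the temperature and loss weight below are illustrative, not values from the paper):

```python
import torch
import torch.nn.functional as F

def lwf_style_step(model, old_model, optimizer, tgt_x, tgt_pseudo_y, T=2.0, distill_weight=1.0):
    """One target-only update: pseudo-label cross-entropy plus distillation to the old model."""
    with torch.no_grad():
        old_probs = F.softmax(old_model(tgt_x) / T, dim=1)  # soft targets from the frozen source model

    optimizer.zero_grad()
    logits = model(tgt_x)
    ce_loss = F.cross_entropy(logits, tgt_pseudo_y)
    # KL divergence between the new model's tempered predictions and the old model's;
    # the T*T factor is the standard distillation gradient rescaling.
    distill_loss = F.kl_div(F.log_softmax(logits / T, dim=1), old_probs,
                            reduction="batchmean") * T * T
    loss = ce_loss + distill_weight * distill_loss
    loss.backward()
    optimizer.step()
    return loss.item()

# old_model would be a frozen copy taken before adaptation, e.g.:
# old_model = copy.deepcopy(model).eval()
# for p in old_model.parameters():
#     p.requires_grad_(False)
```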
I see. Thanks for the info.