
Domain-Adversarial Training of Neural Networks #6

Open leo-p opened 7 years ago

leo-p commented 7 years ago

https://arxiv.org/pdf/1505.07818.pdf

We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for a descriptor learning task in the context of a person re-identification application.

leo-p commented 7 years ago

Summary:

Architecture:

The basic idea behind this paper is to take a standard classifier network and choose one layer whose output will serve as the feature representation. The part of the network before this layer is called the Feature Extractor, and the part after it the Label Predictor. A new network, the Domain Classifier, is then introduced: it takes the extracted feature as input, and its objective is to tell whether a computed feature embedding came from an image in the source or the target dataset.
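As a rough illustration (a minimal PyTorch sketch with made-up layer sizes for a flattened 28x28 input; the paper's actual architectures differ per benchmark), the three parts might look like:

```python
import torch.nn as nn

# Hypothetical layer sizes; the paper uses different architectures per task.
feature_extractor = nn.Sequential(   # everything up to the chosen layer
    nn.Linear(784, 256),
    nn.ReLU(),
)
label_predictor = nn.Sequential(     # class logits for the source task
    nn.Linear(256, 10),
)
domain_classifier = nn.Sequential(   # source-vs-target logits
    nn.Linear(256, 2),
)
```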

During training, the aim is to minimize the loss of the Label Predictor while maximizing the loss of the Domain Classifier. In theory we should end up with a feature embedding where the discriminator can't tell whether an image came from the source or the target domain, thus eliminating the domain shift.
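Schematically, in the paper's notation (with θ_f, θ_y, θ_d the parameters of the feature extractor, label predictor, and domain classifier, and λ weighting the domain loss), the two goals combine into a single saddle-point objective:

```latex
E(\theta_f, \theta_y, \theta_d) = L_y(\theta_f, \theta_y) - \lambda \, L_d(\theta_f, \theta_d)

(\hat{\theta}_f, \hat{\theta}_y) = \arg\min_{\theta_f, \theta_y} E(\theta_f, \theta_y, \hat{\theta}_d)
\qquad
\hat{\theta}_d = \arg\max_{\theta_d} E(\hat{\theta}_f, \hat{\theta}_y, \theta_d)
```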

To maximize the domain loss, a new layer is introduced: the Gradient Reversal Layer, which acts as the identity during the forward pass but negates (and optionally scales by λ) the gradient during back-propagation. This lets the whole network be trained end-to-end with standard gradient descent algorithms.
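A minimal sketch of such a layer, assuming PyTorch (my framework choice, not the paper's), as a custom autograd function:

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; negates and scales the gradient."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the feature extractor.
        return grad_output.neg() * ctx.lambd, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```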

What is interesting with this approach is that almost any initial network can be used, by simply adding a small set of new layers for the domain classifier. Below is the generic architecture.

*(Figure: the generic architecture — a feature extractor feeding a label predictor, with a domain classifier attached through the gradient reversal layer.)*
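Putting the pieces together, a hypothetical single training step could look like this (reusing the modules and `grad_reverse` sketched above; batch handling and hyperparameters are illustrative, not the paper's):

```python
import torch
import torch.nn.functional as F

opt = torch.optim.SGD(
    list(feature_extractor.parameters())
    + list(label_predictor.parameters())
    + list(domain_classifier.parameters()),
    lr=0.01,
)

def train_step(x_src, y_src, x_tgt, lambd=1.0):
    # Note: the paper anneals lambd from 0 towards 1 over training.
    opt.zero_grad()

    # Label loss: source samples only (target labels are unavailable).
    f_src = feature_extractor(x_src)
    label_loss = F.cross_entropy(label_predictor(f_src), y_src)

    # Domain loss: both domains, with the gradient reversed so the
    # feature extractor is pushed towards domain-invariant features.
    f_all = torch.cat([f_src, feature_extractor(x_tgt)], dim=0)
    d_labels = torch.cat([
        torch.zeros(len(x_src), dtype=torch.long),   # 0 = source
        torch.ones(len(x_tgt), dtype=torch.long),    # 1 = target
    ])
    domain_loss = F.cross_entropy(
        domain_classifier(grad_reverse(f_all, lambd)), d_labels
    )

    (label_loss + domain_loss).backward()
    opt.step()
```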

Results:

Their approach works, but on some domain-adaptation pairs it completely fails, and overall its performance is not great. The state of the art has since moved on; see DANN combined with GAN, or ADDA.