leo-p / papers

Papers and their summary (in issue)

Generate To Adapt - Aligning Domains using Generative Adversarial Networks #22

Open leo-p opened 7 years ago

leo-p commented 7 years ago

https://arxiv.org/pdf/1704.01705v1.pdf

Visual Domain adaptation is an actively researched problem in Computer Vision. In this work, we propose an approach that leverages unsupervised data to bring the source and target distributions closer in a learned joint feature space. We accomplish this by inducing a symbiotic relationship between the learned embedding and a generative adversarial framework. This is in contrast to methods which use an adversarial framework for realistic data generation and retraining deep models with such data. We show the strength and generality of our method by performing experiments on three different tasks: (1) Digit classification (MNIST, SVHN and USPS datasets) (2) Object recognition using OFFICE dataset and (3) Face recognition using the Celebrity Frontal Profile (CFP) dataset.

leo-p commented 7 years ago

Summary:

Architecture:

The overall network is composed of four sub-networks:

  1. F, the Feature embedding network, which takes as input an image from either the source or target dataset and generates a feature vector.
  2. C, the Classifier network, which classifies images coming from the source dataset.
  3. G, the Generative network, which learns to generate an image similar to the source dataset using an image embedding from F and a random noise vector.
  4. D, the Discriminator network, which tries to guess whether an image comes from the source dataset or was produced by G.

G and D play a minimax game where D tries to classify the generated samples as fake and G tries to fool D by producing examples that are as realistic as possible.
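To make the four sub-networks and the minimax objective concrete, here is a minimal numpy sketch. All dimensions, the single-linear-layer "networks", and the batch are toy assumptions for illustration, not the paper's actual architecture; only the wiring (F feeds both C and G, D scores source vs. generated images, and the standard GAN losses) follows the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not taken from the paper).
IMG_DIM, EMB_DIM, NOISE_DIM, N_CLASSES = 64, 16, 8, 10

# Each sub-network is sketched as one random linear map + nonlinearity.
W_F = rng.normal(size=(IMG_DIM, EMB_DIM)) * 0.1              # F: image -> embedding
W_C = rng.normal(size=(EMB_DIM, N_CLASSES)) * 0.1            # C: embedding -> class probs
W_G = rng.normal(size=(EMB_DIM + NOISE_DIM, IMG_DIM)) * 0.1  # G: (embedding, noise) -> image
W_D = rng.normal(size=(IMG_DIM, 1)) * 0.1                    # D: image -> P(real/source)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def F(x):  # feature embedding network (shared by source and target images)
    return np.tanh(x @ W_F)

def C(f):  # classifier head, used only on source embeddings (softmax)
    s = f @ W_C
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def G(f, z):  # generator conditioned on the embedding plus a noise vector
    return np.tanh(np.concatenate([f, z], axis=1) @ W_G)

def D(x):  # discriminator: probability that the image is a real source image
    return sigmoid(x @ W_D)

# One forward pass of the minimax objective on a toy batch.
x_src = rng.normal(size=(4, IMG_DIM))        # stand-in for source images
z = rng.normal(size=(4, NOISE_DIM))
x_fake = G(F(x_src), z)                      # G builds images from F's embeddings

p_real, p_fake = D(x_src), D(x_fake)
d_loss = -np.log(p_real).mean() - np.log(1.0 - p_fake).mean()  # D: real -> 1, fake -> 0
g_loss = -np.log(p_fake).mean()                                # G: fool D into fake -> 1
probs = C(F(x_src))                                            # classification on source
```

In training, D's parameters would be updated to decrease `d_loss` while G (and F, since G is conditioned on F's embeddings) are updated to decrease `g_loss` plus the classification loss from C, which is what couples the adversarial game to the learned feature space.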

The scheme for training the network is the following:

*(Figure: training scheme diagram — screenshot from the paper.)*

Results:

Very interesting: the generated image is just a side-product, but the overall approach appears to be state-of-the-art at the time of writing (the paper was published one week ago).