leo-p / papers

Papers and their summary (in issue)

Domain Adaptation with Randomized Multilinear Adversarial Networks #29

Open leo-p opened 7 years ago

leo-p commented 7 years ago

https://arxiv.org/pdf/1705.10667.pdf

Adversarial learning has been successfully embedded into deep networks to learn transferable features for domain adaptation, which reduce distribution discrepancy between the source and target domains and improve generalization performance. Prior domain adversarial adaptation methods could not align complex multimode distributions since the discriminative structures and inter-layer interactions across multiple domain-specific layers have not been exploited for distribution alignment. In this paper, we present randomized multilinear adversarial networks (RMAN), which exploit multiple feature layers and the classifier layer based on a randomized multilinear adversary to enable both deep and discriminative adversarial adaptation. The learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments demonstrate that our models exceed the state-of-the-art results on standard domain adaptation datasets.

leo-p commented 7 years ago

Summary:

Inner-workings:

Essentially an improvement on RevGrad: instead of feeding only the last embedding layer to the domain discriminator, several feature layers (plus the classifier output) are combined. To avoid the dimension explosion of taking the tensor product of all these layers, they use a randomized multi-linear representation instead:

T_⊙(z) = (1/√d) (R_1 z¹) ⊙ (R_2 z²) ⊙ … ⊙ (R_|L| z^|L|)

Where:

- z^l is the feature vector of the l-th selected layer (the last one being the classifier output),
- R_l are random matrices, sampled once and kept fixed during training,
- d is the dimension of the resulting representation, much smaller than the product of the layer dimensions,
- ⊙ denotes the element-wise (Hadamard) product.
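The randomized map above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code; the function name and dimensions are made up, and in a real network the random matrices R_l would be sampled once at construction time and reused at every step (here a fixed seed stands in for that):

```python
import numpy as np

def randomized_multilinear_map(features, d=512, seed=0):
    """Approximate the tensor product of several feature vectors with a
    d-dimensional randomized multilinear map:
        T(z) = (1/sqrt(d)) * (R_1 z_1) * (R_2 z_2) * ...  (element-wise)
    The random matrices R_l are drawn once (fixed seed) and never trained."""
    rng = np.random.default_rng(seed)
    out = np.ones(d) / np.sqrt(d)
    for z in features:
        R = rng.standard_normal((d, z.shape[0]))  # fixed random projection
        out = out * (R @ z)                       # element-wise product
    return out

# Two "layers": a 256-d embedding and a 10-d classifier output (toy sizes).
f = np.random.randn(256)
g = np.random.randn(10)
phi = randomized_multilinear_map([f, g], d=512)
print(phi.shape)  # (512,)
```

The point is that the output dimension stays at d regardless of how many layers are combined, whereas the full tensor product here would be 256 × 10 = 2560-dimensional and grows multiplicatively with each added layer.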

Architecture:

(Figure: RMAN architecture — the selected feature layers and the classifier predictions are combined by the randomized multilinear map, which feeds the domain discriminator.)

They use the usual losses for domain adaptation: the cross-entropy classification loss on labeled source samples, plus the adversarial domain-discrimination loss (trained with gradient reversal), here applied to the randomized multilinear representation rather than to a single layer.
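A toy sketch of how the two losses combine, assuming the gradient-reversal trick from RevGrad (all names and numbers below are illustrative, not from the paper):

```python
import numpy as np

def cross_entropy(p, y):
    # mean negative log-likelihood of the true classes
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

# Toy batch: source classification probabilities and domain-discriminator
# probabilities computed on the randomized multilinear features.
cls_probs = np.array([[0.9, 0.1],
                      [0.2, 0.8]])
cls_labels = np.array([0, 1])          # source class labels
dom_probs = np.array([[0.6, 0.4],      # a source sample
                      [0.45, 0.55]])   # a target sample
dom_labels = np.array([0, 1])          # 0 = source, 1 = target

lam = 0.1  # trade-off hyperparameter between the two losses
cls_loss = cross_entropy(cls_probs, cls_labels)
dom_loss = cross_entropy(dom_probs, dom_labels)

# The discriminator minimizes dom_loss; because the gradient reversal layer
# negates the domain gradient, the feature extractor effectively minimizes:
feature_objective = cls_loss - lam * dom_loss
print(feature_objective)
```

In other words a single backward pass optimizes both players of the adversarial game, which is why the whole thing trains with plain SGD in linear time, as the abstract claims.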

Results:

Improves on state-of-the-art results for most tasks on the standard domain adaptation benchmarks, and is very easy to implement out of the box on top of any pre-trained network.