NicolaBernini opened 4 years ago
Universal Domain Adaptation through Self Supervision
Consider a NN as $Y = f(X)$ where
in the Training Set we have $X \sim P(X_{train})$
in the Test Set and in Production we have $X \sim P(X_{test})$
The underlying assumption for the NN to work well in practice is that $P(X_{train})$ is very similar to $P(X_{test})$, so that both the training and test instances are drawn from the same distribution. Otherwise we are dealing with a domain adaptation problem, as the Test Domain has changed with respect to the Training Domain. Let's use $\tilde P(\cdot)$ to denote a changed distribution
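This train/test divergence can be made concrete with a minimal sketch (the Gaussian toy domains and the closed-form KL divergence are illustrative assumptions, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D inputs: training domain P(X_train) vs a shifted test domain ~P(X_test)
x_train = rng.normal(loc=0.0, scale=1.0, size=10_000)
x_test = rng.normal(loc=1.0, scale=1.5, size=10_000)  # shifted mean and variance

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """Closed-form KL( N(mu_p, var_p) || N(mu_q, var_q) )."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# Fit a Gaussian to each sample and compare the two domains
kl = gaussian_kl(x_train.mean(), x_train.var(), x_test.mean(), x_test.var())
print(f"KL(train || test) ~ {kl:.3f}")  # clearly > 0: the domains diverge
```

When the two domains coincide the same estimate is (near) zero, which is exactly the assumption the NN relies on.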
In a DNN there are multiple layers, hence multiple domains: let $X_{i} = f_{i}(X_{i-1}) \quad i \in \mathbb{N}^{+}$, so that each layer defines its own domain.
The PDF divergence certainly affects the input domain $\tilde P(X_{train})$, but it then propagates through the DNN, causing a domain shift also in the deeper layers: $\tilde P(X_{i}) \quad i > 0$
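How an input-domain shift propagates into deeper layers can be sketched with a hypothetical fixed two-layer random network (the weights, shapes, and shift magnitude below are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fixed 2-layer net: X_i = f_i(X_{i-1}) with f_i(x) = ReLU(W_i x)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(8, 8))
relu = lambda z: np.maximum(z, 0.0)

x_train = rng.normal(0.0, 1.0, size=(5000, 4))  # P(X_0) on the train domain
x_test = rng.normal(1.0, 1.0, size=(5000, 4))   # shifted input domain ~P(X_0)

def layer_shift(a, b):
    """Crude shift measure: distance between the mean features of two domains."""
    return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))

h1_train, h1_test = relu(x_train @ W1.T), relu(x_test @ W1.T)
h2_train, h2_test = relu(h1_train @ W2.T), relu(h1_test @ W2.T)

print("shift at input  :", layer_shift(x_train, x_test))
print("shift at layer 1:", layer_shift(h1_train, h1_test))
print("shift at layer 2:", layer_shift(h2_train, h2_test))
```

All three values come out non-zero: the divergence introduced at the input is still visible in the deeper feature domains $\tilde P(X_{1})$ and $\tilde P(X_{2})$.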
How deeply the divergence permeates depends on the transferability of the learned features: the more transferable they are, the less the permeation
This is also a key factor in achieving generalization
There can be 2 types of divergences:
- data distribution only, which is called homogeneous domain adaptation
- involving dimensionality as well, which is called heterogeneous domain adaptation
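The distinction between the two types can be sketched in a few lines (the array shapes and distributions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Homogeneous: same feature space (same dimensionality), only P(X) differs
x_src_hom = rng.normal(0.0, 1.0, size=(100, 16))
x_tgt_hom = rng.normal(2.0, 0.5, size=(100, 16))
assert x_src_hom.shape[1] == x_tgt_hom.shape[1]  # same 16-D space

# Heterogeneous: the feature spaces themselves differ (e.g. 16-D vs 24-D),
# so some mapping between spaces is needed before distributions can even
# be compared
x_src_het = rng.normal(size=(100, 16))
x_tgt_het = rng.normal(size=(100, 24))
assert x_src_het.shape[1] != x_tgt_het.shape[1]  # mismatched dimensionality
```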
Overview
Universal Domain Adaptation through Self Supervision
https://arxiv.org/abs/2002.07953
NOTE
For the best rendering, please install and activate the Tex all the things Chrome plugin, which provides browser-side math rendering
If it is active, you should see the following inline math $a=b$ and the equation
$$ a x^{2} + b x + c = 0 \quad x \in \mathbb{R} $$
rendered correctly