agrija9 / Deep-Unsupervised-Domain-Adaptation

PyTorch implementation of four neural-network-based domain adaptation techniques: DeepCORAL, DDC, CDAN, and CDAN+E. Evaluated on the Office-31 benchmark dataset.

About CORAL Loss #3

Open ZhouWenjun2019 opened 3 years ago

ZhouWenjun2019 commented 3 years ago

There may be a bug in the CORAL loss:

`loss = torch.norm(torch.mul((source_covariance - target_covariance), (source_covariance - target_covariance)), p="fro")`

It should be:

`loss = torch.norm((source_covariance - target_covariance), p="fro")`

A-New-Page commented 2 years ago

I agree with @ZhouWenjun2019

agrija9 commented 2 years ago

@ZhouWenjun2019, @A-New-Page,

According to the Deep CORAL paper (https://arxiv.org/pdf/1607.01719.pdf), the CORAL loss is defined via the squared matrix Frobenius norm (see Equation 1, Section 3.1).
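For reference, Equation 1 of the paper can be sketched directly in PyTorch. This is a minimal illustration, not the repository's implementation; the function name `coral_loss` and the batch-by-feature input layout are assumptions, and the covariance estimate here is the standard unbiased one:

```python
import torch

def coral_loss(source, target):
    # Sketch of Eq. (1) from the Deep CORAL paper:
    #   l_CORAL = ||C_S - C_T||_F^2 / (4 * d^2)
    # where C_S, C_T are the d x d feature covariance matrices and
    # source/target are (batch, d) feature activations.
    d = source.size(1)
    src_centered = source - source.mean(0)
    tgt_centered = target - target.mean(0)
    cov_s = src_centered.t() @ src_centered / (source.size(0) - 1)
    cov_t = tgt_centered.t() @ tgt_centered / (target.size(0) - 1)
    diff = cov_s - cov_t
    # (diff * diff).sum() is the squared Frobenius norm ||C_S - C_T||_F^2.
    return (diff * diff).sum() / (4 * d * d)
```

The loss is zero when the two covariance matrices match and grows with their elementwise squared difference.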

My understanding when implementing this method is that if I just take

loss = torch.norm((source_covariance-target_covariance), p="fro")

I am only computing the Frobenius norm itself, without taking the squaring into account. That is, I am computing only || • ||_F.

See the definition of Frobenius Norm (https://mathworld.wolfram.com/FrobeniusNorm.html).

By adding `torch.mul((source_covariance - target_covariance), (source_covariance - target_covariance))`, I am making sure that I am computing the squared matrix Frobenius norm, i.e. || • ||²_F.
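For what it's worth, the candidate expressions can be compared numerically on a toy difference matrix `D = C_S - C_T` (the values below are arbitrary, chosen only for illustration):

```python
import torch

# Toy difference matrix D = C_S - C_T (arbitrary values, for illustration only).
D = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

frob = torch.norm(D, p="fro")                # ||D||_F     = sqrt(1 + 4 + 9 + 16) = sqrt(30)
frob_sq = frob ** 2                          # ||D||_F^2   = 30
elem = torch.norm(torch.mul(D, D), p="fro")  # sqrt(1 + 16 + 81 + 256) = sqrt(354)
```

Note that `torch.norm(torch.mul(D, D), p="fro")` evaluates to sqrt(Σ dᵢⱼ⁴), whereas ||D||²_F = Σ dᵢⱼ², so the two are different quantities; squaring the plain Frobenius norm (or summing `D * D`) is a direct way to obtain ||D||²_F.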

Let me know your thoughts.