valeoai / ADVENT

Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation
https://arxiv.org/abs/1811.12833

About implementation of class-ratio priors #9

Closed Katexiang closed 4 years ago

Katexiang commented 4 years ago

Thanks for your great work, but I have a little confusion about the class-ratio priors. I can't find their implementation in this project, so I want to check whether my understanding of CP is correct. First, calculate the class distribution from the source-domain labels to get ps. Then, pass the target-domain image's feature map through the softmax layer and a global average pooling layer to get the mean class score px. Finally, subtract px from ps and, over the class channels, sum only the differences that are greater than 0. I also want to ask another question about the loss function lcp: why should only differences over 0 be counted? Maybe taking the modulus of the difference would help pull the target domain's distribution closer to the source domain's. Thanks!

himalayajain commented 4 years ago

Hello @Katexiang,

Your description of the class-ratio prior seems correct. Just make sure px is a vector of dimension C (the number of classes).
Now, regarding why we use ReLU rather than the modulus: using the modulus would force the target class prior to be the same as the source's, which would hurt given the domain gap. We use the class-ratio prior only to avoid losing some classes during entropy minimization, so the proposed loss enforces that every class is present to some extent, but it does not penalize a class for being more present than in the source.
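For what it's worth, the computation described in this thread can be sketched in a few lines. This is only an illustrative NumPy sketch, not the repository's code; the function name `class_ratio_prior_loss` and the single-image shapes are my own assumptions:

```python
import numpy as np

def softmax(logits, axis=0):
    # Numerically stable softmax along the class axis.
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def class_ratio_prior_loss(target_logits, source_prior):
    """Hypothetical sketch of the class-ratio prior (CP) loss as discussed above.

    target_logits: (C, H, W) raw segmentation scores for one target image.
    source_prior:  (C,) class-frequency vector ps estimated from source labels.
    """
    probs = softmax(target_logits, axis=0)   # per-pixel class probabilities
    px = probs.mean(axis=(1, 2))             # global average pooling -> (C,)
    # ReLU (max with 0): only classes whose target presence falls below the
    # source prior contribute; over-representation is not penalized,
    # unlike a modulus, which would also push extra presence back down.
    return np.maximum(source_prior - px, 0.0).sum()
```

With uniform logits (px = [0.5, 0.5]) and a prior ps = [0.7, 0.3], only the first class is under-represented, so the loss is 0.2; the modulus would instead give 0.4 by also penalizing the over-represented second class.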

Himalaya