wutong16 / Adversarial_Long-Tail

[ CVPR 2021 Oral ] Pytorch implementation for "Adversarial Robustness under Long-Tailed Distribution"

Elaborate ni/ny #1

DinSangrasi opened this issue 2 years ago

DinSangrasi commented 2 years ago

Hi there, thanks for the nice work. I was wondering if you could resolve my confusion about n_i/n_y. In the paper it represents the margin from the ground-truth class to a negative class; however, in the code n_i represents the number of samples of the negative class and n_y the sum of samples over the whole dataset. Moreover, could you also please clarify what the positive and negative classes refer to?

wutong16 commented 2 years ago

Hi @DinSangrasi! Sorry for the delayed reply.

We implement our loss here in the code following Eqn. (9), the margin is defined here following Eqn. (8), and the bias is defined here. I suspect the ambiguity comes from the definition of the bias, where we include the sum of samples over all classes; however, the first term of Eqn. (10) can be derived as below:

[image: derivation of the first term of Eqn. (10)]

We introduce the sum of samples for normalization to keep the bias from becoming too large, and this implementation remains consistent with the theoretical formulation.
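In case the derivation image above does not render, here is a minimal sketch of why the normalization cancels, assuming the per-class bias takes the form b_i = τ·log(n_i / Σ_j n_j) (the scaling τ is my assumption, not necessarily the exact constant used in the code):

```latex
% Sketch: the dataset-size normalization is shared by all classes and cancels,
% assuming a per-class bias of the form b_i = \tau \log(n_i / \sum_j n_j).
\begin{align}
-\log \frac{e^{z_y + b_y}}{\sum_i e^{z_i + b_i}}
  &= \log\Big(1 + \sum_{i \neq y} e^{\,z_i - z_y + (b_i - b_y)}\Big), \\
b_i - b_y
  &= \tau \log\frac{n_i}{\sum_j n_j} - \tau \log\frac{n_y}{\sum_j n_j}
   = \tau \log\frac{n_i}{n_y}.
\end{align}
```

So only the ratio n_i/n_y from the paper enters the loss; the Σ_j n_j normalization drops out inside the softmax.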

The positive class denotes the ground-truth class, and the negative classes denote all the others.
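If it helps to see the pieces together, below is a rough PyTorch sketch of a cosine-classifier loss with a margin on the positive (ground-truth) logit and a class-count bias. The names (`rebalanced_ce_loss`, `cls_num_list`, `s`, `tau`, `margin_scale`) and the n^{-1/4} margin shape are illustrative assumptions, not the repository's exact code:

```python
import torch
import torch.nn.functional as F


def rebalanced_ce_loss(cos_logits, target, cls_num_list,
                       s=16.0, tau=1.0, margin_scale=0.5):
    """Illustrative sketch of a cosine-classifier loss with a positive-class
    margin and a class-count bias (names and defaults are assumptions).

    cos_logits:   (B, C) cosine similarities between features and class weights
    target:       (B,)   ground-truth labels, i.e. the "positive" class per sample
    cls_num_list: (C,)   number of training samples per class, n_i
    """
    cls_num = torch.as_tensor(cls_num_list, dtype=torch.float,
                              device=cos_logits.device)

    # Per-class bias tau * log(n_i / sum_j n_j); the normalization by the dataset
    # size is shared by every class, so it cancels inside the softmax and only
    # the ratio n_i / n_y matters, as in the paper.
    bias = tau * torch.log(cls_num / cls_num.sum())

    # Subtract a margin only from the positive (ground-truth) logit; every other
    # class acts as a negative. The n^{-1/4} shape is an LDAM-style assumption,
    # not necessarily Eqn. (8) of the paper.
    margins = margin_scale * cls_num.pow(-0.25)
    one_hot = F.one_hot(target, num_classes=cos_logits.size(1)).float()
    logits = s * (cos_logits - one_hot * margins) + bias

    return F.cross_entropy(logits, target)
```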

Hope this helps!