FlamieZhu / Balanced-Contrastive-Learning

Code Release for “Balanced Contrastive Learning for Long-Tailed Visual Recognition”

question about the code 'centers_logits = F.normalize(self.head_fc(self.fc.weight.T), dim=1)' #9

Open · byte-dance opened this issue 1 year ago

byte-dance commented 1 year ago

    # excerpt from the BCLModel definition
    class BCLModel(nn.Module):

        def __init__(self, ...):
            ...
            self.fc = nn.Linear(dim_in, num_classes)
            self.head_fc = nn.Sequential(nn.Linear(dim_in, dim_in), nn.BatchNorm1d(dim_in), nn.ReLU(inplace=True),
                                         nn.Linear(dim_in, feat_dim))

        def forward(self, x):
            ...
            centers_logits = F.normalize(self.head_fc(self.fc.weight.T), dim=1)

Since self.fc.weight has shape (num_classes, dim_in), self.fc.weight.T has shape (dim_in, num_classes). How can it be passed to self.head_fc, whose first layer is nn.Linear(dim_in, dim_in) and therefore expects an input whose last dimension is dim_in?
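A quick shape check (a minimal sketch with placeholder sizes, not the repo's actual configuration) illustrates the mismatch:

    import torch.nn as nn

    dim_in, num_classes = 64, 10        # placeholder sizes
    fc = nn.Linear(dim_in, num_classes)
    print(fc.weight.shape)              # torch.Size([10, 64]) -> (num_classes, dim_in)
    print(fc.weight.T.shape)            # torch.Size([64, 10]) -> (dim_in, num_classes)
    # nn.Linear(dim_in, dim_in) expects an input whose last dimension is dim_in,
    # so fc.weight.T only fits when num_classes happens to equal dim_in.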
adv010 commented 10 months ago

@byte-dance I encountered the same issue. For now I've changed self.head_fc to:

    self.head_fc = nn.Sequential(nn.Linear(num_classes, dim_in), nn.BatchNorm1d(dim_in), nn.ReLU(inplace=True),
                                 nn.Linear(dim_in, feat_dim))
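For reference, a shape check of this workaround (same placeholder sizes as above) shows that, with the .T kept, the modified head_fc runs but produces one output row per feature dimension rather than one per class:

    import torch.nn as nn

    dim_in, num_classes, feat_dim = 64, 10, 32   # placeholder sizes
    fc = nn.Linear(dim_in, num_classes)
    head_fc = nn.Sequential(nn.Linear(num_classes, dim_in), nn.BatchNorm1d(dim_in), nn.ReLU(inplace=True),
                            nn.Linear(dim_in, feat_dim))
    print(head_fc(fc.weight.T).shape)            # torch.Size([64, 32]) -> (dim_in, feat_dim)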

lgX1123 commented 6 months ago

Quoting the paper: "Specifically, we have class-specific weights w1, w2, ..., wK after a nonlinear transformation MLP as prototypes zc1, zc2, ..., zcK."

Maybe the authors want centers_logits to have shape (num_classes, feat_dim), i.e. one prototype per class. Since self.fc.weight already has shape (num_classes, dim_in), I think removing the .T would be better.
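A minimal sketch of that suggestion (placeholder sizes again; a sketch, not a definitive patch to the repo's code):

    import torch.nn as nn
    import torch.nn.functional as F

    dim_in, num_classes, feat_dim = 64, 10, 32   # placeholder sizes
    fc = nn.Linear(dim_in, num_classes)
    head_fc = nn.Sequential(nn.Linear(dim_in, dim_in), nn.BatchNorm1d(dim_in), nn.ReLU(inplace=True),
                            nn.Linear(dim_in, feat_dim))
    centers_logits = F.normalize(head_fc(fc.weight), dim=1)   # .T dropped
    print(centers_logits.shape)                  # torch.Size([10, 32]) -> (num_classes, feat_dim)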

ericyq commented 4 months ago

The code is right; it runs successfully for me. The resulting centers are then passed to the contrastive loss criterion:

    features = torch.cat([feat.unsqueeze(1), feat.unsqueeze(1)], dim=1)   # (N, 2, dim)
    scl_loss = criterion_scl(centers, features, targets)                  # centers: (class_num, dim)
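For context, the tensor shapes going into that call (a toy sketch with placeholder sizes; criterion_scl itself is not shown here):

    import torch

    N, dim, num_classes = 4, 32, 10              # placeholder sizes
    feat = torch.randn(N, dim)
    centers = torch.randn(num_classes, dim)
    targets = torch.randint(0, num_classes, (N,))
    features = torch.cat([feat.unsqueeze(1), feat.unsqueeze(1)], dim=1)
    print(features.shape)                        # torch.Size([4, 2, 32]) -> (N, 2, dim), two views per sample
    print(centers.shape)                         # torch.Size([10, 32])   -> (num_classes, dim)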
