byte-dance opened this issue 1 year ago
@byte-dance I encountered the same issue. For now I've changed self.head_fc to:
self.head_fc = nn.Sequential(nn.Linear(num_classes, dim_in), nn.BatchNorm1d(dim_in), nn.ReLU(inplace=True), nn.Linear(dim_in, feat_dim))
Specifically, we have class-specific weights w_1, w_2, ..., w_K after a nonlinear transformation MLP as prototypes z_{c_1}, z_{c_2}, ..., z_{c_K}.
Maybe the authors want centers_logits in the shape (num_classes, feat_dim), so I think removing the .T would be better?
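To see why dropping the .T works out shape-wise, here is a minimal, self-contained sketch; the fc attribute, the nn.Linear(dim_in, dim_in) first layer of head_fc, and all dimensions are assumptions for illustration, not the repo's exact code:

```python
import torch.nn as nn
import torch.nn.functional as F

num_classes, dim_in, feat_dim = 100, 2048, 128

# nn.Linear stores its weight as (out_features, in_features), so
# fc.weight is already (num_classes, dim_in) without any transpose.
fc = nn.Linear(dim_in, num_classes)
head_fc = nn.Sequential(
    nn.Linear(dim_in, dim_in),   # assumed original first layer
    nn.BatchNorm1d(dim_in),
    nn.ReLU(inplace=True),
    nn.Linear(dim_in, feat_dim),
)

# No .T: each of the K weight rows is mapped to one prototype.
centers_logits = F.normalize(head_fc(fc.weight), dim=1)
print(centers_logits.shape)  # torch.Size([100, 128]) -> (num_classes, feat_dim)
```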
The code is right and runs successfully:
features = torch.cat([feat.unsqueeze(1), feat.unsqueeze(1)], dim=1)  # (N, 2, dim)
scl_loss = criterion_scl(centers, features, targets)  # centers: [class_num, dim]
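As a sanity check, here is a self-contained shape walk-through of that call; criterion_scl's internals are not shown, and the dimensions are made-up examples:

```python
import torch

N, dim, class_num = 32, 128, 10
feat = torch.randn(N, dim)                  # one embedding per sample
targets = torch.randint(0, class_num, (N,))
centers = torch.randn(class_num, dim)       # class prototypes from head_fc

# Stacking feat with itself mimics the (N, n_views, dim) layout a
# SupCon-style loss expects; note both "views" are identical here,
# whereas two differently augmented views would normally be used.
features = torch.cat([feat.unsqueeze(1), feat.unsqueeze(1)], dim=1)
assert features.shape == (N, 2, dim)
# scl_loss = criterion_scl(centers, features, targets)  # as in the comment above
```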
class BCLModel(nn.Module):
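The excerpt stops at the class definition. Purely as an illustrative sketch, the pieces discussed above might fit together as below; the stand-in backbone, attribute names, default sizes, and forward signature are all assumptions, not the repo's actual implementation:

```python
import torch.nn as nn
import torch.nn.functional as F

class BCLModel(nn.Module):
    """Hypothetical sketch: backbone + classifier + prototype MLP."""
    def __init__(self, dim_in=2048, num_classes=100, feat_dim=128):
        super().__init__()
        self.encoder = nn.Linear(3 * 32 * 32, dim_in)  # stand-in backbone
        self.fc = nn.Linear(dim_in, num_classes)       # weight: (num_classes, dim_in)
        self.head = nn.Sequential(                     # projection head for sample features
            nn.Linear(dim_in, dim_in), nn.ReLU(inplace=True),
            nn.Linear(dim_in, feat_dim))
        self.head_fc = nn.Sequential(                  # MLP mapping weights to prototypes
            nn.Linear(dim_in, dim_in), nn.BatchNorm1d(dim_in),
            nn.ReLU(inplace=True), nn.Linear(dim_in, feat_dim))

    def forward(self, x):
        h = self.encoder(x.flatten(1))
        logits = self.fc(h)                       # (N, num_classes)
        feat = F.normalize(self.head(h), dim=1)   # (N, feat_dim)
        # No .T: fc.weight is already (num_classes, dim_in), so the
        # prototypes come out as (num_classes, feat_dim).
        centers = F.normalize(self.head_fc(self.fc.weight), dim=1)
        return logits, feat, centers
```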