WangYZ1608 / Knowledge-Distillation-via-ND

The official implementation for paper: Improving Knowledge Distillation via Regularizing Feature Norm and Direction

ValueError: too many values to unpack (expected 2) #4

Closed HanGuangXin closed 1 year ago

HanGuangXin commented 1 year ago

File "train_kd.py", line 331, in train
    t_emb, t_logits = teacher(images)
ValueError: too many values to unpack (expected 2)

WangYZ1608 commented 1 year ago

For the officially implemented ResNet (or other models), you need to make a small modification so the forward pass can also return the embedding features. For example:

def forward_impl(self, x: Tensor, embed: bool = False):
    ...
    ...
    x = self.avgpool(x)
    emb_fea = torch.flatten(x, 1)   # pooled embedding features
    logits = self.fc(emb_fea)       # classification logits
    if embed:
        return emb_fea, logits
    else:
        return logits
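For context, the error happens because a model whose forward returns only `logits` cannot be unpacked into two values. A minimal torch-free sketch of the embed-flag pattern (the `TeacherStub` class and its toy feature/logit computations are hypothetical stand-ins, not the repo's code):

```python
class TeacherStub:
    """Hypothetical stand-in for a teacher model with an embed flag."""

    def forward(self, x, embed=False):
        emb_fea = [v * 0.5 for v in x]   # stand-in for pooled embedding features
        logits = [sum(emb_fea)]          # stand-in for self.fc(emb_fea)
        if embed:
            return emb_fea, logits       # two values: unpacking succeeds
        return logits                    # one value: unpacking into two fails

teacher = TeacherStub()
t_emb, t_logits = teacher.forward([1.0, 2.0], embed=True)  # works
# t_emb, t_logits = teacher.forward([1.0, 2.0])  # would raise the ValueError above
```

So the call site in `train_kd.py` must request the embedding (and the model must support the flag) for `t_emb, t_logits = teacher(images)` to unpack cleanly.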
HanGuangXin commented 1 year ago

Yes, I guess so.