polarisZhao / PFLD-pytorch

PFLD pytorch Implementation

wrong loss penalize? #5

Closed iperov closed 5 years ago

iperov commented 5 years ago

original paper:

For instance, if disabling the geometry and data imbalance functionalities, our loss degenerates to a simple ℓ2 loss

your code:

        # Per-attribute frequency across the batch.
        mat_ratio = torch.mean(attributes_w_n, axis=0)
        # Inverse-frequency weight per attribute (256 is the batch size);
        # attributes absent from the batch fall back to weight 1.
        mat_ratio = torch.Tensor([
            1 / (x * 256) if x > 0 else 1 for x in mat_ratio
        ]).cuda()
        # Per-sample weight: sum of the weights of the attributes it carries.
        weight_attribute = torch.sum(attributes_w_n.mul(mat_ratio), axis=1)

        # Squared L2 distance between predicted and ground-truth landmarks.
        l2_distant = torch.sum((landmark_gt - landmarks) * (landmark_gt - landmarks), axis=1)
        return torch.mean(weight_angle * weight_attribute * l2_distant), torch.mean(l2_distant)

1 / (x*256) if x > 0 else 1 for x in mat_ratio

You are decreasing weight_attribute when the geometry and data-imbalance attributes are present, but it should be the other way around.
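For reference, the disputed line implements inverse-frequency weighting. A minimal CPU sketch (with a made-up 4-sample batch; the attribute matrix here is hypothetical, not data from the repo) shows what mat_ratio actually evaluates to:

```python
import torch

# Hypothetical 4-sample batch with two binary attributes per sample
# (1 = attribute present, e.g. "profile face", "occlusion").
attributes_w_n = torch.tensor([
    [1., 0.],
    [1., 0.],
    [1., 1.],   # only this sample carries the second (rare) attribute
    [1., 0.],
])

# Per-attribute frequency across the batch.
freq = torch.mean(attributes_w_n, dim=0)        # tensor([1.0000, 0.2500])

# The disputed line: inverse-frequency weight per attribute
# (256 is the training batch size used in the repo).
mat_ratio = torch.tensor([
    1.0 / (float(x) * 256) if x > 0 else 1.0 for x in freq
])

print(mat_ratio)  # tensor([0.0039, 0.0156])
```

So the common attribute (frequency 1.0) gets weight 1/256, while the rare one (frequency 0.25) gets the larger weight 1/64: the weight grows as the attribute becomes rarer.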

iperov commented 5 years ago

Seriously? Your repo is implemented incorrectly.

iperov commented 5 years ago

The cleaner the sample (i.e. one without attributes), the lower its loss should be.

polarisZhao commented 5 years ago

You can print attribute_gt and weight_attribute in the pfld/loss file; you will see that the harder the sample, the bigger its weight.
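The check the maintainer suggests can be reproduced without the training pipeline. In this sketch (synthetic attribute values, CPU instead of .cuda()), the sample carrying the rare attribute ends up with the largest weight_attribute:

```python
import torch

# Synthetic attribute matrix: only sample 2 carries the rare second attribute.
attributes_w_n = torch.tensor([
    [1., 0.],
    [1., 0.],
    [1., 1.],
    [1., 0.],
])

# Inverse-frequency attribute weights, as in pfld/loss (256 = batch size).
freq = torch.mean(attributes_w_n, dim=0)
mat_ratio = torch.tensor([
    1.0 / (float(x) * 256) if x > 0 else 1.0 for x in freq
])

# Per-sample weight: sum of the weights of the attributes it carries.
weight_attribute = torch.sum(attributes_w_n * mat_ratio, dim=1)

print(weight_attribute)
print(weight_attribute.argmax().item())  # index of the rare-attribute sample
```

The rare-attribute sample's weight is 1/256 + 1/64, versus 1/256 for the others, so the weighted l2_distant term does penalize hard samples more, as the maintainer states.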