VITA-Group / Self-PU

[ICML2020] "Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training" by Xuxi Chen, Wuyang Chen, Tianlong Chen, Ye Yuan, Chen Gong, Kewei Chen, Zhangyang Wang
MIT License

Something still confuses me about the nnPU loss #4

Closed: Wongcheukwai closed this 3 years ago

Wongcheukwai commented 3 years ago

Thank you for pointing me to this repo: https://github.com/kiryor/nnPUlearning/blob/a522c0edd22c54b627eb131e960584950eae5330/pu_loss.py#L51.

However, here is the part that computes the loss:

    if self.nnpu:
        # non-negative correction kicks in when the estimated
        # negative risk has gone below -beta (overfitting signal)
        if negative_risk.data < -self.beta:
            # reported loss: positive risk only, shifted by beta
            objective = positive_risk - self.beta
            # gradient target: push negative_risk back up toward -beta
            self.x_out = -self.gamma * negative_risk
        else:
            # objective (= positive_risk + negative_risk, computed earlier)
            self.x_out = objective
    else:
        self.x_out = objective
    # self.loss is only the reported value; backward() uses self.x_out
    self.loss = xp.array(objective.data, dtype=self.x_out.data.dtype)
    return self.loss,

As you can see, when negative_risk.data < -self.beta, objective is assigned positive_risk - self.beta (the negative risk term drops out), and self.loss equals objective.data (cast to the dtype of self.x_out, i.e. the dtype of negative_risk). So doesn't this mean only the positive loss is returned here?

xxchenxx commented 3 years ago

The key is self.x_out, which is used in the backward function. The actual optimization objective (the one the gradient is computed from) is self.x_out, which is -self.gamma * negative_risk; self.loss is only the value that gets reported.
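
The same decoupling can be mimicked in plain PyTorch by returning two values: the risk estimate you log and the tensor you call backward() on. Below is a minimal sketch of that idea; it is not the actual Chainer code, and the function name nnpu_risk, the argument names, and the sigmoid surrogate loss are illustrative assumptions, not taken from either repo.

    import torch

    def nnpu_risk(scores_p, scores_u, prior, beta=0.0, gamma=1.0):
        """Non-negative PU risk with the same reported/backprop split.

        scores_p, scores_u: raw model outputs on positive / unlabeled batches.
        prior: class prior pi = p(y = 1).
        Uses the sigmoid surrogate l(z) = sigmoid(-z).
        """
        loss = lambda z: torch.sigmoid(-z)
        # pi * R_p^+ : expected loss of positives scored as positive
        positive_risk = prior * loss(scores_p).mean()
        # R_u^- estimate: loss of unlabeled scored as negative,
        # minus the positive part that leaks into the unlabeled set
        negative_risk = loss(-scores_u).mean() - prior * loss(-scores_p).mean()

        if negative_risk.item() < -beta:
            reported = positive_risk - beta       # analogue of self.loss
            to_backprop = -gamma * negative_risk  # analogue of self.x_out
        else:
            reported = positive_risk + negative_risk
            to_backprop = reported
        return reported, to_backprop

    # in the training loop: step on `to_backprop`, log `reported`
    # reported, to_backprop = nnpu_risk(model(x_p), model(x_u), prior=0.5)
    # to_backprop.backward()
    # running_loss += reported.item()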

Wongcheukwai commented 3 years ago

oh yes TAT, thank you kindly for your patience.

xxchenxx commented 3 years ago

You are welcome :)