The key is self.x_out, which is used in the backward function. So the actual optimization objective (for computing the gradient) is self.x_out, which is -self.gamma * negative_risk.
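To make that split concrete, here is a minimal sketch in plain PyTorch (my own illustration, not the repo's Chainer code; `beta`, `gamma`, and the two risk values are dummy stand-ins): the value reported as the loss and the tensor that backward() is actually called on can be different objects.

```python
import torch

beta, gamma = 0.0, 1.0
# Dummy stand-ins for the risks the model would produce.
positive_risk = torch.tensor(0.3, requires_grad=True)
negative_risk = torch.tensor(-0.2, requires_grad=True)

if negative_risk.item() < -beta:
    loss = positive_risk - beta            # the value reported as the loss
    objective = -gamma * negative_risk     # the tensor gradients flow through
else:
    loss = objective = positive_risk + negative_risk

objective.backward()                       # backprop uses objective, not loss
print(loss.item(), negative_risk.grad)     # 0.3 tensor(-1.) with these dummies
```

Note that negative_risk.grad ends up as -gamma even though the printed loss value does not involve negative_risk at all, which is exactly the behavior being discussed.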
oh yes TAT, thank you kindly for your patience.
You are welcome :)
Thank you for pointing me to this repo: https://github.com/kiryor/nnPUlearning/blob/a522c0edd22c54b627eb131e960584950eae5330/pu_loss.py#L51.
However, this is the part about the LOSS:
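Roughly the following (paraphrasing the code around the linked line from memory, so it may not be verbatim):

```python
# objective = positive_risk + negative_risk was computed just above
if negative_risk.data < -self.beta:
    objective = positive_risk - self.beta     # this value ends up in self.loss
    self.x_out = -self.gamma * negative_risk  # this is what backward() differentiates
else:
    self.x_out = objective
self.loss = objective.data
```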
As you can see, if negative_risk.data < -self.beta, objective is assigned positive_risk (instead of the negative loss), and self.loss is equal to objective.data (whose dtype comes from self.x_out, which is the datatype of negative_risk). So I guess you have to return positive_loss here?