@PancheLone Thanks for your interest in our work! Please note that Eq. (6) does not depend on \gamma: it uses only the implicit feedback Y, which is observable, instead of the unobservable quantity \gamma. How to estimate \gamma in the loss functions using only observable data is the key point of our paper.
Thanks a lot for your prompt reply! But according to your definition in Eq. (4), we have to know P(R_{ui}=1), which is \gamma, in order to calculate the \delta in your Eq. (6).
Thanks for your help in advance, and looking forward to your reply!
@PancheLone In Eq. (6) (and in the loss functions of some other methods), \delta^{(1)} and \delta^{(0)} are used, and neither of them depends on \gamma, right? For example, \delta^{(1)} = -\log(\hat{R}), where \hat{R} is a predicted value, not the true \gamma.
Eq. (4) is the quantity that we want to maximize; it is not used in the training process.
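To make this concrete, here is a minimal sketch of the point above (my own illustration; the variable names are not from this repository): \delta^{(1)} and \delta^{(0)} are computed from the predicted \hat{R} alone, and the observed feedback Y is only used to combine them, so the true \gamma never appears.

```python
import numpy as np

def pointwise_losses(r_hat, eps=1e-8):
    """delta^(1) = -log(R_hat) and delta^(0) = -log(1 - R_hat).

    r_hat: predicted relevance in (0, 1), e.g. a sigmoid of the dot
    product of learned user/item embeddings. Only the model's
    predictions are needed; the true gamma never appears.
    """
    r_hat = np.clip(r_hat, eps, 1.0 - eps)  # numerical safety near 0 and 1
    delta_1 = -np.log(r_hat)                # loss term for a relevant pair
    delta_0 = -np.log(1.0 - r_hat)          # loss term for an irrelevant pair
    return delta_1, delta_0

# Example: combine the two terms with the *observed* implicit feedback Y
# (clicks), which is what the reply above says the training loss does.
Y = np.array([1.0, 0.0, 1.0])               # observed feedback, not gamma
r_hat = np.array([0.9, 0.2, 0.6])           # predicted relevance
d1, d0 = pointwise_losses(r_hat)
loss = np.mean(Y * d1 + (1.0 - Y) * d0)
```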
Ooooh, it's the predicted value! I think I got your idea now: the predicted value is computed from the learned embedding vectors! Thank you very much for your kind explanation.
I also followed you on Twitter, and I hope to see more valuable and interesting work from you in the future!
Great!
Hello, in your proof of Proposition 3.1 (Bias of the WMF estimator), the bias expression shows that \delta also involves \gamma (the relevance probability).
So the weighted matrix factorization loss function cannot be calculated directly without estimating \gamma (the relevance probability). May I ask how you use this loss function in a real-world setup?
P.S. Sorry, I don't know how to type math formulas in the reply.
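For readers following this question, a minimal sketch of the distinction being discussed (purely illustrative; this is my own sketch, not the WMF loss from the paper, and the function names are assumptions): an objective that weights the two loss terms by \gamma is not directly computable, whereas the surrogate built from the observed feedback Y is.

```python
import numpy as np

def ideal_objective(gamma, delta_1, delta_0):
    """Weights each pair by the true relevance probability gamma.
    Not computable in practice, because gamma is unobserved."""
    return np.mean(gamma * delta_1 + (1.0 - gamma) * delta_0)

def empirical_objective(y, delta_1, delta_0):
    """Replaces gamma with the observed implicit feedback Y (0/1 clicks).
    Computable from logged data, but it generally differs from the ideal
    objective whenever click != relevance, which is the gap the bias
    discussion in this thread refers to."""
    return np.mean(y * delta_1 + (1.0 - y) * delta_0)
```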