Closed — FBdata closed this issue 6 years ago
On Thu, Dec 7, 2017 at 11:04 PM, FBdata notifications@github.com wrote:
Hello,
Question 1: I have a binary classification problem (click/no click). I would like to retrieve the prediction values (0 or 1) on the test set after training, testing, and evaluating the NeuralFM (or FM) model, in order to calculate metrics like precision and recall. Is that possible?
Of course you can — it is quite a standard operation.
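A minimal sketch of that post-processing step: given the model's continuous scores on the test set (however you retrieve them — the variable names and the 0.5 threshold below are illustrative, not the repo's API), threshold them to hard 0/1 labels and compute precision and recall:

```python
import numpy as np

# Hypothetical outputs: true click labels and the model's predicted
# scores in [0, 1] for the test set (illustrative values only).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.1, 0.8, 0.3])

# Threshold the continuous scores to get hard 0/1 predictions.
y_pred = (y_score >= 0.5).astype(int)

# Precision = TP / (TP + FP), recall = TP / (TP + FN).
tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

precision = tp / (tp + fp)
recall = tp / (tp + fn)
```

In practice you would sweep the threshold (or use `sklearn.metrics.precision_recall_curve`) rather than fixing it at 0.5.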
Question 2: What is the "keep_prob" argument in NeuralFM.py? I don't know how to use it.
It is a number in (0, 1] denoting the fraction of neurons to keep during dropout. For example, keep_prob=1 means dropout is disabled, and keep_prob=0.6 means the dropout ratio is 0.4.
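To illustrate the keep_prob semantics described above, here is a small NumPy sketch of inverted dropout (the same convention as TensorFlow 1.x's `tf.nn.dropout`; this is a standalone illustration, not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, keep_prob):
    """Inverted dropout: keep each unit with probability keep_prob and
    scale survivors by 1/keep_prob so the expected activation is unchanged."""
    if keep_prob == 1.0:
        # keep_prob=1 disables dropout entirely.
        return x
    mask = rng.random(x.shape) < keep_prob  # True for kept units
    return x * mask / keep_prob

x = np.ones(10000)
y = dropout(x, 0.6)  # keep_prob=0.6, so the dropout ratio is 1 - 0.6 = 0.4
```

Roughly 40% of the entries of `y` are zeroed, while its mean stays close to 1 because the kept units are rescaled.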
Anyway, thanks for this Python library — it's a great tool to play with. Thanks in advance for the answers.
-- Best Regards, Xiangnan He
Thanks for the answer, which makes things clear.