Open trungkak opened 5 years ago
According to the original word2vec paper (https://arxiv.org/pdf/1310.4546.pdf), the negative-sampling loss is defined as:
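For reference, the negative-sampling objective from that paper (its Eq. 4) is, with `v_{w_I}` the input embedding, `v'_w` the output embeddings, and `k` noise samples drawn from the noise distribution `P_n(w)`:

```
log σ(v'_{w_O}ᵀ v_{w_I}) + Σ_{i=1}^{k} E_{w_i ~ P_n(w)} [ log σ(−v'_{w_i}ᵀ v_{w_I}) ]
```

Note the minus sign inside the sigmoid for the noise terms; that is the sign the rest of this thread is about.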
Your implementation for the sum log sample is:
sum_log_sampled = t.bmm(noise, input.unsqueeze(2)).sigmoid().log().sum(1).squeeze()
It's the same as described in the paper except for the minus sign. Shouldn't it look like the line below?
sum_log_sampled = (-1* t.bmm(noise, input.unsqueeze(2))).sigmoid().log().sum(1).squeeze()
Am I right? https://github.com/kefirski/pytorch_NEG_loss/blob/master/NEG_loss/neg.py#L75
Hi, I have the same confusion as you. Are you sure which one is right? Thanks.
@labixiaoK hi, I can confirm that the author's code is right. He has a line accounting for the negative samples that I did not notice.
noise = self.out_embed(noise).neg()
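The sign bookkeeping can be checked with a small standalone sketch (pure Python, no PyTorch; the function names are mine, not from the repo): negating the noise embedding before the dot product, as the author's `.neg()` line does, gives exactly the paper's `log σ(−u·v)` term.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def paper_term(noise_vec, input_vec):
    # Paper's formulation: log sigma(-(u_noise . v_input))
    dot = sum(n * i for n, i in zip(noise_vec, input_vec))
    return math.log(sigmoid(-dot))

def repo_term(noise_vec, input_vec):
    # Repo's formulation: negate the noise embedding first,
    # then take log sigma of the (already-negated) dot product
    neg_noise = [-n for n in noise_vec]
    dot = sum(n * i for n, i in zip(neg_noise, input_vec))
    return math.log(sigmoid(dot))

u = [0.3, -1.2, 0.7]   # toy noise embedding
v = [0.5, 0.4, -0.9]   # toy input embedding
assert abs(paper_term(u, v) - repo_term(u, v)) < 1e-12
```

So the two formulations agree term by term; the minus sign is simply applied one step earlier in the repo.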
Oh, that was careless of me. Thanks.