fwd4/dssm


test-loss not stable #13

Open JenkinsY94 opened 5 years ago

JenkinsY94 commented 5 years ago

Hi, I see that in your code the train and test losses are computed the same way: 1. compute the probability of the positive sample using the softmax function; 2. compute its log loss against the label (which is always 1).
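For reference, here is a minimal sketch of that two-step loss as I understand it (not the repo's exact code; the names `cos_sim`, `NEG`, and `GAMMA` are my own assumptions):

```python
import tensorflow as tf

NEG = 4          # number of randomly sampled negatives per query (assumed)
GAMMA = 20.0     # softmax smoothing factor often used in DSSM (assumed)

# cos_sim: [batch, 1 + NEG] cosine similarities, column 0 is the positive doc
def sampled_softmax_logloss(cos_sim):
    # Step 1: softmax over the positive and the sampled negatives
    prob = tf.nn.softmax(cos_sim * GAMMA, axis=1)
    pos_prob = prob[:, 0]                # probability of the positive sample
    # Step 2: log loss against a label that is always 1
    return -tf.reduce_mean(tf.math.log(pos_prob))
```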

My question is: the first step depends on the randomly sampled negatives, which makes the loss jump around during my training. Have you tried computing the log loss from the positive sample's logit only (so it does not depend on the negative samples)?
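To make the alternative concrete, something like the following sketch: score only the positive pair with a sigmoid and take its log loss, so the value no longer varies with which negatives were drawn (again, `pos_cos_sim` and `GAMMA` are illustrative assumptions, not the repo's code):

```python
import tensorflow as tf

GAMMA = 20.0  # same smoothing-factor assumption as above

# pos_cos_sim: [batch] cosine similarity of each query with its positive doc
def positive_only_logloss(pos_cos_sim):
    logits = pos_cos_sim * GAMMA
    labels = tf.ones_like(logits)        # the label is always 1
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
```

One caveat with this as a training objective: since the label is fixed at 1, it can be driven down simply by inflating all similarities, so it may be more useful as a stable evaluation metric than as the loss being optimized.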