Open ZichaoHuang opened 7 years ago
Hi, that is used for calculating the filtered metrics. If you use the training set only, it will not correctly filter out true entities from the valid and test sets when calculating the filtered scores.
On Mar 17, 2017, at 7:27 AM, Zichao Huang notifications@github.com wrote:
In gen_ht_r, you use both the validation set and test set to generate ht_r and tr_h. When I changed it to use only the training set to generate ht_r and tr_h (I ran ProjE_softmax.py as you recommended in the README file), the filtered mean rank and hits@10 only reach 74.3 and 0.675 after 12 iters, but according to the appendix of your AAAI paper, the model should yield filtered hits@10 over 0.8 around 12 iters.
But shouldn't we only use the training set during training? If we use the validation set and test set during training, how do we know if the model is overfitting or not?
The training uses the train hr_t only; the full hr_t (built from all splits) is used for evaluation.
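To illustrate why the evaluation-side hr_t must cover every split, here is a minimal sketch of filtered ranking. This is an illustration, not the repo's code: the function name, dict layout, and variable names are all assumptions.

```python
import numpy as np

def filtered_tail_rank(scores, h, r, true_t, hr_t):
    """Rank of the gold tail, both raw and filtered.

    scores: 1-D array with a model score for every candidate tail entity.
    hr_t:   dict mapping (h, r) -> set of ALL known true tails, built from
            train + valid + test; building it from train only would leave
            valid/test true tails unfiltered and inflate the filtered ranks.
    """
    target = scores[true_t]
    # Raw rank: one plus the number of entities scoring above the gold tail.
    raw_rank = int(np.sum(scores > target)) + 1
    # Filtered rank: subtract other known-true tails that outrank the gold one.
    other_true = hr_t.get((h, r), set()) - {true_t}
    hits = int(np.sum(scores[list(other_true)] > target)) if other_true else 0
    return raw_rank, raw_rank - hits
```

If hr_t were built from the training set only, other true tails coming from the valid/test splits would still be counted as ranking errors, which is exactly the degradation discussed in this thread.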
Hi, I think there might be a problem; I'll look into it.
Hi, please check data_generator_func: it takes an input_queue from self.raw_training_data, which is the raw training data generated from self.__train_hr_t and self.__train_tr_h. The tr_h and hr_t are used to filter out negative sampling results; this means we generate a corrupted triple only if it does not exist anywhere in the entire triple universe. Hope this explanation helps.
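A rough sketch of that filtering step during negative sampling. The function name and signature are hypothetical; it assumes hr_t is the all-splits (h, r) -> tails dict described above.

```python
import random

def corrupt_tails(h, r, t, n_entity, hr_t, k, rng=random):
    """Sample k corrupted tails (h, r, t') that are true in NO split.

    hr_t maps (h, r) -> the set of all tails seen in train/valid/test,
    so a corrupted triple is kept only if it exists nowhere in the
    entire triple universe (no false negatives slip through).
    """
    true_tails = hr_t.get((h, r), {t})
    negatives = []
    while len(negatives) < k:
        cand = rng.randrange(n_entity)
        if cand not in true_tails:  # reject candidates that are actually true
            negatives.append(cand)
    return negatives
```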
https://github.com/bxshi/ProjE/blob/master/ProjE_softmax.py#L546 uses hr_t and tr_h for evaluation, so if you change them the model cannot report accurate results.
Thanks for the explanation.
But when the data_generator_func function is called as a process target in main, model.hr_t and model.tr_h are also passed to the target as parameters.
Hi, I have updated the code. Now it does not use hr_t and tr_h to filter out negative examples.
Thanks for the updates. I ran the new ProjE model for over 25 epochs, and it seems that the filtered hits@10 on the test set converges around 0.782. Is that normal?
If you have time, I would suggest lowering the learning rate, say to 1e-3 or 4e-5, and trying again. My parameters were tuned on the wrong set, so you may need to try some other settings. Meanwhile, I am also testing and will update the arXiv version once it is done.
OK, thanks.
Hi, your experiments show relatively consistent performance with negative sampling rates as low as 25%. Which variable represents the negative sampling rate in the code?
@760008522 the parameter is --neg_weight.
Hi, can you explain how the negative samples are selected? Which paragraph of the paper corresponds to that code? Thank you very much.
@760008522 For example, at https://github.com/bxshi/ProjE/blob/master/ProjE_softmax_noweight.py#L313, the hr_tlist_weight is the masking weight, which has a shape of [None, model.n_entity]. It is generated inside data_generator_func: https://github.com/bxshi/ProjE/blob/master/ProjE_softmax_noweight.py#L425
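As a hedged sketch of that masking idea (the helper name and the exact sampling scheme here are my assumptions, not the repo's code): true tails always get weight 1, only a neg_weight fraction of the remaining entities are switched on as negatives, and everything else stays 0 so the loss ignores it, yielding a [batch, n_entity] weight matrix.

```python
import numpy as np

def build_hr_weight(batch_tails, n_entity, neg_weight, rng=np.random):
    """Build a [batch, n_entity] masking weight.

    batch_tails: for each (h, r) in the batch, the set of its true tails.
    neg_weight:  fraction of non-true entities sampled as negatives;
                 entities left at 0.0 are ignored by the loss.
    """
    weight = np.zeros((len(batch_tails), n_entity), dtype=np.float32)
    for i, tails in enumerate(batch_tails):
        # Turn on a random neg_weight fraction of the entity slots...
        weight[i, rng.uniform(size=n_entity) < neg_weight] = 1.0
        # ...and always keep the true tails (the positives).
        weight[i, list(tails)] = 1.0
    return weight
```

With neg_weight around 0.25, only about a quarter of the non-true entities contribute to each row's loss, which is the "negative sampling rates as low as 25%" setting discussed above.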
@ZichaoHuang I also ran the ProjE_softmax model for over 30 epochs, and the results are below:
[VALID] ITER 30 [HEAD PREDICTION] MEAN RANK: 276.7 FILTERED MEAN RANK 81.4 HIT@10 0.412 FILTERED HIT@10 0.733
[VALID] ITER 30 [TAIL PREDICTION] MEAN RANK: 174.7 FILTERED MEAN RANK 59.6 HIT@10 0.497 FILTERED HIT@10 0.782
[TEST] ITER 30 [HEAD PREDICTION] MEAN RANK: 273.3 FILTERED MEAN RANK 80.5 HIT@10 0.416 FILTERED HIT@10 0.735
[TEST] ITER 30 [TAIL PREDICTION] MEAN RANK: 180.9 FILTERED MEAN RANK 60.0 HIT@10 0.494 FILTERED HIT@10 0.784
Hi @ocsponge, I have the same issue: my final result after training is about 10 points below the Hits@10 reported in the paper. I tried reducing the learning rate, but it didn't work. Were you able to solve this issue in your environment?