caozhen-alex closed this issue 5 years ago
You need to keep training the system until it converges. I can easily get a BLEU score of around 14.5 with the released config file. I just committed my results.
Hi Freesunshine,
I just followed the README.md to generate questions. I saw "max_epochs": 15 and "with_POS": true in config.json. Should I set max_epochs to a larger number? And I may need to set "with_POS" to false, since it seems that you don't use POS features in the paper. Please correct me if I am wrong.
with_POS should be false.
I reproduced your claimed results from your released output file test.sota.tok.gz.
But I got quite different results from my own trained model. I believe we used two different config files. Can you help me check?
Thanks, and Merry Xmas!
I checked my NP2P.sota.config.json file, and it matches the config I released previously. I have also attached NP2P.sota.config.json with this update. I can check your config file as well.
Alternatively, I suggest you update your repository and retrain your model with the latest config.
I found several different settings in config.json:
"attention_vec_size": 300, # 100->300 "learning_rate": 0.001, # 0.005->0.001 "lambda_l2": 1e-08, # 0.001->1e-08 "context_lstm_dim": 300, # 100 -> 300 "aggregation_lstm_dim": 300, # 100->300 "with_highway": false, # true -> false "compress_input_dim": 300, # 100->300 "gen_hidden_size": 300, # 100->300 "num_softmax_samples": 1000, # 100->1000
The model is training with the new settings! Thanks, Freesunshine!
Hi Freesunshine,
The results I got from the code and config file you provided are 12.01 / 17.62 / 40.53, which is quite different from the results reported in the paper, 13.98 / 18.77 / 42.72. I wonder whether you provided the wrong config file; can you check?
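One thing worth ruling out when numbers like these disagree is the scoring script itself: BLEU in particular can shift by a point or more depending on tokenization and smoothing. Below is a minimal, self-contained sentence-level BLEU-4 sketch (standard formula with add-one smoothing for illustration); it is NOT the evaluation script used by this repository, just a way to sanity-check outputs:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Count the n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU with add-one smoothing (illustrative only)."""
    precisions = []
    for n in range(1, max_n + 1):
        ref = ngram_counts(reference, n)
        hyp = ngram_counts(hypothesis, n)
        # clipped n-gram matches against the reference counts
        overlap = sum(min(count, ref[gram]) for gram, count in hyp.items())
        total = sum(hyp.values())
        precisions.append((overlap + 1) / (total + 1))  # add-one smoothing
    # brevity penalty for hypotheses shorter than the reference
    if len(hypothesis) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / len(hypothesis))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "what is the capital city of france ?".split()
hyp = "what is the capital of france ?".split()
print(round(bleu(ref, hyp), 4))
```

If both sides score the same generated file with the same script and still disagree, the difference is in the model or config rather than the metric.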