zhiguowang / BiMPM

BiMPM: Bilateral Multi-Perspective Matching for Natural Language Sentences

Can we have the configuration files to reproduce the results in the paper? #10

Open xycforgithub opened 7 years ago

xycforgithub commented 7 years ago

Hi, I'm trying to reproduce your results on SNLI. Can we have the configuration files for that? Some configuration options seem unclear to me, e.g. NER_dim, POS_dim, and max_char_per_word. Can we reproduce all the results using the default parameters? Thank you very much!

zhiguowang commented 7 years ago

I didn't use NER_dim or POS_dim for the SNLI experiment; those options were added for some other internal experiments. They can be activated by setting "with_NER" and "with_POS" to true.

I guess you can reproduce my results if you use a config file similar to https://drive.google.com/file/d/0B0PlTAo--BnaQ3N4cXR1b0Z0YU0/view
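(For readers wondering what those switches control: conceptually, they decide whether extra POS/NER embedding channels are concatenated onto each word's representation before the context layers. The sketch below illustrates that pattern in TensorFlow 1.x; it is not BiMPM's actual code, and names such as `build_word_representation`, `pos_embedding`, and the vocab sizes are illustrative.)

```python
import tensorflow as tf  # TF 1.x style, matching the era of this repo

def build_word_representation(word_emb, pos_ids=None, ner_ids=None,
                              with_POS=False, with_NER=False,
                              POS_dim=20, NER_dim=20,
                              pos_vocab_size=50, ner_vocab_size=20):
    """Illustrative sketch: optionally concatenate POS/NER embeddings
    onto the word embeddings, gated by with_POS / with_NER flags."""
    pieces = [word_emb]  # word_emb: [batch, seq_len, word_dim]
    if with_POS:
        pos_table = tf.get_variable("pos_embedding", [pos_vocab_size, POS_dim])
        pieces.append(tf.nn.embedding_lookup(pos_table, pos_ids))
    if with_NER:
        ner_table = tf.get_variable("ner_embedding", [ner_vocab_size, NER_dim])
        pieces.append(tf.nn.embedding_lookup(ner_table, ner_ids))
    # Concatenate along the feature axis; with both flags off this is a no-op.
    return tf.concat(pieces, axis=2)
```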

xycforgithub commented 7 years ago

Thanks for the response! However, I only get 82.05% accuracy on SNLI with the settings in that config file. Have you run the experiment using that file?

zhiguowang commented 7 years ago

I'm on vacation now. I will find my config for you when I'm back at work.

zhiguowang commented 7 years ago

Here is one of my configs for the SNLI experiment:

MP_dim=10, NER_dim=20, POS_dim=20, word_level_MP_dim=-1,
context_layer_num=2, context_lstm_dim=100, aggregation_layer_num=2, aggregation_lstm_dim=100,
char_emb_dim=20, char_lstm_dim=100, max_char_per_word=10, highway_layer_num=1,
batch_size=60, max_epochs=10, max_sent_length=100, learning_rate=0.001, optimize_type='adam',
dropout_rate=0.1, lambda_l2=0.0, fix_word_vec=True,
with_POS=False, with_NER=False, with_highway=True, with_match_highway=True, with_aggregation_highway=True,
with_filter_layer=False, with_lex_decomposition=False, lex_decompsition_dim=-1,
wo_char=False, wo_full_match=False, wo_maxpool_match=False, wo_attentive_match=False,
wo_max_attentive_match=False, wo_left_match=False, wo_right_match=False,
base_dir='/u/zhigwang/zhigwang1/sentence_match/snli', suffix='snli_7'

With this config, I got 87.31% accuracy on the dev set.
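(A note for reproducers: one way to drive the training script with the settings above is to expand them into command-line flags. The sketch below assumes the entry point is `src/SentenceMatchTrainer.py` and that each setting maps to an identically named argparse flag, with the True-valued switches exposed as store_true flags; all of these are assumptions to verify against the actual script.)

```python
# Hypothetical launcher: expands the config reported above into CLI flags.
# Assumes src/SentenceMatchTrainer.py accepts identically named arguments;
# verify flag names and boolean handling against the real script.
import subprocess

value_args = {
    "MP_dim": 10, "context_layer_num": 2, "context_lstm_dim": 100,
    "aggregation_layer_num": 2, "aggregation_lstm_dim": 100,
    "char_emb_dim": 20, "char_lstm_dim": 100, "max_char_per_word": 10,
    "highway_layer_num": 1, "batch_size": 60, "max_epochs": 10,
    "max_sent_length": 100, "learning_rate": 0.001, "optimize_type": "adam",
    "dropout_rate": 0.1, "lambda_l2": 0.0, "suffix": "snli_7",
}
# Switches reported as True in the config; assumed to be store_true flags.
true_flags = ["fix_word_vec", "with_highway", "with_match_highway",
              "with_aggregation_highway"]

cmd = ["python", "src/SentenceMatchTrainer.py"]
for name, value in value_args.items():
    cmd += ["--" + name, str(value)]
cmd += ["--" + flag for flag in true_flags]
subprocess.run(cmd, check=True)
```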

cactiball commented 7 years ago

Could you please also share the configuration files for the WikiQA and TrecQA experiments that achieve your best results in the paper? Thank you very much!