This repository includes the source code for natural language sentence matching. In short, the program takes two sentences as input and predicts a label for the pair. You can use it for tasks such as paraphrase identification, natural language inference, and duplicate question identification. More details about the underlying model can be found in our paper published at IJCAI 2017. Please cite our paper when you use this program! :heart_eyes:
Both the training and test sets must be in tab-separated format. Each line in the training (or test) file corresponds to one instance and should be arranged as
label sentence#1 sentence#2 instanceID
For more details about the data format, you can download the SNLI and Quora Question Pairs datasets used in our paper.
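For illustration only, a training line in this format might look like the following (the labels follow SNLI; the sentences and instance IDs are made up, and fields are separated by tabs):

entailment	A man is playing a guitar on stage.	A person is playing an instrument.	100001
contradiction	A man is playing a guitar on stage.	The stage is empty.	100002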
You can find the training script at BiMPM/src/SentenceMatchTrainer.py
First, edit the configuration file at ${workspace}/BiMPM/configs/snli.sample.config (or ${workspace}/BiMPM/configs/quora.sample.config). You need to change "train_path", "dev_path", "word_vec_path", "model_dir", and "suffix" to match your own setup.
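As a minimal sketch (assuming the sample config is a JSON file; the paths and the "snli" suffix below are placeholders rather than values shipped with the repository), the fields you need to edit would look like:

```json
{
  "train_path": "/path/to/snli/train.tsv",
  "dev_path": "/path/to/snli/dev.tsv",
  "word_vec_path": "/path/to/wordvec.txt",
  "model_dir": "/path/to/models",
  "suffix": "snli"
}
```

The sample config contains other model and training settings as well; only the fields listed above need to point to your own data and output locations.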
Second, launch the training job with the following command:
python ${workspace}/BiMPM/src/SentenceMatchTrainer.py --config_path ${workspace}/BiMPM/configs/snli.sample.config
You can find the testing script at BiMPM/src/SentenceMatchDecoder.py
python ${workspace}/BiMPM/src/SentenceMatchDecoder.py --in_path ${your_path_to}/dev.tsv --word_vec_path ${your_path_to}/wordvec.txt --out_path ${your_path_to}/result.json --model_prefix ${model_dir}/SentenceMatch.${suffix}
Where "model_dir" and "suffix" are the variables set in your configuration file.
The output file is a JSON file with the following format.
[
  {
    "ID": "instanceID",
    "truth": label,
    "sent1": sentence1,
    "sent2": sentence2,
    "prediction": prediction,
    "probs": probs_for_all_possible_labels
  },
  {
    "ID": "instanceID",
    "truth": label,
    "sent1": sentence1,
    "sent2": sentence2,
    "prediction": prediction,
    "probs": probs_for_all_possible_labels
  }
]
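As a rough sketch (not part of the repository), assuming result.json is a JSON array with the fields shown above, you could load it and compute accuracy like this; the file path is a placeholder:

```python
import json

# Load the decoder output produced by SentenceMatchDecoder.py (path is a placeholder).
with open("result.json") as f:
    results = json.load(f)

# Count instances where the gold label matches the predicted label.
correct = sum(1 for r in results if r["truth"] == r["prediction"])
print("Accuracy: %.4f (%d/%d)" % (correct / len(results), correct, len(results)))
```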
Please let me know if you encounter any problems.