
am-parser

The AM parser is a compositional semantic parser with high accuracy across a wide range of graphbanks. This repository is a modular implementation based on AllenNLP and PyTorch.

Online Demo

Try out the online demo of our parser!

Papers

This repository (together with its sister repository, am-tools) contains the code for several papers.

For a coherent and thorough explanation of the AM parser, you can also look at Jonas Groschwitz's PhD thesis (2019).

Setup

Requirements

We recommend running am-parser inside a Docker container.

You can later detach from the running container with Ctrl-P followed by Ctrl-Q, and reconnect with docker attach CONTAINER_NAME (or open a fresh shell with docker exec -it CONTAINER_NAME bash), as sketched below.
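A minimal sketch of that workflow; the image and container names below are placeholders for illustration, not the project's actual Docker setup:

```bash
# Start an interactive container (image and name are placeholders):
docker run -it --name am-parser-dev ubuntu:20.04 bash

# Inside the container, press Ctrl-P followed by Ctrl-Q to detach
# without stopping it.

# Reattach to the original session:
docker attach am-parser-dev

# Or open a fresh shell in the running container:
docker exec -it am-parser-dev bash
```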

If you still have problems running the parser, check the list of third-party packages in the wiki. This list also includes the packages needed to run branches other than the master branch.

Internal note: this is already set up on the Saarland servers; see details here.

Training our parser on new graphbanks and graph formalisms

Have a look here.

Quick Guide to the Pretrained Models of Lindemann et al. (2019)

This is a quick guide on how to use our already trained models to make predictions, either on the official test data to reproduce our results, or on arbitrary sentences.

You can find documentation on how to train the parser on the wiki pages.

Reproducing our experiment results

From the main directory, run bash scripts/predict.sh with the appropriate arguments (run it with -h for an overview).

For example, say you want to do DM parsing and INPUT is the path to your sdp file, then

bash scripts/predict.sh -i INPUT -T DM -o example/

will create a file DM.sdp in the example folder with graphs for the sentences in INPUT, as well as print evaluation scores compared to the gold graphs in INPUT.

With this pre-trained model (this is the MTL+BERT version, corresponding to the bottom-most line in Table 1 in the paper) you should get (labeled) F-scores close to the following on the test sets:

| DM id | DM ood | PAS id | PAS ood | PSD id | PSD ood | EDS (Smatch) | EDS (EDM) | AMR 2017 |
|-------|--------|--------|---------|--------|---------|--------------|-----------|----------|
| 94.1  | 90.5   | 94.9   | 92.9    | 81.8   | 81.6    | 90.4         | 85.2      | 76.3     |

The F-score for AMR 2017 is considerably better than the one published in the paper; the improvement stems from bug fixes in the postprocessing. Please note that these evaluation scores were obtained without the -f option, and since the parser uses a timeout, your results might differ slightly depending on your CPU; this is mainly relevant for AMR. We used Intel Xeon E5-2687W v3 processors.
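For comparison, a run with the -f flag mentioned above would look as follows; what exactly -f changes is best checked via the script's -h output, and the "faster evaluation" reading in the comment is an assumption:

```bash
# Same DM prediction as above, but with -f (presumably a faster evaluation
# mode; the scores in the table were produced WITHOUT this flag):
bash scripts/predict.sh -i INPUT -T DM -o example/ -f
```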

Getting graphs from raw text

From the main directory, run bash scripts/predict_from_raw_text.sh with the appropriate arguments (run it with -h for an overview).

For example, say you want to do DM parsing and make predictions for the sentences in example/input.txt, then

bash scripts/predict_from_raw_text.sh -i example/input.txt -T DM -o example/

will create a file DM.sdp in the example folder with graphs for the sentences in example/input.txt.
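A minimal end-to-end sketch; that the input file contains one plain-text sentence per line is an assumption about the expected format, and my_sentences.txt is a made-up path:

```bash
# Create a small hypothetical input file (one sentence per line is assumed):
cat > my_sentences.txt <<'EOF'
The dog sleeps soundly.
Every child wants a cookie.
EOF

# Parse the sentences into DM graphs; the output goes to example/DM.sdp.
bash scripts/predict_from_raw_text.sh -i my_sentences.txt -T DM -o example/
```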

Notes

After the bugfix in the AMR postprocessing, the parser achieves the following Smatch scores on the test sets (averages of 5 runs with standard deviations):

|                    | AMR 2015   | AMR 2017   |
|--------------------|------------|------------|
| Single task, GloVe | 70.0 ± 0.1 | 71.2 ± 0.1 |
| Single task, BERT  | 75.1 ± 0.1 | 76.0 ± 0.2 |

Things to play around with

When training your own model, the configuration files have many places where you can make changes and see how they affect parsing performance. There are currently two edge models implemented: that of Dozat & Manning (2016) and that of Kiperwasser & Goldberg (2016). Apart from the edge models, there are also two different loss functions: a softmax log-likelihood loss and a hinge loss that requires running the Chu-Liu/Edmonds (CLE) algorithm at training time. A hedged sketch of trying such a change is given below.
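In this sketch, the config path, the edited keys, and the train.py invocation are illustrative assumptions rather than verified names; see the training documentation in the wiki for the actual procedure:

```bash
# Copy an existing training config (path is a placeholder) and edit it,
# e.g. to switch the edge model or the loss function:
cp jsonnets/bert/DM.jsonnet my_experiment.jsonnet
$EDITOR my_experiment.jsonnet   # adjust the edge-model / loss entries here

# Retrain with the modified config; "train.py ... -s <model dir>" is an
# assumed AllenNLP-style entry point, not a documented interface:
python -u train.py my_experiment.jsonnet -s models/my_experiment/
```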

Third Party Packages

An overview of the third-party packages we used can be found at https://github.com/coli-saar/am-parser/wiki/Third-Party-Packages.