AMR Parsing via Graph-Sequence Iterative Inference

Code for our ACL 2020 paper,

AMR Parsing via Graph-Sequence Iterative Inference [preprint]

Deng Cai and Wai Lam.

Requirements

The code has been tested on Python 3.6.

All dependencies are listed in requirements.txt.

The code has two branches:

  1. master branch corresponds to the experiments with graph recategorization.
  2. no-recategorize branch corresponds to the experiments without graph recategorization.

AMR Parsing with Pretrained Models

  1. We are still working on a convenient API for parsing raw sentences. For now, a hacky solution is to convert your input data into the LDC format (e.g., the novel The Little Prince in LDC format) and treat it as our test set. Wrap every sentence like this, with (d / dummy) used as a placeholder graph (see the sketch after this list for one way to automate the conversion):

    # ::id 0
    # ::snt This is a sentence.
    (d / dummy)
  2. Data Preprocessing: follow Data Preparation steps 3-4.

  3. sh work.sh => {load_path}{output_suffix}.pred

  4. sh postprocess_2.0.sh {load_path}{output_suffix}.pred => {load_path}{output_suffix}.pred.post
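
The following is a minimal Python sketch of the wrapping described in step 1. It is not part of this repo, and the input/output file names (raw_sentences.txt, test.txt) are only illustrative.

    # Sketch: wrap raw sentences in the LDC AMR format so they can be fed to the
    # parser as a pseudo test set. File names below are illustrative, not prescribed.
    def wrap_sentences(in_path, out_path):
        with open(in_path, encoding="utf-8") as fin, \
             open(out_path, "w", encoding="utf-8") as fout:
            for idx, line in enumerate(fin):
                sent = line.strip()
                if not sent:
                    continue
                fout.write(f"# ::id {idx}\n")
                fout.write(f"# ::snt {sent}\n")
                fout.write("(d / dummy)\n\n")  # placeholder graph

    if __name__ == "__main__":
        wrap_sentences("raw_sentences.txt", "test.txt")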

Download Links

Model                          Link
AMR2.0+BERT+GR (Smatch 80.2)   amr2.0.bert.gr.tar.gz
AMR2.0+BERT (Smatch 78.7)      amr2.0.bert.tar.gz
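
After downloading one of the archives, it can be unpacked with standard tools, for example (a minimal sketch; the extraction directory name is arbitrary):

    # Sketch: unpack a downloaded pretrained model archive.
    import tarfile

    with tarfile.open("amr2.0.bert.gr.tar.gz", "r:gz") as tar:
        tar.extractall("pretrained/amr2.0.bert.gr")

The extracted checkpoint then serves as the load_path referenced in the steps above.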

Train New Parsers

The following instructions assume that you're training on AMR 2.0 (LDC2017T10). For AMR 1.0, the procedure is similar.

Data Preparation

  1. Unzip the corpus to data/AMR/LDC2017T10.

  2. Prepare training/dev/test splits:

    sh prepare_data.sh -v 2 -p data/AMR/LDC2017T10

  3. Download Artifacts:

    sh download_artifacts.sh

  4. Feature Annotation:

    We use Stanford CoreNLP (version 3.9.2) for lemmatization, POS tagging, etc. (a sketch of querying the server directly appears after this list).

    sh run_standford_corenlp_server.sh
    sh annotate_features.sh data/AMR/amr_2.0
  5. Data Preprocessing:

    sh preprocess_2.0.sh

  6. Building Vocabs:

    sh prepare.sh data/AMR/amr_2.0
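
For reference, the sketch below queries a running CoreNLP server directly for the same kinds of features (lemma, POS, NER) used in step 4. It assumes the server started by run_standford_corenlp_server.sh listens on localhost:9000 (CoreNLP's default port); adjust the URL if the script uses a different port.

    # Sketch: query a running Stanford CoreNLP server for lemma/POS/NER annotations.
    # Assumes the default port 9000; not a replacement for annotate_features.sh.
    import json
    import requests

    def annotate(sentence, url="http://localhost:9000"):
        props = {"annotators": "tokenize,ssplit,pos,lemma,ner", "outputFormat": "json"}
        resp = requests.post(url, params={"properties": json.dumps(props)},
                             data=sentence.encode("utf-8"))
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        doc = annotate("The little prince lived on a small planet.")
        for tok in doc["sentences"][0]["tokens"]:
            print(tok["word"], tok["lemma"], tok["pos"], tok["ner"])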

Training

sh train.sh data/AMR/amr_2.0

The training process produces many checkpoints and the corresponding output on the dev set. To select the best checkpoint, evaluate the dev output files (postprocessing is required first). It is recommended to use fast smatch for model selection; a sketch of this selection loop follows.
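
A minimal sketch of such a selection loop, shelling out to the repo's own scripts. The glob pattern for dev outputs, the dev gold path (data/AMR/amr_2.0/dev.txt), and the regex on the score output are assumptions; check the actual file layout and the output format of your smatch script (the repo's fast smatch can be substituted).

    # Sketch of checkpoint selection by dev Smatch. The dev-output glob pattern,
    # the dev gold path, and the parsing of the score are assumptions; adapt them
    # to your setup.
    import glob
    import re
    import subprocess

    def dev_smatch(pred_file, gold_file):
        subprocess.run(["sh", "postprocess_2.0.sh", pred_file], check=True)
        out = subprocess.run(["sh", "compute_smatch", pred_file + ".post", gold_file],
                             capture_output=True, text=True, check=True).stdout
        m = re.search(r"(\d*\.\d+)", out)  # assumes the F-score is printed as a float
        return float(m.group(1)) if m else -1.0

    best = max(glob.glob("ckpt*dev*.pred"),  # illustrative pattern for dev outputs
               key=lambda f: dev_smatch(f, "data/AMR/amr_2.0/dev.txt"))
    print("best dev output:", best)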

Evaluation

For evaluation, follow AMR Parsing with Pretrained Models steps 2-4, then run sh compute_smatch {load_path}{output_suffix}.pred.post data/AMR/amr_2.0/test.txt.

Notes

  1. We adopted code snippets from stog for data preprocessing.

  2. The dbpedia-spotlight service occasionally does not work, so we have disabled it.

Contact

For any questions, please drop an email to Deng Cai.