DMRST_Parser

An implementation of the paper "DMRST: A Joint Framework for Document-Level Multilingual RST Discourse Segmentation and Parsing".

Introduction

Package Requirements

The model training and inference scripts were tested with the following library versions; a quick environment check is sketched after the list:

  1. pytorch==1.7.1
  2. transformers==4.8.2
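
Since other versions are untested, it can help to verify the environment up front. A minimal sanity check (a sketch; the version strings come directly from the list above):

import torch
import transformers

# The scripts were tested only with these versions; newer releases may
# work, but they are not covered by the list above.
assert torch.__version__.startswith("1.7.1"), torch.__version__
assert transformers.__version__ == "4.8.2", transformers.__version__
print("Environment OK:", torch.__version__, transformers.__version__)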

Training: How to convert treebanks to our format for this framework

Training: How to train a model with a pre-processed treebank

Inference: Supported Languages

Inference: Data Format
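
A sketch of the assumed input layout (an assumption based on the document-level, multilingual setting described in the paper; the repository's inference script is the authority on the exact format): each document is one raw string, and a batch is a plain list of such strings.

# Assumed batch layout: one raw string per document, no pre-segmentation.
documents = [
    "Although the weather was bad, the match went ahead. Fans stayed until the end.",
    # Multilingual input in the same batch (Spanish: "The model segments the
    # document into EDUs and then builds the RST tree.")
    "El modelo segmenta el documento en EDUs y luego construye el árbol RST.",
]
print(len(documents), "documents ready for parsing")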

Inference: How to use it for parsing
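
Conceptually, the parser pairs EDU segmentation with a labeled binary tree carrying nuclearity and rhetorical relations. The sketch below only illustrates that output shape with a hypothetical RSTNode class; the concrete format emitted by this repository's scripts may differ.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class RSTNode:
    # Hypothetical container for one node of a predicted RST tree.
    edu_span: Tuple[int, int]         # inclusive range of EDU indices
    nuclearity: Optional[str] = None  # e.g. "Nucleus-Satellite"
    relation: Optional[str] = None    # e.g. "Elaboration"
    left: Optional["RSTNode"] = None
    right: Optional["RSTNode"] = None

# Two EDUs joined by an Elaboration relation, with the first EDU as nucleus.
root = RSTNode(edu_span=(1, 2), nuclearity="Nucleus-Satellite",
               relation="Elaboration",
               left=RSTNode(edu_span=(1, 1)),
               right=RSTNode(edu_span=(2, 2)))
print(root.relation, root.nuclearity, root.edu_span)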

Citation

If you find this work helpful, please cite our papers in your publications, reports, slides, and theses.

@inproceedings{liu-etal-2021-dmrst,
    title = "{DMRST}: A Joint Framework for Document-Level Multilingual {RST} Discourse Segmentation and Parsing",
    author = "Liu, Zhengyuan and Shi, Ke and Chen, Nancy",
    booktitle = "Proceedings of the 2nd Workshop on Computational Approaches to Discourse",
    month = nov,
    year = "2021",
    address = "Punta Cana, Dominican Republic and Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.codi-main.15",
    pages = "154--164",
}
@inproceedings{liu2020multilingual,
    title = "Multilingual Neural {RST} Discourse Parsing",
    author = "Liu, Zhengyuan and Shi, Ke and Chen, Nancy",
    booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
    year = "2020",
    pages = "6730--6738",
}