This is the Python 3 code for the paper "Open Domain Event Extraction Using Neural Latent Variable Models" (ACL 2019).
Modify lines 24 and 25 in `cache_features.py` accordingly.
The fine-tuning process needs 2 × GTX 1080Ti GPUs; if fine-tuning is too costly or fails to complete, please use the initial parameters from AllenNLP instead.
Please note that fine-tuning the ELMo model is optional if you just want to run the whole procedure or use the model elsewhere.
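For orientation, the edit to `cache_features.py` mentioned above amounts to repointing the ELMo file paths. The variable names and paths below are hypothetical illustrations, not the actual contents of lines 24 and 25:

```python
# Hypothetical sketch of the kind of change meant above: point the ELMo
# options/weights paths either at your fine-tuned files or at the initial
# AllenNLP parameters. Names and paths are assumptions, not the actual
# contents of cache_features.py.
options_file = "/path/to/elmo/options.json"   # line 24 (assumed)
weight_file = "/path/to/elmo/weights.hdf5"    # line 25 (assumed)
```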
The data is HERE.
Run

```shell
sudo chown [YOUR_USER] [PROCESSED_DIR]
```

and specify the directories in `setting.yaml` manually.

Run

```shell
pip install -r requirements.txt
```

to install the required packages.

Then run the pipeline step by step:

```shell
python cache_features.py
python train_avitm.py
python generate_slot_topN.py
python decode.py
cd slotcoherence && ./run-oc.sh
```

Finally, open `visualize_test.ipynb` for visualization.
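The coherence step in `slotcoherence` scores each slot's top-N head words by how strongly they co-occur in a reference corpus. As a rough illustration of the idea (this is a toy PMI-style sketch with made-up documents, not the actual `run-oc.sh` implementation):

```python
# Toy sketch of PMI-based topic/slot coherence: average pairwise PMI of
# the top words, using document-level co-occurrence probabilities.
import math
from itertools import combinations

def coherence(top_words, docs, eps=1e-12):
    """Average pairwise PMI of top_words over a list of token lists."""
    n = len(docs)
    doc_sets = [set(d) for d in docs]
    def p(*words):
        return sum(all(w in d for w in words) for d in doc_sets) / n
    pairs = list(combinations(top_words, 2))
    score = 0.0
    for w1, w2 in pairs:
        score += math.log((p(w1, w2) + eps) / ((p(w1) + eps) * (p(w2) + eps)))
    return score / len(pairs)

# Hypothetical mini-corpus for demonstration only.
docs = [["police", "arrest", "suspect"],
        ["police", "suspect", "court"],
        ["market", "stocks", "fall"]]
print(round(coherence(["police", "suspect"], docs), 3))  # → 0.405
```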
Output files:

- `*.json.pt`: cached features of ODEE input
- `*.json.answer`: decoded full results of a news group
- `*.json.template`: decoded template of a news group
- `*.json.events.topN`: decoded top-N events of a news group
- `*.json.labeled`: labeled events of the test split
- `slotcoherence/slot_head_words.txt`: generated top-N head words for each slot

Please cite our ACL 2019 paper:
```bibtex
@inproceedings{DBLP:conf/acl/LiuHZ19,
  author    = {Xiao Liu and
               Heyan Huang and
               Yue Zhang},
  title     = {Open Domain Event Extraction Using Neural Latent Variable Models},
  booktitle = {Proceedings of the 57th Conference of the Association for Computational
               Linguistics, {ACL} 2019, Florence, Italy, July 28 - August 2, 2019,
               Volume 1: Long Papers},
  pages     = {2860--2871},
  year      = {2019},
  crossref  = {DBLP:conf/acl/2019-1},
  url       = {https://www.aclweb.org/anthology/P19-1276/},
  timestamp = {Wed, 31 Jul 2019 17:03:52 +0200},
  biburl    = {https://dblp.org/rec/bib/conf/acl/LiuHZ19},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
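As a quick sanity check after a run, the per-suffix output files listed above can be counted with a small stdlib script. The suffixes follow the file list; the demo directory and file names are hypothetical:

```python
# Count ODEE output files per suffix under a processed directory.
# The suffixes follow the file list above; everything else is illustrative.
import tempfile
from collections import Counter
from pathlib import Path

SUFFIXES = (".json.pt", ".json.answer", ".json.template",
            ".json.events.topN", ".json.labeled")

def inventory(processed_dir):
    """Return a Counter mapping each known suffix to its file count."""
    counts = Counter()
    for path in Path(processed_dir).rglob("*"):
        for suffix in SUFFIXES:
            if path.name.endswith(suffix):
                counts[suffix] += 1
    return counts

# Demo on a throwaway directory with hypothetical file names.
with tempfile.TemporaryDirectory() as d:
    for name in ("a.json.pt", "a.json.answer", "b.json.pt"):
        (Path(d) / name).touch()
    print(inventory(d)[".json.pt"])  # → 2
```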