MANLP-suda / JML


Joint Multi-modal Aspect-Sentiment Analysis with Auxiliary Cross-modal Relation Detection

Thanks for visiting this repo. This project jointly extracts aspects and their sentiments in a multi-modal scenario, with an auxiliary image-text relation detection module (see the paper cited below).
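To make the idea concrete, here is a minimal conceptual sketch of the joint objective: a main sequence-labeling loss for aspect-sentiment extraction plus a weighted auxiliary loss for image-text relation detection. All class, tensor, and parameter names below are illustrative assumptions, not the repo's actual code.

```python
import torch
import torch.nn as nn

class JointObjectiveSketch(nn.Module):
    """Illustrative only: main aspect-sentiment tagging loss plus an
    auxiliary cross-modal relation detection loss, weighted by aux_weight."""

    def __init__(self, aux_weight: float = 0.5):
        super().__init__()
        self.aux_weight = aux_weight
        self.ce = nn.CrossEntropyLoss()

    def forward(self, tag_logits, tag_labels, rel_logits, rel_labels):
        # Main task: token-level tags for joint aspect-sentiment extraction.
        main_loss = self.ce(tag_logits.view(-1, tag_logits.size(-1)),
                            tag_labels.view(-1))
        # Auxiliary task: is the image related to the text?
        rel_loss = self.ce(rel_logits, rel_labels)
        return main_loss + self.aux_weight * rel_loss
```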

🔎 Motivation

⚙️ Installation

Make sure the following dependencies are installed.
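A rough environment sketch (the package choices and versions here are our assumptions, not the repo's authoritative requirements; check the repo for the exact list):

$ pip install torch transformers numpy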

💾 Dataset

🚀 Quick start

Training proceeds in three steps: data preprocessing, relation-module pre-training, and joint model training.

Download

💾 Data Preprocessing

In "data/Twitter/twitter15/pre_data" we provide a pre-processing script that converts the raw data into a form with multi-aspect samples. Set the path of the data you need inside the script, then run:

$ python pre_data.py
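As a purely hypothetical illustration of the multi-aspect grouping (the field names below are made up; the actual input and output formats of pre_data.py may differ):

```python
# Hypothetical sketch of the multi-aspect grouping step.
# Input: one record per (sentence, aspect) pair; output: one record per
# sentence carrying all of its aspects. Field names are illustrative.
from collections import defaultdict

def group_aspects(records):
    grouped = defaultdict(lambda: {"aspects": [], "image": None})
    for r in records:
        g = grouped[r["sentence"]]
        g["image"] = r["image"]
        g["aspects"].append({"term": r["aspect"], "sentiment": r["sentiment"]})
    return [{"sentence": s, **g} for s, g in grouped.items()]

records = [
    {"sentence": "great phone , bad battery", "image": "img1.jpg",
     "aspect": "phone", "sentiment": "positive"},
    {"sentence": "great phone , bad battery", "image": "img1.jpg",
     "aspect": "battery", "sentiment": "negative"},
]
print(group_aspects(records))  # one multi-aspect sample for the sentence
```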

Then convert the pre-processed data into the model's input format with "data/raw2new.py":

$ python raw2new.py

Relation training

Joint model training

If you have completed the relation pre-training step above, you can train the joint module directly.
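A minimal sketch of how the two stages fit together, assuming a standard warm-start pattern (module and file names are illustrative, not the repo's API): the relation detector is trained first, and its weights initialize the joint model's relation branch.

```python
import torch
import torch.nn as nn

# Illustrative names only; the repo's actual modules and paths differ.
class RelationDetector(nn.Module):
    def __init__(self, dim: int = 768, n_relations: int = 2):
        super().__init__()
        self.classifier = nn.Linear(dim, n_relations)

    def forward(self, fused_features):
        return self.classifier(fused_features)

# Step 1 (relation training): pre-train and save the detector.
relation = RelationDetector()
torch.save(relation.state_dict(), "relation_pretrained.pt")

# Step 2 (joint training): warm-start the joint model's relation branch.
relation_branch = RelationDetector()
relation_branch.load_state_dict(torch.load("relation_pretrained.pt"))
```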

🏁 Experiment

📜 Citation

@inproceedings{JuZXLLZZ21,
  title     = {Joint Multi-modal Aspect-Sentiment Analysis with Auxiliary Cross-modal Relation Detection},
  author    = {Xincheng Ju and Dong Zhang and Rong Xiao and Junhui Li and Shoushan Li and Min Zhang and Guodong Zhou},
  booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, {EMNLP} 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021},
  year      = {2021},
}

🤘 Furthermore

If you have any questions, feel free to open an issue. We are honored to discuss them with you!

Thanks~