# CMMT-Code


Cross-Modal Multitask Transformer for End-to-End Multimodal Aspect-Based Sentiment Analysis

Author: Li YANG, yang0666@e.ntu.edu.sg

## The Corresponding Paper

[Cross-modal multitask transformer for end-to-end multimodal aspect-based sentiment analysis](https://www.sciencedirect.com/science/article/pii/S0306457322001479)

The framework of the CMMT model:

*(Framework diagram: Screenshot 2024-04-10 at 10 38 04 AM)*

## Data

## Requirements

## Code Usage

### Training for CMMT

```sh
sh run_cmmt_crf.sh
```

## Acknowledgements

## Citation Information

Yang, L., Na, J. C., & Yu, J. (2022). Cross-modal multitask transformer for end-to-end multimodal aspect-based sentiment analysis. Information Processing & Management, 59(5), 103038.
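
If you prefer BibTeX, a possible entry assembled from the reference above is sketched below; the entry key is illustrative, and author names are kept as the initials given in the reference:

```bibtex
@article{yang2022cmmt,
  title   = {Cross-modal multitask transformer for end-to-end multimodal aspect-based sentiment analysis},
  author  = {Yang, L. and Na, J. C. and Yu, J.},
  journal = {Information Processing \& Management},
  volume  = {59},
  number  = {5},
  pages   = {103038},
  year    = {2022}
}
```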