Affective Region Recognition and Fusion Network for Target-Level Multimodal Sentiment Classification (Paper)
Search and replace the relevant feature paths in the code, e.g.:
res_path = 'feature path'
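As a minimal sketch of that substitution step (the file name demo.py and the feature directory below are assumptions, not the repository's actual values), a one-line sed replacement could look like:

```shell
# Create a stand-in file containing the placeholder assignment,
# then rewrite res_path to point at an assumed feature directory.
printf "res_path = 'feature path'\n" > demo.py
sed -i "s|res_path = 'feature path'|res_path = './data/twitter2015/features'|" demo.py
cat demo.py
```

Adjust the replacement path to wherever your extracted image features actually live.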
Train the model:
python train_and_test.py \
    --bert_model=bert-base-uncased \
    --output_dir=./output \
    --data_dir=./data/twitter2015 \
    --task_name=twitter2015 \
    --do_train
(For the Twitter-2017 dataset, use --data_dir=./data/twitter2017 and --task_name=twitter2017.)
Test the model:
python train_and_test.py \
    --bert_model=bert-base-uncased \
    --output_dir=./output \
    --data_dir=./data/twitter2015 \
    --task_name=twitter2015 \
    --do_eval
(For the Twitter-2017 dataset, use --data_dir=./data/twitter2017 and --task_name=twitter2017.)
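The flags above map onto a standard argparse setup roughly like the following sketch (the actual train_and_test.py may define these options differently and accept additional flags):

```python
import argparse

def build_parser():
    # Sketch of the documented CLI flags; names mirror the commands above.
    p = argparse.ArgumentParser()
    p.add_argument('--bert_model', default='bert-base-uncased')
    p.add_argument('--output_dir', default='./output')
    p.add_argument('--data_dir', required=True)
    p.add_argument('--task_name', choices=['twitter2015', 'twitter2017'],
                   required=True)
    p.add_argument('--do_train', action='store_true')
    p.add_argument('--do_eval', action='store_true')
    return p

# Example: parse the training invocation for the Twitter-2015 dataset.
args = build_parser().parse_args(
    ['--data_dir=./data/twitter2015', '--task_name=twitter2015', '--do_train'])
print(args.task_name, args.do_train, args.do_eval)  # → twitter2015 True False
```

Because --do_train and --do_eval are independent store_true flags, a single invocation can pass both to train and then evaluate in one run.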
If you find this code useful for your research, please cite the following paper:
@article{jia2023affective,
title={Affective Region Recognition and Fusion Network for Target-Level Multimodal Sentiment Classification},
author={Jia, Li and Ma, Tinghuai and Rong, Huan and Al-Nabhan, Najla},
journal={IEEE Transactions on Emerging Topics in Computing},
year={2023},
publisher={IEEE}
}
This code borrows from TomBERT.