CoMPM: Context Modeling with Speaker's Pre-trained Memory Tracking for Emotion Recognition in Conversation (NAACL 2022)

Figure: the overall flow of our model.

Requirements

  1. PyTorch 1.8
  2. Python 3.6
  3. transformers 4.4.0
  4. scikit-learn
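
These can be installed along the lines of the following pip command; the exact versions and package spellings here are illustrative, so adjust them to your platform if needed.

pip install torch==1.8.0 transformers==4.4.0 scikit-learn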

Datasets

Each dataset is split into train/dev/test in the dataset folder.

  1. IEMOCAP
  2. DailyDialog
  3. MELD
  4. EmoryNLP

Train

For CoMPM, CoMPM(s), CoMPM(f)

In this code, the batch size is 1. Padding is not implemented, so batches larger than 1 are not supported; see the sketch below.
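
As a minimal illustration of why the batch size is fixed, the sketch below (plain PyTorch, with a hypothetical toy dataset standing in for the repository's own loader) shows that variable-length utterances load fine with batch_size=1 but would need a padding collate_fn for anything larger.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyUtteranceDataset(Dataset):
    # Hypothetical stand-in for the repo's dataset: each item is a
    # variable-length tensor of token ids, so items cannot be stacked.
    def __init__(self, token_id_lists):
        self.token_id_lists = token_id_lists

    def __len__(self):
        return len(self.token_id_lists)

    def __getitem__(self, idx):
        return torch.tensor(self.token_id_lists[idx])

data = [[101, 7592, 102], [101, 2129, 2024, 2017, 102]]  # unequal lengths

# batch_size=1 needs no padding: each "batch" is a single utterance.
for batch in DataLoader(ToyUtteranceDataset(data), batch_size=1):
    print(batch.shape)  # torch.Size([1, 3]), then torch.Size([1, 5])

# batch_size=2 would raise a RuntimeError under the default collate_fn,
# because the two utterances cannot be stacked without padding.
```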

Argument

python3 train.py --initial {pretrained or scratch} --cls {emotion or sentiment} --dataset {dataset} {--freeze}
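
As a concrete example, the following would train with a pretrained, frozen PM on MELD emotion labels, which appears to correspond to CoMPM(f); the dataset string is an assumption here, so check train.py for the exact values it accepts.

python3 train.py --initial pretrained --cls emotion --dataset MELD --freeze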

For a combination of CoM and PM (based on different pre-trained models)

Argument

For CoM or PM

cd CoM (or cd PM)
python3 train.py {--argument}

Testing with pretrained CoMPM

python3 test.py

The test result below is for one seed. In the paper, the performance of CoMPM was reported as the average over three seeds.

| Model | Dataset (emotion) | Performance: one seed (paper) |
|-------|-------------------|-------------------------------|
| CoMPM | IEMOCAP           | 66.33 (66.33)                 |
| CoMPM | DailyDialog       | 52.46/60.41 (53.15/60.34)     |
| CoMPM | MELD              | 65.53 (66.52)                 |
| CoMPM | EmoryNLP          | 38.56 (37.37)                 |
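
The two DailyDialog numbers are a micro-/macro-F1 pair, while the other datasets report a single weighted F1, following the paper (there, the DailyDialog micro score excludes the neutral class, which this toy sketch omits). As a minimal sketch of how such scores can be computed with scikit-learn (listed in the requirements), using hypothetical gold/prediction lists in place of the real test.py outputs:

```python
from sklearn.metrics import f1_score

# Hypothetical gold labels and model predictions for a few utterances;
# in practice these come from running test.py on a dataset's test split.
gold = [0, 1, 2, 1, 0, 3]
pred = [0, 1, 1, 1, 0, 2]

print(f1_score(gold, pred, average="weighted"))  # single-number scores (e.g. IEMOCAP, MELD)
print(f1_score(gold, pred, average="micro"))     # first DailyDialog number (micro F1)
print(f1_score(gold, pred, average="macro"))     # second DailyDialog number (macro F1)
```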

Citation

@inproceedings{lee-lee-2022-compm,
    title = "{C}o{MPM}: Context Modeling with Speaker{'}s Pre-trained Memory Tracking for Emotion Recognition in Conversation",
    author = "Lee, Joosung  and
      Lee, Wooin",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.416",
    doi = "10.18653/v1/2022.naacl-main.416",
    pages = "5669--5679",
}