circle-hit / MuCDN

Code for COLING 2022 accepted paper titled "MuCDN: Mutual Conversational Detachment Network for Emotion Recognition in Multi-Party Conversations"

question about dataloader.py #5

Open R1ckLou opened 1 year ago

R1ckLou commented 1 year ago

Thank you for your work. I want to run the training code, but it reports an error in dataloader.py: there is no EmoryNLPRobertaCometDataset module, and I could not find it anywhere in the code. How can I solve this problem?

circle-hit commented 1 year ago

So sorry for this mistake. You can directly use the EmoryNLPDataset module in dataloader.py.
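
A minimal sketch of that swap in the training script (the call site and constructor argument are assumptions, not code copied from the repo):

# Hypothetical sketch of the suggested swap; the constructor argument is an assumption.
from dataloader import EmoryNLPDataset  # the class that actually exists in dataloader.py

# train_set = EmoryNLPRobertaCometDataset('train')  # old: this class is missing
train_set = EmoryNLPDataset('train')                # new: use EmoryNLPDataset instead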

R1ckLou commented 1 year ago

Thank you for your reply. One more question: how do I get the emorynlp_features_roberta_discourse.pkl and emorynlp_features_comet.pkl files? I see no instructions for them in the project.

circle-hit commented 1 year ago

So sorry for the messy code. Actually, we do not need emorynlp_features_comet.pkl in this project, so you can simply delete it from the dataset module. As for emorynlp_features_roberta_discourse.pkl, you can obtain it through the Google Drive link in our repo.
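
A rough sketch of what that deletion could look like inside the dataset module (the variable name is an assumption, and the file path is taken from the later comment, not from the repo itself):

import pickle

# Hypothetical sketch: keep only the RoBERTa/discourse features.
# Any line that loaded emorynlp_features_comet.pkl can be removed outright.
with open('emorynlp/emorynlp_features_roberta_discourse.pkl', 'rb') as f:
    features = pickle.load(f, encoding='latin1')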

R1ckLou commented 1 year ago

I'm sorry to bother you again. I did as you said, but now it reports an error: "pickle.load(open('Preprocessed_features/meld_features_roberta_discourse.pkl', 'rb'), encoding='latin1') ValueError: too many values to unpack (expected 17)". I don't know whether I made the change incorrectly (I edited the dataloader.py file) or whether I am using the wrong pkl file.
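
One quick way to diagnose that error (a sketch, not code from the repo) is to load the pickle once and count how many objects it holds, since the unpacking assignment must list exactly that many names:

import pickle

# Diagnostic sketch: count the objects stored in the pickle so the unpacking
# assignment in dataloader.py can be written with exactly that many names.
with open('Preprocessed_features/meld_features_roberta_discourse.pkl', 'rb') as f:
    data = pickle.load(f, encoding='latin1')
print(len(data))  # "expected 17" means the left-hand side listed 17 names but this is larger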

circle-hit commented 1 year ago

Please try to replace the code with:

self.speakers, self.emotion_labels, \
self.roberta1, self.roberta2, self.roberta3, self.roberta4, self.discourse_graph, self.discourse_speaker_mask, self.sequential_speaker_mask, \
self.edge_index, self.edge_type, self.edge_type_speaker, self.relation_type, self.relation_type_speaker, self.relation_type_refined, \
self.inter_speaker_graph, self.intra_speaker_graph, self.fc_graph, self.fc_speaker_mask, \
self.intra_relative_distance, self.inter_relative_distance, \
self.sentences, self.trainId, self.testId, self.validId, self.multiId \
    = pickle.load(open('emorynlp/emorynlp_features_roberta_discourse.pkl', 'rb'), encoding='latin1')
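
For reference, this assignment lists 26 attributes on the left-hand side, which is why it should resolve the earlier "too many values to unpack (expected 17)" error: the previous code unpacked the same pickle into only 17 names.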

So sorry again for the messy code. I will clean it up as soon as I can.