[Open] Z-yolo opened this issue 2 years ago
Thanks for your attention! We have uploaded the parsed data files along with the way to obtain them. For any other questions, feel free to contact me.
Thank you very much for your help, and good luck with your research!
I have two questions. First: regarding the discourse_graph, discourse_speaker_mask, inter_speaker_graph, intra_speaker_graph, intra_relative_distance, and inter_relative_distance fields in meld/meld_features_roberta_discourse.pkl, how are these derived? Second: how is the implicit detachment mentioned in the paper derived?
Q1: These can be obtained by running process_data.py. Q2: Implicit detachment is derived from the attention matrix of the MHA (lines 97 & 98 in models/mucdn.py).
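For readers unfamiliar with how an attention matrix is exposed, here is a minimal sketch using PyTorch's standard nn.MultiheadAttention. This is an illustration only, not the code at lines 97 & 98 of models/mucdn.py; the tensor names, shapes, and hyperparameters are assumptions.

import torch
import torch.nn as nn

# Toy dimensions for illustration only.
num_utterances, batch_size, hidden_dim = 10, 1, 256
mha = nn.MultiheadAttention(embed_dim=hidden_dim, num_heads=4)

# One conversation's utterance features, shape (seq_len, batch, dim).
utter_feats = torch.randn(num_utterances, batch_size, hidden_dim)

# attn_weights has shape (batch, num_utterances, num_utterances); row i is
# utterance i's attention distribution over the whole conversation, i.e. the
# kind of matrix from which detachment-style scores can be read off.
attn_output, attn_weights = mha(utter_feats, utter_feats, utter_feats,
                                need_weights=True)
print(attn_weights.shape)  # torch.Size([1, 10, 10])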
Thank you very much for your reply, and all the best!
Thank you very much for open-sourcing your work! We are curious about some of the files used in your process_data, such as train/valid/test_parsed.json: did you create these files yourself, or where did you obtain them? If it is convenient, could you let us know? Thanks a lot!
import json

# Load the parsed MELD splits referenced above.
with open('./erc_data/meld/train_parsed.json') as f:
    parsed_train = json.load(f)
with open('./erc_data/meld/valid_parsed.json') as f:
    parsed_valid = json.load(f)
with open('./erc_data/meld/test_parsed.json') as f:
    parsed_test = json.load(f)