Closed metabolean5 closed 1 year ago
For the relation labeling evaluation, the gold EDU segmentation is used, as is common practice in RST parsing papers.
If you want to use the gold EDU segmentation at the inference stage, you can set use_pred_segmentation=False in the following function and feed the input_EDU_breaks to it.
For more details on the function inputs, you can refer to this part: https://github.com/seq-to-mind/DMRST_Parser/blob/231d8c0d28ba8cba074e29a6ff99e858e4742735/model_depth.py#L172-L183
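In case it helps, here is a small sketch of how one might build the input_EDU_breaks argument from gold EDUs. This assumes (worth verifying against the linked code) that input_EDU_breaks is a list of token indices marking the last token of each EDU over the concatenated token sequence; the helper name is mine, not from the repo.

```python
def edu_breaks_from_gold(edu_token_lists):
    """Convert gold EDUs (each a list of tokens) into end-of-EDU
    token indices over the concatenated token sequence."""
    breaks, offset = [], 0
    for edu in edu_token_lists:
        offset += len(edu)          # running token count
        breaks.append(offset - 1)   # index of the EDU-final token
    return breaks

gold_edus = [["But", "he", "'s", "not", "so", "sure"],
             ["about", "everyone", "else"]]
print(edu_breaks_from_gold(gold_edus))  # → [5, 8]
```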
Awesome, thanks !
Hi, thank you for the great work and the really cool repo!
I'm interested in relation classification for corpus linguistics studies, and I am using your system to classify some relations in the RST dataset.
I used your inference module to generate trees and labels on the RST test set and harmonized the labels. Now I am wondering how you managed to compare the predicted relation between two EDUs with the actual relation in the gold RST test set.
My problems mostly concern segmentation, which is rarely exactly the same in the system's output. Hence it's not easy to find matching pairs of relations between EDUs to compare.
How do you extract those pairs to actually verify whether their relations differ or are the same? I am currently using some sentence similarity code to do the mapping:
Relation attribution 8_9
Gold Text 1: But he's not so sure about everyone else
Pred Text 1: But he's not so sure about everyone else
Gold Text 2: "I think
Pred Text 2: "I
Similarity between Gold Text 1 and Pred Text 1: 100.00%
Similarity between Gold Text 2 and Pred Text 2: 66.67%
Do you do something similar? Thank you for reading.
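For reference, a mapping like the one described above can be sketched with the standard library's difflib, matching each gold EDU to the predicted EDU with the highest similarity ratio. This is only an illustration of the alignment idea, not the evaluation protocol used by the repo (which, per the answer above, sidesteps alignment by evaluating on gold segmentation).

```python
from difflib import SequenceMatcher

def best_match(gold_edu, pred_edus):
    """Return (index, ratio) of the predicted EDU most similar to gold_edu."""
    scores = [SequenceMatcher(None, gold_edu, p).ratio() for p in pred_edus]
    i = max(range(len(scores)), key=scores.__getitem__)
    return i, scores[i]

gold = ["But he's not so sure about everyone else", '"I think']
pred = ["But he's not so sure about everyone else", '"I']
for g in gold:
    i, r = best_match(g, pred)
    print(f"{g!r} -> {pred[i]!r} ({r:.2%})")
```

One could then keep only pairs whose ratio exceeds a threshold before comparing relation labels.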