Open silvianacmp opened 7 years ago
Hi,
I don't have a function to do that in the current version, but you can try tweaking it a bit by modifying the `evaluate` function in `graphstate.py`:
```python
def evaluate(self):
    num_correct_arcs = .0
    num_correct_labeled_arcs = .0
    parsed_tuples = self.A.tuples()
    ....
    num_parsed_arcs = len(parsed_tuples)
    gold_tuples = self.gold_graph.tuples()
    num_gold_arcs = len(gold_tuples)
    num_correct_tags = .0
    num_parsed_tags = .0
    num_gold_tags = .0
    visited_nodes = set()
    for t_tuple in parsed_tuples:
        p, c = t_tuple
        p_p, c_p = p, c
        # comparing for evaluating
        ....
```
This function does the evaluation by comparing each parsed concept and relation against the gold ones. To get the behavior you want, at that comparison point you can simply assign the gold concept and relation to the parsed graph instead.
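As an illustration of that idea, here is a minimal, self-contained sketch (not actual c-amr code; the function name `assign_gold_labels` and the `(parent, child, label)` triple representation are assumptions for the example). It overwrites the label on every parsed arc whose `(parent, child)` pair also appears in the gold graph, leaving arcs absent from the gold graph untouched:

```python
def assign_gold_labels(parsed_arcs, gold_arcs):
    """Return the parsed arcs with labels replaced by gold labels
    wherever the same (parent, child) arc exists in the gold graph.

    parsed_arcs, gold_arcs: iterables of (parent, child, label) triples.
    """
    # Index gold labels by their (parent, child) arc for O(1) lookup.
    gold_labels = {(p, c): label for p, c, label in gold_arcs}
    return [
        # Keep the parsed label when the arc has no gold counterpart.
        (p, c, gold_labels.get((p, c), label))
        for p, c, label in parsed_arcs
    ]

parsed = [("want", "boy", "ARG1"), ("want", "go", "ARG0")]
gold = [("want", "boy", "ARG0"), ("want", "go", "ARG1")]
print(assign_gold_labels(parsed, gold))
# → [('want', 'boy', 'ARG0'), ('want', 'go', 'ARG1')]
```

The same substitution idea applies to concepts: build a lookup from gold node identifiers to gold concepts and overwrite the parsed concept wherever a match exists.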
Have you solved this problem?
Hello,
I have a use case in which I would like to evaluate the performance of c-amr with smatch, but while predicting the AMR using gold-standard labels for the relations and concepts. I have seen that you report such results in the paper describing your system, so I was wondering whether this is possible with the code publicly available in this repository. Thank you!