kentonl / e2e-coref

End-to-end Neural Coreference Resolution
Apache License 2.0

error in ceafe metric #126

Open yulinchen99 opened 1 year ago

yulinchen99 commented 1 year ago

Hi, thanks for releasing the code. I recently found a potential error in the ceafe metric. As shown below, when the predicted and gold clusters are identical, ceafe does not output 1.0 the way the other metrics do.

Here is my test code:

    from metrics import Evaluator, muc, b_cubed, ceafe, lea  # metrics.py from this repo

    def get_event2cluster(clusters):
        # Map each mention id to the (tuple form of the) cluster containing it.
        event2cluster = {}
        for cluster in clusters:
            for eid in cluster:
                event2cluster[eid] = tuple(cluster)
        return event2cluster

    def evaluate_documents(documents, metric, beta=1):
        evaluator = Evaluator(metric, beta=beta)
        for document in documents:
            evaluator.update(document.clusters, document.gold, document.mention_to_cluster, document.mention_to_gold)
        return evaluator.get_precision(), evaluator.get_recall(), evaluator.get_f1()

    class Doc:
        def __init__(self, mention2cluster, mention2gold, clusters, gold):
            self.mention_to_cluster = mention2cluster
            self.mention_to_gold = mention2gold
            self.clusters = clusters
            self.gold = gold

    # Predicted clusters are identical to the gold clusters, including the singleton [13].
    gold = [[1, 2, 3, 4, 5], [6, 7], [8, 9, 10, 11, 12], [13]]
    pred = [[1, 2, 3, 4, 5], [6, 7], [8, 9, 10, 11, 12], [13]]
    mention2cluster = get_event2cluster(pred)
    mention2gold = get_event2cluster(gold)
    doc = Doc(mention2cluster, mention2gold, pred, gold)

    p, r, f = evaluate_documents([doc], muc)
    print(p, r, f)
    p, r, f = evaluate_documents([doc], b_cubed)
    print(p, r, f)
    p, r, f = evaluate_documents([doc], ceafe)
    print(p, r, f)
    p, r, f = evaluate_documents([doc], lea)
    print(p, r, f)

The output is:

    1.0 1.0 1.0                  (muc)
    1.0 1.0 1.0                  (b_cubed)
    1.0 0.75 0.8571428571428571  (ceafe)
    1.0 1.0 1.0                  (lea)

One way to fix this is to remove the line https://github.com/kentonl/e2e-coref/blob/9d1ee1972f6e34eb5d1dcbb1fd9b9efdf53fc298/metrics.py#L120, which filters singleton clusters out of the predictions before the CEAF alignment. With the predicted singleton [13] dropped, only three of the four gold clusters can be aligned, so recall becomes 3/4 = 0.75 while precision stays at 3/3 = 1.0, which is exactly the output above.
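For reference, here is a minimal sketch of what the patched metric could look like. It is not a verbatim copy of the repository code: it uses scipy's linear_sum_assignment in place of the deprecated sklearn solver, and the only substantive change from the original is that the singleton filter is gone.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def phi4(gold_cluster, pred_cluster):
        # CEAF_e entity similarity: 2 * |overlap| / (|gold| + |pred|).
        overlap = len([m for m in gold_cluster if m in pred_cluster])
        return 2 * overlap / float(len(gold_cluster) + len(pred_cluster))

    def ceafe(clusters, gold_clusters):
        # The offending line, `clusters = [c for c in clusters if len(c) != 1]`,
        # is removed here, so singleton predictions take part in the alignment too.
        scores = np.zeros((len(gold_clusters), len(clusters)))
        for i, gold in enumerate(gold_clusters):
            for j, pred in enumerate(clusters):
                scores[i, j] = phi4(gold, pred)
        # Optimal one-to-one alignment between gold and predicted clusters.
        row_ind, col_ind = linear_sum_assignment(-scores)
        similarity = scores[row_ind, col_ind].sum()
        # (p_num, p_den, r_num, r_den)
        return similarity, len(clusters), similarity, len(gold_clusters)

With this change the example above yields 1.0 1.0 1.0 for ceafe as well: all four clusters, the singleton included, align with an identical partner at similarity 1.0, so both precision and recall are 4/4. (The filter was presumably there because the CoNLL-2012 gold annotations contain no singleton clusters; whether dropping it is the right call depends on whether your data, like this example, marks singletons.)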