amazon-science / RefChecker

RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models.

Logic for merging checker #4

Open Padarn opened 5 months ago

Padarn commented 5 months ago

Hi there,

I see that when you're merging results in your CheckerBase you give 'Entailment' precedence?

https://github.com/amazon-science/RefChecker/blob/64e7c34b5fd4f6af7a5227473458619a3d92ad5b/refchecker/checker/checker_base.py#L6C1-L23C21

I see there is a TODO there, but I'd have thought the default would be that any contradiction indicates a problem.
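For concreteness, here is roughly how I read the two strategies. This is just a minimal sketch of the idea, not the actual checker_base.py code, and the label names and function signatures are my assumptions:

```python
# Illustrative sketch of the two merge strategies under discussion
# (not the actual RefChecker implementation).

ENTAILMENT, NEUTRAL, CONTRADICTION = "Entailment", "Neutral", "Contradiction"

def merge_entailment_first(labels):
    # Current behavior as I understand it: one supporting segment wins.
    if ENTAILMENT in labels:
        return ENTAILMENT
    if CONTRADICTION in labels:
        return CONTRADICTION
    return NEUTRAL

def merge_contradiction_first(labels):
    # Alternative: any contradicting segment flags a problem.
    if CONTRADICTION in labels:
        return CONTRADICTION
    if ENTAILMENT in labels:
        return ENTAILMENT
    return NEUTRAL
```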

Just curious about the thought process, to make sure I understand your approach.

Thanks!

rudongyu commented 5 months ago

Hi, @Padarn. Thanks for raising the question! That part is for merging checking results across different segments of the reference. Ideally, there should be no difference between giving "Entailment" or "Contradiction" precedence, because we make the simplifying assumption that the whole reference is self-consistent. However, conflicts within a reference do happen in real-world applications, and how to handle them is an open research question. If you have any ideas, you're welcome to discuss them in this thread.

As a temporary workaround, we might consider exposing an option so that users can choose the precedence when conflicts happen. Thanks again!
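Something along these lines, perhaps. The parameter name and shape are hypothetical, not an existing RefChecker API:

```python
# Hypothetical option for user-chosen precedence (sketch only).

def merge_results(labels, precedence=("Entailment", "Contradiction")):
    # Return the first label in `precedence` that appears among the
    # per-segment results; fall back to "Neutral" otherwise.
    for label in precedence:
        if label in labels:
            return label
    return "Neutral"
```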

Padarn commented 5 months ago

Hey @rudongyu thanks for your response.

I have two rough thoughts on this:

  1. What about providing a more nuanced 'agreement' score rather than a binary classification? Having the LLM classify each segment and then computing the score during aggregation would probably be better (to avoid calibration problems with an LLM-produced score); see the sketch after this list.

  2. Perhaps just reporting an 'Inconsistent' class when the results are mixed would be easier to understand?
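A rough illustration of both ideas together; this is my own sketch, not RefChecker code, and the label names are assumptions:

```python
# Combine a soft agreement score (idea 1) with an explicit
# "Inconsistent" label for mixed results (idea 2).

from collections import Counter

def aggregate(labels):
    counts = Counter(labels)
    n = len(labels)
    # Idea 1: a fractional agreement score computed from the
    # per-segment classifications, instead of a single hard label.
    agreement = counts["Entailment"] / n if n else 0.0
    # Idea 2: surface "Inconsistent" when the reference both
    # supports and contradicts the claim.
    if counts["Entailment"] and counts["Contradiction"]:
        label = "Inconsistent"
    elif counts["Entailment"]:
        label = "Entailment"
    elif counts["Contradiction"]:
        label = "Contradiction"
    else:
        label = "Neutral"
    return label, agreement
```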