Open · sky-2002 opened this issue 1 year ago
Hi there!
Those are indeed very good examples of REBEL's shortcomings. Negation is hard for many models to handle, and it seems REBEL is also vulnerable to it. We didn't explore hard/negative examples in detail, but I imagine some robust-training approaches could help handle these situations.
Thanks @LittlePea13, can you point me to resources that discuss handling negation? I think this can cause issues if the model doesn't give enough weight to negations. What approaches can I use, specifically at inference time? I'm looking for an inference-time solution because I'm not training or fine-tuning REBEL at the moment. @m0baxter @tomasonjo
Sorry, I was a bit AFK during August. I'm afraid this would be very hard to deal with at inference time. You could run an NLI model, or this Triplet Critic, to assign a probability score to each triplet and filter out likely negatives based on that score. I expect the Triplet Critic to work better for neutral statements than for contradictions, but you can use NLI to help out there as well. If you check how to use the multitask version of the model I linked, you should be able to obtain both NLI and Triplet Critic scores with a single forward pass.
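For a concrete starting point, here is a minimal sketch of the NLI-based filtering idea. It uses `roberta-large-mnli` as an illustrative off-the-shelf NLI model (not the Triplet Critic linked above), and the triplet verbalization is a naive placeholder you would want to adapt to REBEL's actual relation labels:

```python
# Hedged sketch: filter extracted triplets with an off-the-shelf NLI model.
# "roberta-large-mnli" stands in for the Triplet Critic mentioned above;
# the verbalization of (head, relation, tail) into a hypothesis sentence
# is a naive placeholder.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

nli_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(nli_name)
model = AutoModelForSequenceClassification.from_pretrained(nli_name)
model.eval()

def keep_triplet(source_text, head, relation, tail, contradiction_threshold=0.5):
    """Return False when the NLI model says the verbalized triplet
    contradicts the source sentence (e.g. a missed negation)."""
    hypothesis = f"{head} {relation} {tail}."  # naive verbalization
    inputs = tokenizer(source_text, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1)[0]
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return probs[0].item() < contradiction_threshold

# A triplet that drops the negation should score as a contradiction
# against the source sentence and get filtered out.
print(keep_triplet("Alice has never visited Paris.", "Alice", "visited", "Paris"))
```

The idea is that a triplet like (Alice, visited, Paris), extracted from "Alice has never visited Paris.", should receive a high contradiction probability against the source sentence, so thresholding that probability filters it out without any retraining.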
Hi @LittlePea13, I was just experimenting with REBEL, trying to see how it responds to negation. Below are a few negative samples and the relations extracted:
These negative samples are factually correct, but the extracted relations don't give enough weight to the negation word. What are your views on this?
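A minimal sketch of how such an experiment can be run, assuming the standard `text2text-generation` pipeline from the `Babelscape/rebel-large` model card (the negated sentence below is a hypothetical stand-in for the actual samples):

```python
# Sketch: run REBEL on a negated sentence and inspect the raw linearized
# output, following the Babelscape/rebel-large model card.
from transformers import pipeline

triplet_extractor = pipeline(
    "text2text-generation",
    model="Babelscape/rebel-large",
    tokenizer="Babelscape/rebel-large",
)

text = "Alice has never visited Paris."  # hypothetical negated input

# Decode the raw token ids instead of using the pipeline's cleaned-up
# text output, so the special <triplet>/<subj>/<obj> markers that
# delimit the linearized triplets are preserved.
generated = triplet_extractor(text, return_tensors=True, return_text=False)
decoded = triplet_extractor.tokenizer.batch_decode(
    [generated[0]["generated_token_ids"]]
)
print(decoded[0])
# Per the report above, outputs like this can still contain the
# affirmative triplet, ignoring the negation in the input.
```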