Closed: Joapfel closed this issue 2 years ago
Hey Johannes,
I ignored the negative examples and trained the model only on the positive pairs. There might be a way to improve on this by making use of the negatives, but I'm not sure how, since the supervised training setup does not account for negative reinforcement. As for a citation, I think just mentioning the GitHub repo would be fine.
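Concretely, dropping the negatives amounts to filtering PAWS on its label before training. Here is a rough sketch using the Hugging Face `datasets` library; the dataset name and config ("paws", "labeled_final") are assumptions for illustration, not necessarily exactly what was used for the released model:

```python
from datasets import load_dataset

# Load the labeled PAWS split from the Hugging Face hub (assumed config).
paws = load_dataset("paws", "labeled_final")

# Keep only sentence pairs marked as true paraphrases (label == 1);
# the negative examples are simply discarded.
positives = paws["train"].filter(lambda ex: ex["label"] == 1)

print(len(paws["train"]), "total pairs ->", len(positives), "positive pairs")
```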
Hi Vamsi,
I would be interested to know how you dealt with the negative paraphrase examples from the PAWS dataset, e.g. "Although interchangeable, the body pieces on the 2 cars are not similar." vs. "Although similar, the body parts are not interchangeable on the 2 cars." (which are not paraphrases, as described here: https://github.com/google-research-datasets/paws). Were they 1) part of the model training, 2) simply ignored during training, or 3) did you adjust the loss function to increase the loss for negative examples?
I would like to use your model as a baseline in my master's thesis. Is there anything I can cite?
Best, Johannes