Hi,
I am trying to train an NLP classification model, and there is a common error where the model predicts "decrease" whenever "increase" appears in the input text. I looked into the embeddings of both words; the following is the nearest-neighbor list for "increase".
[(0.8970754742622375, 'decrease'), (0.8135992288589478, 'increases'), (0.7706713080406189, 'increased'), (0.7596212029457092, 'increasing'), (0.7075006365776062, 'decreases'),
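For context, here is a minimal sketch of how a (similarity, word) list like the one above can be computed with plain NumPy cosine similarity. The toy vectors below are made up for illustration and are not from my actual embedding table:

```python
import numpy as np

# Toy embedding table; vectors are invented for illustration only.
embeddings = {
    "increase":  np.array([0.9, 0.1, 0.3]),
    "decrease":  np.array([0.85, 0.15, 0.35]),
    "increases": np.array([0.8, 0.2, 0.3]),
    "apple":     np.array([-0.2, 0.9, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbors(word, table):
    """Return (similarity, word) pairs sorted by descending cosine similarity."""
    query = table[word]
    scores = [(cosine(query, vec), other)
              for other, vec in table.items() if other != word]
    return sorted(scores, reverse=True)

print(nearest_neighbors("increase", embeddings))
```

Even with these toy vectors, "decrease" comes out as the nearest neighbor of "increase", which mirrors the problem above.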
As can be seen, "decrease" is the nearest neighbor. I believe my model would work better if antonyms were further apart in the embedding space. Any suggestions on how I can reduce the error caused by this?
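One direction I am considering is explicitly pushing antonym vectors apart, in the spirit of counter-fitting. Below is only a toy repulsion sketch (the margin, learning rate, step count, and vectors are all made up), not a full counter-fitting implementation:

```python
import numpy as np

def repel_antonyms(vec_a, vec_b, margin=0.5, lr=0.1, steps=50):
    """Toy sketch: nudge two word vectors apart until their cosine
    similarity falls to `margin` or below. All parameters are made up."""
    a = vec_a.astype(float).copy()
    b = vec_b.astype(float).copy()
    for _ in range(steps):
        a_n = a / np.linalg.norm(a)
        b_n = b / np.linalg.norm(b)
        if float(a_n @ b_n) <= margin:
            break  # already far enough apart
        # Move each vector away from the other's current direction.
        a -= lr * b_n
        b -= lr * a_n
    return a, b

inc = np.array([0.9, 0.1, 0.3])    # toy "increase" vector
dec = np.array([0.85, 0.15, 0.35])  # toy "decrease" vector
new_inc, new_dec = repel_antonyms(inc, dec)
```

In a real setting such updates would be applied to the embedding table before (or during) classifier training, ideally with an extra term that keeps each vector close to its original position so the rest of the space is not distorted.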
Regards, Deepti