MohammadReza-Babaee closed this issue 4 years ago.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hey all, is anyone else working on this? I think a neutral label, or a standardized sentiment score, would be a great addition to such a widely used model. Neutral statements are not handled correctly at the moment, as these examples show:
classifier('I do not know the answer.') Out[16]: [{'label': 'NEGATIVE', 'score': 0.9995205402374268}]
classifier('This is meant to be a very neutral statement.') Out[17]: [{'label': 'NEGATIVE', 'score': 0.987031102180481}]
classifier('The last president of US is Donald Trump.') Out[18]: [{'label': 'POSITIVE', 'score': 0.9963828325271606}]
classifier('There is going to be an election in two months.') Out[19]: [{'label': 'NEGATIVE', 'score': 0.9604763984680176}]
Just raising this thread again to see if there is a common interest... Cheers!
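One workaround people sometimes try (this is my own sketch, not anything built into `transformers`) is to post-process the binary pipeline output and relabel low-confidence predictions as neutral. The `threshold` value below is an assumption and would need tuning on a labelled dev set:

```python
# Hypothetical post-processing sketch, not part of the Hugging Face API:
# treat a binary sentiment prediction as NEUTRAL when the winning class's
# confidence falls below a tunable threshold.
def with_neutral(result, threshold=0.9):
    """result: a pipeline output dict, e.g. {'label': 'NEGATIVE', 'score': 0.96}."""
    if result["score"] < threshold:
        # Probability mass is split between the two classes, so call it neutral;
        # report the losing class's share as the "neutral" score.
        return {"label": "NEUTRAL", "score": 1.0 - result["score"]}
    return dict(result)

print(with_neutral({"label": "NEGATIVE", "score": 0.55}))   # relabelled as NEUTRAL
print(with_neutral({"label": "POSITIVE", "score": 0.996}))  # kept as-is
```

Note the limitation, though: the examples above show the binary model assigning scores above 0.96 to plainly neutral sentences, so a confidence threshold only catches borderline cases, not confidently-wrong ones.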
🚀 Feature request
After some experimentation and comparison with VADER, we came to the consensus that the pretrained BERT-based Hugging Face transformer performs far better than the other lexicons, but VADER is also strong in social-media contexts, and it provides a "neutral" label, which turns out to be useful in some contexts.
I was wondering whether it is possible to adapt the Transformer sentiment-analysis pipeline so that it can also produce a "neutral" score?
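The binary pipeline only reports the winning label's score, but if you can get at the model's raw two-class logits, a softmax recovers the probabilities of both classes, after which a positive-class probability near 0.5 can be read as "neutral". A minimal sketch of that softmax step (the logit values here are made up for illustration):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical two-class logits in [negative, positive] order.
neg_p, pos_p = softmax([0.4, 0.6])
# pos_p close to 0.5 -> the model is uncertain, a candidate "neutral" zone.
```

That said, a model fine-tuned with an explicit neutral class is likely more robust than any post-hoc thresholding, since the examples earlier in the thread show the binary model being highly confident on neutral text.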