cjhutto / vaderSentiment

VADER Sentiment Analysis. VADER (Valence Aware Dictionary and sEntiment Reasoner) is a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media, and works well on texts from other domains.
MIT License

Weightage given to smileys (when negated) #28

Open sourcedexter opened 7 years ago

sourcedexter commented 7 years ago

So VADER gives 1.0 positive for " :) " and 1.0 negative for " :( ", which tells me the smileys are being detected correctly. However, it fails to identify the polarity correctly in this particular case:

sentence = "nothing for redheads :("
polarity got: {'neg': 0.0, 'neu': 0.555, 'pos': 0.445, 'compound': 0.3412}

It is surprising that this sentence tips towards positive polarity while the negative score remains at 0.0. Now, if I remove the smiley and compute the polarity, this is what I get:

sentence = "nothing for redheads"
polarity got: {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}

And this result is absolutely correct: it is a neutral statement. So why is a negative lexicon entry tipping the sentence towards a positive outcome? I wanted to know whether I can adjust the weight of smileys to reduce such errors. Since VADER is capable of handling many tricky sentences, this should not have been an issue, right? Or is it just an outlier condition?

Hiestaa commented 7 years ago

The problem comes from the word nothing. It is treated as a negating word, and there is a rule that 'flips' the valence of a token when such a word is found shortly before it.

If you try the same case with a slightly longer sentence, the rule won't link the negating word to the smiley, and you will get the expected weightage:

sentence: nothing at all for redheads :(
polarity got: {'neg': 0.367, 'neu': 0.633, 'pos': 0.0, 'compound': -0.44}
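A simplified sketch of that negation rule may make the two results above clearer. N_SCALAR is the actual constant from VADER's source; the three-token look-back window matches its behavior, but the negator set and the lexicon value for ":(" below are made up for illustration:

```python
# Toy re-implementation of VADER-style negation flipping (not the real code).
N_SCALAR = -0.74  # VADER's negation constant: flips and dampens valence
NEGATE = {"nothing", "not", "never", "no"}  # tiny subset of VADER's negator list

def token_valence(tokens, i, lexicon):
    """Valence of tokens[i], flipped if a negator appears up to 3 tokens back."""
    valence = lexicon.get(tokens[i], 0.0)
    for dist in range(1, 4):
        if i - dist >= 0 and tokens[i - dist] in NEGATE:
            valence *= N_SCALAR
            break
    return valence

lexicon = {":(": -2.2}  # illustrative valence, not the real lexicon entry

short = "nothing for redheads :(".split()
print(token_valence(short, 3, lexicon))   # negator within 3 tokens: flipped positive

longer = "nothing at all for redheads :(".split()
print(token_valence(longer, 5, lexicon))  # negator too far back: stays negative
```

In the longer sentence, "nothing" sits more than three tokens before the smiley, so the flip never fires and the smiley keeps its negative valence.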

ltbringer commented 7 years ago

Would it be better if negating words didn't negate emoticons? Emoticons, unlike other words in a sentence, carry a meaning that summarises the emotion felt while writing the text. So perhaps they could be treated as a separate sentence.

example: "nothing for redheads :(" -> "nothing for redheads. sad."

I don't mean that the emoticon should literally be swapped for a word, but could the scoring be done this way?
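That idea could be sketched as a preprocessing step that splits the emoticons off before scoring, so a negator can never reach them. The emoticon set and the function name here are hypothetical, just to show the shape of it:

```python
EMOTICONS = {":)", ":(", ":D", ":-)", ":-("}  # tiny illustrative set

def split_emoticons(text):
    """Separate emoticons from the words so each part can be scored on its own."""
    tokens = text.split()
    words = [t for t in tokens if t not in EMOTICONS]
    emoticons = [t for t in tokens if t in EMOTICONS]
    return " ".join(words), " ".join(emoticons)

text_part, emoticon_part = split_emoticons("nothing for redheads :(")
print(text_part)      # "nothing for redheads" -> would score neutral
print(emoticon_part)  # ":(" -> would score negative, out of reach of "nothing"
```

Each part would then be scored separately and the results combined, which is roughly the "treat them as a separate sentence" proposal above.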

Hiestaa commented 7 years ago

This makes sense for sure, but to say whether it is actually better would require implementing the rule, measuring the accuracy of the new algorithm, and comparing it to the current one. The repo contains quite a lot of human-scored sentences, so that shouldn't be an issue if you want to spend the time to look into this idea 😃

ltbringer commented 7 years ago

@Hiestaa I do want to try out a few things, but I don't yet understand how to interpret the values in vader_lexicon.txt.

Going through the source code, I inferred that only the word and the valence (if I read it correctly) are taken into consideration when scoring sentences. So if I need to add more words to the list, do the other two values not matter?

Hiestaa commented 7 years ago

@CodeWingX The README provides a description of the values in the lexicon:

We collected intensity ratings on each of our candidate lexical features from ten independent human raters (for a total of 90,000+ ratings). Features were rated on a scale from "[–4] Extremely Negative" to "[4] Extremely Positive", with allowance for "[0] Neutral (or Neither, N/A)".

We kept every lexical feature that had a non-zero mean rating, and whose standard deviation was less than 2.5 as determined by the aggregate of ten independent raters.

I assume based on this information that vader_lexicon.txt holds the following format:

Token      Valence  Std. Dev.  Human Ratings
(:<        -0.2     2.03961    [-2, -3, 1, 1, 2, -1, 2, 1, -4, 1]
amorphous  -0.2     0.4        [0, 0, 0, 0, 0, 0, -1, 0, 0, -1]

If you want to follow the same rigorous process as the author of the study, you should find 10 independent humans to evaluate each word you want to add to the lexicon, make sure the standard deviation doesn't exceed 2.5, and take the average rating for the valence. This will keep the file consistent.

Now, if you just want to make the algorithm work on these new cases quickly, the standard deviation and human ratings are indeed not necessary: only the token and the valence are used.
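Assuming the tab-separated four-column layout sketched above, here is one way to parse a lexicon line and sanity-check an entry against the study's methodology (the valence should be the mean of the ratings, and the standard deviation should stay under 2.5). The function name is made up; the sample line is the one from the table:

```python
import ast
import statistics

def parse_lexicon_line(line):
    """Parse one vader_lexicon.txt line, assuming tab-separated columns:
    token, valence (mean rating), standard deviation, raw human ratings."""
    token, valence, stddev, ratings = line.rstrip("\n").split("\t")
    return token, float(valence), float(stddev), ast.literal_eval(ratings)

line = "(:<\t-0.2\t2.03961\t[-2, -3, 1, 1, 2, -1, 2, 1, -4, 1]"
token, valence, stddev, ratings = parse_lexicon_line(line)

# Consistency checks: mean of ratings gives the valence, and the stored
# standard deviation matches the population std. dev. of the ratings.
print(valence, statistics.fmean(ratings))   # both -0.2
print(stddev, statistics.pstdev(ratings))   # both ~2.0396
print(stddev < 2.5)                         # passes the study's cutoff
```

Running the same checks on a row you are about to add is a cheap way to keep the file consistent even without ten human raters.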

cjhutto commented 4 years ago

Is there a study that shows empirical effects of emoticons and emojis in negated sentences? I've seen papers showing emojis/emoticons acting as sentence negations themselves... e.g., "I love my job 👎". But I haven't (yet) found anything describing a negation effect on the emoji/emoticon... e.g., your example "nothing for redheads :(". My intuition is that the general rule (in most cases) is that sentence negations (not, isn't, nothing, ain't) don't affect the emoji/emoticon, and that in most cases the emoji/emoticon is what people actually key on when judging overall sentiment.