Closed GabriellaS-K closed 2 years ago
I was using the Hu Liu lexicon exclusively, then found the Jockers set to be better, but on the data sets I was working with, the union of the two performed best. The default uses Jockers and an augmented Hu Liu, as discussed here: https://github.com/trinker/sentimentr/issues/125. Jockers discusses how he created his data set elsewhere, and Hu and Liu do as well.
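In case it helps, here is a minimal sketch of taking the union of the two polarity tables and passing the result to sentimentr. This is just one way to do it (the choice to keep the Jockers-Rinker score when a word appears in both tables is my assumption; you could average instead):

```r
library(data.table)
library(lexicon)
library(sentimentr)

# Stack the two polarity tables (columns: x = word, y = score), then drop
# duplicate words; rows from hash_sentiment_jockers_rinker come first, so
# its score wins on conflicts (assumption -- averaging is another option).
combined <- unique(
  rbind(hash_sentiment_jockers_rinker, hash_sentiment_huliu),
  by = "x"
)

# as_key() validates the table and sets the key sentimentr expects
combined <- sentimentr::as_key(combined)

sentiment("I really love this, but the battery is terrible.",
          polarity_dt = combined)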
I have used the sentimentr package to do some sentiment analysis because it includes valence shifters. However, I cannot find how the lexicon `lexicon::hash_sentiment_jockers_rinker` was built, i.e., how the individual words were scored. From what I understand, the lexicon was originally exported by the syuzhet package and is a combination of AFINN, Bing, NRC, and syuzhet. Could someone help me understand how the individual word scores in the lexicon were calculated?
Thanks!