UChicago-Computational-Content-Analysis / Readings-Responses-2024-Winter


5. Machine Learning to Classify and Relate Meanings - [E4] Dodds, Peter Sheridan et al. #30


lkcao commented 6 months ago

Post questions here for this week's exemplary readings:

  1. Dodds, Peter Sheridan, et al. 2015. "Human language reveals a universal positivity bias." Proceedings of the National Academy of Sciences 112(8):2389–2394. doi: 10.1073/pnas.1411678112

XiaotongCui commented 5 months ago

Interesting conclusion! However, one question is nagging at me. The authors pay native speakers to rate the words, but the same language is spoken across many different countries: Spanish, for instance, is spoken in Spain, in various regions of South America, and elsewhere. Raters from different countries within the same language group might score the same words differently. I am curious whether the authors have any mechanism in place to control for this factor.
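
If the per-word ratings were released with rater metadata, one could probe this directly by grouping scores by rater country. A minimal sketch with invented column names and numbers (not the paper's actual data layout):

```python
import pandas as pd

# Hypothetical ratings table: the same Spanish words rated by speakers from
# different countries (all names and values invented for illustration).
ratings = pd.DataFrame({
    "word":    ["amor", "amor", "amor", "guerra", "guerra", "guerra"],
    "country": ["ES", "MX", "AR", "ES", "MX", "AR"],
    "score":   [8.2, 8.6, 8.4, 1.9, 2.3, 2.1],  # 1-9 happiness scale
})

# Mean score per word and rater country; large per-word spreads would suggest
# that country of origin should be modeled rather than averaged away.
by_country = ratings.groupby(["word", "country"])["score"].mean().unstack()
print(by_country)
print(by_country.max(axis=1) - by_country.min(axis=1))  # per-word spread
```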

Caojie2001 commented 5 months ago

I find the article's conclusion, and its evaluation of literature through the 'happiness time series', interesting. However, I wonder whether the way the researchers measure a word's 'happiness score' can truly capture its meaning: the emotion expressed by the same word can vary dramatically across contexts, and this tends to be especially pronounced in cases such as irony.
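
For concreteness, my understanding is that the paper's happiness scores work as a bag-of-words average: each word type has one fixed rating, so every occurrence contributes the same value regardless of context. A minimal sketch with an invented mini-lexicon (not the actual labMT word list):

```python
from collections import Counter

# Invented mini-lexicon on a 1-9 happiness scale (5 = neutral).
happiness = {"that": 5.0, "party": 7.9, "was": 5.0, "sick": 2.8}

def text_happiness(text: str, lexicon: dict) -> float:
    """Frequency-weighted mean of per-word happiness ratings."""
    counts = Counter(w for w in text.lower().split() if w in lexicon)
    total = sum(counts.values())
    return sum(lexicon[w] * n for w, n in counts.items()) / total

# The slang/ironic "sick" (= great) still contributes its literal 2.8 rating,
# dragging the average down even though the speaker means something positive.
print(text_happiness("that party was sick", happiness))  # 5.175
```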

sborislo commented 5 months ago

I appreciate the authors' methods; however, similar to @Caojie2001's point, I think the authors' conclusion of a "positivity bias" might be based on an unwarranted assumption. Specifically, is it not possible that combinations of words affect the "happiness score" of words in ways that would not be detected using machine learning?

In an effort to be more original, an additional question: the authors claim a general positivity bias in human language, but is this not shown only in media? Can one truly conclude that conversations (arguably the most prominent form of discussion) have a positivity bias? To be fair, I don't know how you would measure this, so the authors' methods make sense. But maybe their claim could be tempered a bit to match their methods.
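
On the word-combination point specifically, negation is the simplest case where a unigram average goes wrong. A toy comparison with assumed scores (not the paper's ratings or method):

```python
# Unigram average vs. a crude negation-aware variant, to show how word
# combinations can shift sentiment in ways a pure bag-of-words score misses.
scores = {"not": 4.6, "happy": 8.2}  # assumed 1-9 ratings

def unigram_avg(tokens):
    rated = [scores[t] for t in tokens if t in scores]
    return sum(rated) / len(rated)

def negation_aware_avg(tokens, neutral=5.0):
    total, n, negate = 0.0, 0, False
    for t in tokens:
        if t == "not":
            negate = True
            continue
        if t in scores:
            s = scores[t]
            if negate:                       # reflect the next word around neutral
                s = neutral - (s - neutral)
                negate = False
            total += s
            n += 1
    return total / n

print(unigram_avg("not happy".split()))         # 6.4 -- reads as mildly positive
print(negation_aware_avg("not happy".split()))  # 1.8 -- reads as negative
```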

anzhichen1999 commented 5 months ago

Given the identified variances in emotional expression across different languages and corpora, as evidenced by the differing happiness distributions and translation challenges outlined in Dodds et al.'s study, how might the reciprocal human-machine learning approach of Fusion be adapted to analyze and interpret these variances? Specifically, how could Fusion, with its capability to integrate diverse human conceptualizations and machine learning, be employed to more accurately capture and understand the nuances in emotional expression, particularly in contexts where direct translation may alter the intended emotional significance of words?

donatellafelice commented 5 months ago

I was troubled by the authors' assertion that they covered languages globally without including any non-Arabic African language... I also wondered how they tokenized each language. The initial counts of the most common words could be very problematic even between German and English, for example: English has a single definite article, "the", while German der/die/das inflects across accusative, dative, nominative, and genitive forms. I did not see how they explained handling these kinds of differences across languages. In some languages such words carry much more meaning than in others, and I would be curious to know what they removed and what they kept in the initial cleaning of their data.
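
On the tokenization question, even a toy example shows how "most frequent word" lists can diverge structurally between languages. A minimal sketch using my own naive tokenizer (the authors' actual per-language preprocessing is not specified here):

```python
import re
from collections import Counter

def tokenize(text: str) -> list:
    """Naive word-character tokenizer; real preprocessing may differ per language."""
    return re.findall(r"\w+", text.lower())

english = "the cat sat on the mat and the dog watched the cat"
german = "der Hund sah die Katze und das Kind gab dem Hund den Ball"

# English piles all definite-article uses onto one token ("the"), while German
# spreads them over der/die/das/dem/den, so frequency-ranked word lists are not
# directly comparable across the two languages.
print(Counter(tokenize(english)).most_common(3))
print(Counter(tokenize(german)).most_common(3))
```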

Twilight233333 commented 5 months ago

I really like the authors' research; positivity bias seems to be very common. Since the study draws on a social media corpus, I would like to know whether the filtering and curation that social media platforms perform has an impact on this bias.

Vindmn1234 commented 3 months ago

In light of your discovery that human languages universally exhibit a positivity bias, how do you believe this inherent positivity influences interpersonal communication and cultural narratives? Additionally, given the consistency of emotional content across languages and the independence of this positivity from word frequency, what implications might your findings have for cross-cultural understanding and the development of international communication tools?

Carolineyx commented 3 months ago

How might this universal positivity bias influence the effectiveness of sentiment analysis algorithms, especially those used in social media analytics, marketing research, or mental health assessments? Additionally, considering the variations in emotional spectrum across languages and the minor differences in happiness distributions observed, how can these insights inform the development of more culturally sensitive computational tools for analyzing sentiment in global communication?
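
One concrete implication for sentiment tools: if word ratings in every language sit above the nominal neutral point, a classifier that thresholds raw averages at the scale midpoint will over-predict positive sentiment. A hedged sketch of re-centering on the lexicon's empirical mean instead (all numbers invented):

```python
# Invented mini-lexicon on a 1-9 scale; its mean sits above the neutral 5,
# mimicking the language-wide positivity offset reported in the paper.
lexicon = {"fine": 6.1, "okay": 5.9, "meh": 5.2, "bad": 2.7, "great": 8.0}
empirical_mean = sum(lexicon.values()) / len(lexicon)  # 5.58 here

def classify(tokens, threshold):
    rated = [lexicon[t] for t in tokens if t in lexicon]
    score = sum(rated) / len(rated)
    return "positive" if score > threshold else "negative"

text = "meh okay".split()
print(classify(text, threshold=5.0))             # "positive" with the raw midpoint
print(classify(text, threshold=empirical_mean))  # "negative" after re-centering
```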