UChicago-Computational-Content-Analysis / Readings-Responses-2024-Winter


5. Machine Learning to Classify and Relate Meanings - [E2] Cheng, Justin, Michael Bernstein, Cristian Danescu-Niculescu-Mizil and Jure Leskovec. #32

Open lkcao opened 8 months ago

lkcao commented 8 months ago

Post questions here for this week's exemplary readings:

  1. Cheng, Justin, Michael Bernstein, Cristian Danescu-Niculescu-Mizil and Jure Leskovec. 2017. “Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions.” WWW 2017: 1-14.
chanteriam commented 7 months ago

I found this article extremely interesting and pertinent. With the next presidential election cycle happening this November, I wonder how generalizable the methods used in this paper are to other forms of online interaction. In particular, is there a relationship between a person's perception of how likely their candidate of choice is to win and their willingness to spread fake/incorrect/inflammatory information about the opposing candidate?

bucketteOfIvy commented 7 months ago

This is a really good study, and I'm fairly convinced by the results that trolling is often just the result of mood and situation rather than something innate to users. However, one of the main controls that the authors use to ensure that users are not innate/repeat trolls is post history, and I'm suspicious this might overestimate the number of unique users. In particular, it's possible that users make "throwaway" accounts before making trollish posts, with the expectation that the account used is likely to be banned by moderation. This effect could be amplified in their CNN case, where they were looking at posts that were flagged by the community. It's doubtful that this issue, if present, would be widespread enough to undo the study's result, but it still raises the question: are there methods that could be used to detect burner accounts trolling?
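As a rough sketch of what burner-account detection could look like: flag accounts whose age and posting history are both very thin at the time of a flagged post. The thresholds and the `account` fields below are my own assumptions for illustration, not anything from the paper's pipeline.

```python
from datetime import datetime, timedelta

# Hypothetical heuristic: an account is a likely "throwaway" if it is both
# very young and has almost no posting history. Thresholds are arbitrary
# assumptions chosen for illustration.
def is_likely_burner(account, now, max_age_days=2, max_posts=3):
    age = now - account["created_at"]
    return age <= timedelta(days=max_age_days) and account["n_posts"] <= max_posts

now = datetime(2024, 1, 15)
fresh = {"created_at": datetime(2024, 1, 14), "n_posts": 1}
veteran = {"created_at": datetime(2019, 6, 1), "n_posts": 842}

print(is_likely_burner(fresh, now))    # young account, thin history
print(is_likely_burner(veteran, now))  # established account
```

In practice one would want to validate any such thresholds against known ban-evasion cases rather than pick them by hand.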

ethanjkoz commented 7 months ago

I enjoyed reading this paper, but I have a few questions regarding its overall design. I am a bit confused as to how they measured or found "troll" posts. Was there a systematic way in which the authors collected "troll" posts from Reddit and CNN? Furthermore, I question whether their simulated discussion board mirrors the development of "troll" posts in real online discussion boards. In the experiment, users are required to participate in the discussion in some way, which might increase their likelihood of posting "troll"-like answers when in a negative mood, whereas in a real-life context they might not have posted at all. How could we design a study that more accurately reflects the natural development of discussions and the emergence of troll comments?

volt-1 commented 7 months ago

The article mentions that negative emotional states can increase the likelihood of a user becoming an online troll. I would like to know if there are specific emotional states (such as anger, frustration, or helplessness) that are more closely related to trolling behavior. Additionally, does this kind of emotion-driven trolling behavior differ in its manifestation from trolling motivated by other reasons (such as seeking attention, playing pranks, or influencing social dynamics)?

chenyt16 commented 7 months ago

This study is interesting! It combines online experiments and observational content analysis, and both are compelling. But I doubt whether 'innate' and 'situational' can be treated as two mutually independent categories. In other words, should we define situational factors as a mediator or as a moderator? Perhaps some people are simply more prone than others to vent their emotions in online communities when in a bad mood, or are more easily provoked by trolling comments from others. While a specific instance of their behavior may be situational, their actions can still reveal a constant pattern.
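The moderator reading of this question is testable: if innate propensity moderates the mood effect, the mood-induced jump in trolling rates should be larger for high-propensity users than for low-propensity ones. A toy simulation of that pattern (my own illustration with made-up probabilities, not the paper's analysis):

```python
import random

random.seed(0)

# Made-up probabilities: the mood boost is larger for high-propensity users,
# i.e. an interaction (moderation) between the innate and situational factors.
def troll_prob(high_propensity, bad_mood):
    base = 0.20 if high_propensity else 0.10
    mood_boost = 0.25 if high_propensity else 0.05
    return base + (mood_boost if bad_mood else 0.0)

def simulate_rate(high_propensity, bad_mood, n=100_000):
    p = troll_prob(high_propensity, bad_mood)
    return sum(random.random() < p for _ in range(n)) / n

# Difference-in-differences: the mood effect within each propensity group.
jump_low = simulate_rate(False, True) - simulate_rate(False, False)
jump_high = simulate_rate(True, True) - simulate_rate(True, False)
print(f"mood effect, low-propensity users:  {jump_low:.3f}")
print(f"mood effect, high-propensity users: {jump_high:.3f}")
```

Under a purely additive (non-moderated) model the two jumps would be equal; on real data the analogous check is an interaction term in a logistic regression.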

joylin0209 commented 7 months ago

Given the finding that the presence of troll posts in a discussion increases subsequent trolling, what specific aspects of discussion context (e.g., topic sensitivity, anonymity level) were found to exacerbate or mitigate this effect?

Marugannwg commented 7 months ago

These are the key features of the classification model:

A model combining all those achieved a .78 accuracy at predicting flagged posts --- How good is this score? With only the context information (topic and recent posts), the accuracy is already .74; does that mean the other factors are less predictive?

Compared to getting an answer, I'm more curious about how to dissect and analyze the questions above.
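One way to start dissecting the .74 vs. .78 comparison is back-of-envelope arithmetic: how much of the full model's lift over chance does the context-only model already capture, and how much of the remaining error do the extra features remove? The 0.50 chance baseline below is my assumption of a balanced evaluation setup, not a figure from the paper.

```python
# Accuracies as quoted in the discussion above; baseline is assumed.
CHANCE = 0.50        # assumed majority/chance baseline for balanced classes
ACC_CONTEXT = 0.74   # context features only (topic + recent posts)
ACC_FULL = 0.78      # all features combined

# Share of the full model's lift over chance captured by context alone.
share_of_lift = (ACC_CONTEXT - CHANCE) / (ACC_FULL - CHANCE)

# Fraction of the context-only model's error removed by the extra features.
rel_error_reduction = (ACC_FULL - ACC_CONTEXT) / (1 - ACC_CONTEXT)

print(f"context-only share of total lift: {share_of_lift:.0%}")
print(f"error removed by extra features:  {rel_error_reduction:.0%}")
```

By this framing context captures most of the lift (~86%), but the remaining features still remove ~15% of the residual error, so "less predictive" does not mean negligible.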

naivetoad commented 7 months ago

How can online platforms design interventions to mitigate the impact of negative mood and discussion context on trolling behavior?

Vindmn1234 commented 7 months ago

How do we define trolling behavior for the purposes of this study? Given the subjective nature of what constitutes trolling, how could we standardize this definition to ensure consistent identification across the experiment and the longitudinal analysis?
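One standard way to check whether a trolling definition yields consistent identification is to have multiple coders label the same posts and compute inter-annotator agreement. A minimal Cohen's kappa for two coders (the toy labels below are made up for illustration):

```python
# Cohen's kappa: agreement between two coders, corrected for chance.
def cohens_kappa(a, b):
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n               # observed agreement
    cats = set(a) | set(b)
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

coder1 = ["troll", "ok", "ok", "troll", "ok", "ok", "troll", "ok"]
coder2 = ["troll", "ok", "troll", "troll", "ok", "ok", "ok", "ok"]
print(round(cohens_kappa(coder1, coder2), 3))  # 0.467 on these toy labels
```

A low kappa would signal that the definition itself is too subjective to support the experiment/longitudinal comparison, before any model is trained.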

QIXIN-LIN commented 7 months ago

I'm curious about whether online communities are taking steps to manage troll behavior. Given that heated discussions can sometimes be desirable within a community for their ability to engage members, is it possible that some communities might actually encourage heightened emotions and even promote "troll behavior"?

Brian-W00 commented 7 months ago

How can online platforms implement the study's findings on mood and context influencing trolling behavior to develop more effective moderation tools or community guidelines?

runlinw0525 commented 7 months ago

Based on the paper's findings, how can online communities and platforms adjust their design or moderation strategies to lessen the impact of negative moods and trolling on user interactions? What measures can be taken to decrease the likelihood of ordinary users engaging in trolling due to contextual or emotional triggers?

ddlxdd commented 7 months ago

As a gamer, I encounter trolling frequently, sometimes even in every game session. This negativity quickly agitates me, especially when trolls aim to ruin the experience. The article seems to define trolling primarily as irrelevant or off-topic negative posts, allowing for an open discussion where participants don't have to fear for their identity or position. However, in reality, trolling often involves direct attacks on individuals rather than impersonal figures. Do the study's findings apply to these more personal forms of trolling?

YucanLei commented 7 months ago

The study primarily focuses on a specific online news discussion community and may not fully capture the diversity of online platforms and user behaviors. It would be beneficial to explore whether the findings hold true across different types of online communities and discussion platforms.

HamsterradYC commented 7 months ago

Some platforms might inadvertently promote trolling by emphasizing controversial comments to increase user engagement. How do we adequately account for the impact of social media platform design and algorithms on trolling behavior?
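The design concern above can be made concrete with a toy ranking sketch (my own illustration; all fields and weights are assumptions): a feed that scores comments purely by engagement surfaces the controversial post, while adding a flag penalty demotes it.

```python
# Three hypothetical comments with made-up engagement and flag counts.
comments = [
    {"id": "calm",          "replies": 3,  "flags": 0},
    {"id": "controversial", "replies": 40, "flags": 12},
    {"id": "helpful",       "replies": 15, "flags": 0},
]

def engagement_score(c):
    # Engagement-only ranking: replies are all that matters.
    return c["replies"]

def moderated_score(c, flag_penalty=5):
    # Same ranking, but community flags push a comment down the feed.
    return c["replies"] - flag_penalty * c["flags"]

by_engagement = [c["id"] for c in sorted(comments, key=engagement_score, reverse=True)]
by_moderated = [c["id"] for c in sorted(comments, key=moderated_score, reverse=True)]
print(by_engagement)  # controversial post ranks first
print(by_moderated)   # controversial post ranks last
```

Real ranking systems are far more complex, but the sketch shows how the objective a platform optimizes directly shapes which behavior gets amplified.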