HyunkuKwon opened this issue 3 years ago
This is quite the interesting paper. While their analysis and results on CNN comments seem to be consistent with their general results (i.e., bad mood induces trolling, prior trolling increases the probability of future trolling, etc.), I wonder if there might be a selection bias. By focusing only on CNN commenters they are restricting their analysis to a small subset of the population. What rules of thumb or heuristics should we seek to meet when designing experimental paradigms to ensure we are minimizing the probability of selection bias?
I have two concerns:
I agree with @Raychanan that the classification of trolling is very subjective--looking at the examples provided of trolling behavior, there are a few that I'm not sure I would have characterized in the same way. I think that might lead to interesting areas for future research, though, as these different kinds of behaviors could be further distinguished. In comparing different behaviors classified as trolling, such as swearing, off-topic comments, and sarcasm, are there differences in the factors that influence them? Are the same or different people responsible for these various types of posts?
To what extent do the results vary on different media outlets, e.g., The Washington Post, Quora, etc.?
Are there any other factors leading to antisocial behaviors besides the two mentioned in the paper? I am thinking about the tone or the specific language of the posts.
Also, the relationship between posters and respondents may influence the extent of antisocial behaviors. Some researchers have pointed out a potentially powerful effect of social distance in exacerbating online gender bias, considering that group diffusion of responsibility can lead to increased dehumanization. Do you think this causes antisocial behaviors, too? Thank you.
Extending from @zshibing1 's question, I am curious if the design of forums (especially whether a forum allows anonymity or not) affects the extent of trolling. My guess is that forums that are completely anonymous (e.g. ejmr, sjmr) are likely to have more troll posts than social media comment sections where each user's identity is verified (e.g. Facebook, Twitter).
I would take mood and context as secondary causes of trolling. Personally, I would say the content of articles/discussions and personal characteristics are the primary motives. Would it be possible to compute big-five scores using Youyou et al. (2015) and explore the relationships between personality, the content of articles/discussions, and trolling behavior?
Youyou, W., Kosinski, M., & Stillwell, D. (2015). Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences, 112(4), 1036-1040.
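The analysis proposed above could be sketched roughly as follows. This is a minimal illustration, not the paper's method: the trait scores and troll-post fractions below are fabricated, and a real study would predict the Big Five from users' text (as Youyou et al. do from digital footprints) before correlating them with trolling rates.

```python
# Sketch: associate a per-user personality score with a per-user trolling rate.
# All data below is hypothetical, purely for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical users: predicted agreeableness vs. observed troll-post fraction.
agreeableness = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]
troll_fraction = [0.00, 0.02, 0.05, 0.10, 0.15, 0.20]
r = pearson(agreeableness, troll_fraction)
print(round(r, 2))  # → -0.97 (strong negative association in this toy data)
```

In practice one would also control for article content and discussion context, since the comment's point is precisely that personality and content may confound the mood/context effects.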
Like some others in this thread, I'm a little concerned about some of the methodologies employed here - such as author annotations (as opposed to anonymous third-party evaluations). Some of the paper's arguments seem like a stretch - 'anyone can troll - it's just dependent on their mood' - which seems like an unjustifiable conclusion given their results and methodology.
That said, it was an enjoyable and interesting read. I'm curious as to whether we can use NLP and ML to classify posts automatically as troll posts - is there any work published relating to this?
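There is indeed published work on this (the paper itself trains a classifier over user features). A minimal sketch of text-only troll-post classification, in the spirit of the question above, might be a bag-of-words Naive Bayes model. The training examples and labels below are made up for illustration; a real system would be trained on annotated comment data and would need far richer features.

```python
# Minimal multinomial Naive Bayes sketch for troll-post classification.
# Toy training data is hypothetical, not from the paper.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train_nb(docs):
    """Count words per label from (text, label) pairs."""
    word_counts = {"troll": Counter(), "ok": Counter()}
    label_counts = Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    return word_counts, label_counts, vocab

def predict(model, text):
    """Return the label with the highest log posterior (add-one smoothing)."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    scores = {}
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

train = [
    ("you are an idiot and this article is garbage", "troll"),
    ("what a stupid take, typical idiot reporter", "troll"),
    ("thanks for the thoughtful analysis of the policy", "ok"),
    ("interesting article, I learned a lot about the topic", "ok"),
]
model = train_nb(train)
print(predict(model, "this idiot wrote garbage"))  # → troll
```

Of course, pure lexical models will miss sarcasm and off-topic comments - exactly the ambiguous categories raised earlier in this thread.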
(1) Might the group of users who comment on CNN.com be a special group of users, one that might not represent the overall online user population?
(2) Since the moderators will ban some of the troll users, I wonder if they may open a new account and come back as a "naive user." Would this be a latent problem for this paper? Also, might bot users or paid trolls be a problem here?
(3) It would be interesting to consider whether the normalization of trolling behavior nowadays increases the probability that a user produces a troll post after a bad mood or bad context.
Like a lot of others, I was not convinced by the authors' use of 'trolling' vs 'non-trolling' as a classifier (and their definitions of trolling), but I would be really interested to see a more complex evaluation of the behaviors they included within trolling (especially off-topic comments). Still, in the context of common [esp. media] rhetoric that 'trolls' and bots produce 'all trolling content online', it is a really revealing study showing that, in fact, cumulatively, regular 'non-trolls' also contribute a large share of this content.
I would also be really curious to see whether users' 'interests' align with the types of discussions in which they do and do not troll (example: perhaps for many people this is politics, and so the level of aggression in previous comments may enable an already existing tendency to 'troll' on a news article about presidential candidates).
Although I don't think trolling is innate, I do agree with the authors that "trolling can be contagious and that ordinary people, given the right conditions, can act like trolls". However, I feel like those who participate in commenting are already a highly selected group. I assume they somehow pursue interactions more often and have more "opinions" to speak out. In this case, trolling behavior is indeed inspired by a discussion context. I would be more interested in how to identify those who seldom comment but suddenly get into trolling. What makes them troll and how to identify them and explain their behaviors?
I wouldn't challenge a lot on the definition of trolling, as I think the important point of this paper is exploring the cause of certain behaviour within the definition given by the paper itself.
The conclusions and methods of the paper are very intuitive. It also points toward directions for designing better online discussion forums. I am curious to learn how the authors identify the causal effect of bad mood and environment on trolling behaviors. Thanks!
I found the methods used in this paper appropriate and easy to understand. I am wondering, however, if these methods are conducive to picking up on more subtle forms of trolling (e.g., posts that aren't blatantly unrelated to the article or don't contain outright hate speech). I also wonder whether different types of posts merit different degrees of trolling. It seems like these classification strategies could be used to keep hate speech from actually getting published on online forums. I think in asserting that trolling is not innate but can spread, the authors assume the counterfactual to being a troll is commenting appropriately, when I think the counterfactual may actually be saying nothing at all. If someone who is likely to troll sees a lot of trolls on a post, they may be more likely to join in, whereas if there were no trolling comments, they may refrain from posting altogether.
This paper provides us with a great way to conduct psychological research with text data. Since the main platform here is CNN, my question is whether the same approach employed in the paper can be applied to social platforms like Reddit or Twitter?
This is a very interesting paper, and I think it feels intuitive that both mood and (even more) ambient trolling can result in trolling behavior. My question is: how realistic is the way they simulate/impose mood in their experiment? How similar can this be to real-life mood variations?
Post questions about the following exemplary reading here:
Cheng, Justin, Michael Bernstein, Cristian Danescu-Niculescu-Mizil and Jure Leskovec. 2017. “Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions.” WWW 2017: 1-14.