UChicago-Computational-Content-Analysis / Readings-Responses-2024-Winter


2. Counting Words & Phrases to Trace the Distribution of Meaning -[E4] Stuhler, Oscar. #51

lkcao opened 6 months ago

lkcao commented 6 months ago

Post questions here for this week's exemplary readings:

  1. Stuhler, Oscar. 2021. “What’s in a category? A new approach to Discourse Role Analysis.” Poetics 88.

Twilight233333 commented 6 months ago

I think the author's analysis of the German word for refugee is very insightful, and the data analysis is also compelling. What I'm curious about is whether, in each case, we can find a set of words that indicate different tendencies for comparative analysis, and whether word choice reflects perceptions at the media level or at the citizen level. When an individual citizen chooses to use one word over another, does that citizen fully understand the meaning of the word they have chosen?

Caojie2001 commented 6 months ago

I think that the inclusion of syntactic relations, obtained with dependency parsers, gives this article an interesting perspective for content analysis. Compared with co-occurrence relationships between entities of interest and other vocabulary, the syntactic location of an entity is a more objective attribute and, thus, more resilient to evolving discourse role systems. My question is whether it's possible to apply dependency parsers to texts in languages other than English and German, especially analytic languages such as Chinese. In Chinese, words are organized in a rather loose and irregular manner: many nouns are identical to their verbal counterparts, and particles are far less common than in English. From my perspective, these characteristics might pose obstacles to the construction of dependency parsers.
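For what it's worth, off-the-shelf dependency parsers for Chinese do exist. Below is a minimal sketch, assuming spaCy and its pretrained zh_core_web_sm model (plus its segmenter dependencies) are installed via `python -m spacy download zh_core_web_sm`; this is not the pipeline used in the paper, just an illustration that the head/dependent relations DRA relies on are recoverable for Chinese text.

```python
# Hedged sketch: what a pretrained Chinese dependency parser returns.
# Assumes spaCy and zh_core_web_sm are installed; the example sentence is invented.
import spacy

nlp = spacy.load("zh_core_web_sm")
doc = nlp("难民需要帮助。")  # "Refugees need help."

for token in doc:
    # Each token carries a dependency label and a syntactic head,
    # so subject/object roles can be read off directly.
    print(token.text, token.dep_, token.head.text)
```

Whether the parse quality for Chinese matches that for German is an empirical question, but the tooling itself is not the bottleneck.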

bucketteOfIvy commented 6 months ago

This question isn't very deep / is slightly askew of the reading, but I was surprised that the author's analysis resulted in such a large number of clusters (21). Reading the supplemental text, it appears that 17 clusters were found to be statistically optimal. These numbers feel incredibly large relative to many of the cluster analyses I've seen in the past, which tend to have around 3-7 clusters in total. Are large numbers of clusters a typical characteristic of cluster analyses on textual data? If so, are there standard ways to reduce the resultant number of clusters to make them more manageable?
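One generic way to interrogate or shrink a cluster count (not the paper's own selection procedure, which is detailed in its supplement) is to sweep the number of clusters and keep the silhouette-optimal solution; hierarchical clustering cut at a coarser level is another common option. A minimal sketch with scikit-learn and placeholder data:

```python
# Hedged sketch: choose k by sweeping and scoring with the silhouette coefficient.
# X is random placeholder data standing in for document features/embeddings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # placeholder feature matrix

scores = {}
for k in range(2, 25):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```

Note that statistically optimal and substantively interpretable cluster counts often diverge, which may be part of the answer here.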

Dededon commented 6 months ago

Stuhler's paper is my favorite among this week's readings. I deeply agree with Stuhler's philosophy that every computational method designed for social science research should have roots in sociological theory. The methods don't need to be complicated, but they should echo pre-existing sociological theories. This paper's RQ is simple: how has the meaning of the word refugee shifted in contemporary German newspapers? Compared with the Word2Vec approach, I'm more persuaded by the method of discourse role analysis, as it provides more clarity about inter-word relations. Drawing on Kieran Healy's point (Healy 2017), "nuance" or multidimensionality is not good practice in sociological research, and Word2Vec suffers from exactly that issue.

References: Healy, Kieran. 2017. "Fuck Nuance." Sociological Theory 35(2): 118-127.
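To make the "clarity about inter-word relations" concrete, here is a hedged sketch that tallies the verbs for which a target noun appears as grammatical subject versus object. It uses spaCy's English model and invented example sentences purely for illustration; the paper itself works on German text with its own pipeline and a richer role typology.

```python
# Hedged sketch: recover agent-like vs. patient-like verb contexts for a target noun.
# Assumes en_core_web_sm is installed; sentences are invented for illustration.
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")
doc = nlp("Refugees crossed the border. The state registered the refugees.")

agent_verbs, patient_verbs = Counter(), Counter()
for token in doc:
    if token.lemma_.lower() == "refugee":
        if token.dep_ == "nsubj":                    # target acts (agent-like)
            agent_verbs[token.head.lemma_] += 1
        elif token.dep_ in ("dobj", "nsubjpass"):    # target is acted upon
            patient_verbs[token.head.lemma_] += 1

print(agent_verbs, patient_verbs)
```

Unlike a Word2Vec neighborhood, each count here is tied to an explicit, inspectable syntactic relation.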

hongste7 commented 6 months ago

I'm wondering to what extent the authors of the "Echoes of Power" paper rely on their corpus to find the results they did. The chosen environments of Wikipedia discussions and U.S. Supreme Court oral arguments are unique in their structure and dynamics. Wikipedia's collaborative and open-source nature fosters a specific type of interaction, while the Supreme Court's formal and hierarchical setting imposes its own communication norms. These environments inherently emphasize power dynamics, which might be less pronounced or manifested differently in other contexts. For instance, in a less formal or less hierarchical setting, language coordination might not be as strongly linked to power dynamics but could instead reflect collaboration, social conformity, or cultural communication norms.
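For readers less familiar with the measure at issue: coordination in "Echoes of Power" is, roughly, the probability that a reply uses a function-word category given that the prompt did, minus the reply's base rate of using it. The toy sketch below pools markers into a single made-up category and uses invented exchanges; it illustrates the idea and is not the authors' implementation, which distinguishes several function-word categories and aggregates across speakers.

```python
# Hedged toy sketch of a single-category coordination score:
# P(reply uses marker | prompt uses marker) - P(reply uses marker).
# MARKERS and the example exchanges are invented stand-ins.
MARKERS = {"i", "we", "you", "the", "a", "of"}

def uses_marker(text, markers=MARKERS):
    return any(tok in markers for tok in text.lower().split())

def coordination(exchanges):
    """exchanges: list of (prompt, reply) string pairs."""
    reply_uses = [uses_marker(r) for _, r in exchanges]
    conditioned = [uses_marker(r) for p, r in exchanges if uses_marker(p)]
    base = sum(reply_uses) / len(reply_uses)
    cond = sum(conditioned) / len(conditioned) if conditioned else 0.0
    return cond - base

pairs = [("We should merge the drafts.", "I think we can merge them."),
         ("Citations needed here.", "Sure, adding them now.")]
print(coordination(pairs))
```

Since the measure itself is corpus-agnostic, the question of whether the coordination-power link generalizes beyond these two settings is indeed an empirical one.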

runlinw0525 commented 6 months ago

Thinking about how Discourse Role Analysis was used to study the portrayal of refugees in German media, I'm curious: Could we use a similar approach to look at how the media in other countries talks about different social issues or groups? How do you think it would work in those different contexts?

Carolineyx commented 6 months ago

Would it be possible to use this method to analyze the full range of descriptions people give when defining an identity-related word? For example, I aim to understand how individuals between the ages of 18 and 35 define their identity (who they are in society).

yueqil2 commented 6 months ago

The author clarifies several times that discourse roles are latent constructs (one of the crucial theoretical considerations). What difference does treating them as constructs make to this project? Why does it matter?

Brian-W00 commented 6 months ago

How does the new approach to Discourse Role Analysis proposed in this paper advance our understanding of the dynamic and evolving nature of identity categories in media discourse, particularly in the context of refugee coverage?

JessicaCaishanghai commented 4 months ago

How does the proposed Discourse Role Analysis (DRA) method utilize Natural Language Processing techniques to identify and differentiate various identity categories within discourse, such as the nuanced representations of refugees in German news media?