UChicago-Computational-Content-Analysis / Readings-Responses-2024-Winter


6. Large Language Models (LLMs) to Predict and Simulate Language - [E2] W. Guo, A. Caliskan. #25

Open lkcao opened 8 months ago

lkcao commented 8 months ago

Post questions here for this week's exemplary readings:

  1. W. Guo, A. Caliskan. 2020. “Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases.”
volt-1 commented 7 months ago

A key challenge lies in interpreting bias measurements from language models: do they reflect bias inherent in the model itself, or the model's reflection of existing human biases (for example, because the training data on intersectional groups is imbalanced)?
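
As a concrete reference point for what "the result" being interpreted actually is: Guo and Caliskan's CEAT builds on the WEAT effect size, which measures how strongly two target groups associate with two attribute sets in embedding space. Below is a minimal, illustrative sketch of that effect size in NumPy; the function names are mine, and the inputs are assumed to be precomputed embedding vectors rather than anything from the paper's actual code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Mean similarity of w to attribute set A minus its mean similarity to attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size (Cohen's d) for target groups X, Y against
    attribute sets A, B. All inputs are lists of embedding vectors.
    A larger |d| indicates a stronger differential association, i.e. more measured bias."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    pooled = np.array(x_assoc + y_assoc)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(pooled, ddof=1)
```

CEAT extends this by sampling many contextualized embeddings for each word and reporting a distribution of effect sizes, which is part of why the "model bias vs. reflected human bias" question is hard to disentangle.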

icarlous commented 7 months ago

I am interested in how we can further compare or align human and machine modeling and use of language. Are there more complex or nuanced patterns that can be captured algorithmically?
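
One standard starting point for this kind of human-machine comparison is to correlate model similarity scores with human similarity judgments (as in WordSim-style evaluations). A minimal sketch, assuming precomputed embedding vectors and an existing set of human-rated word pairs; all names here are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def human_model_alignment(word_pairs, human_scores, embeddings):
    """Correlate human similarity judgments with model similarities.
    word_pairs: list of (w1, w2) tuples; human_scores: list of floats;
    embeddings: dict mapping word -> np.ndarray vector.
    Returns Spearman's rho between the human and model rankings."""
    model_scores = []
    for w1, w2 in word_pairs:
        u, v = embeddings[w1], embeddings[w2]
        model_scores.append(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    rho, _ = spearmanr(human_scores, model_scores)
    return rho
```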

JessicaCaishanghai commented 6 months ago

I feel it's quite important for linguists to get involved and help decode these semantic features. For example, how can we define different styles other than through semantic features? Sometimes style is just a vibe.
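
One way researchers operationalize style apart from semantics is stylometry: content-independent cues such as function-word frequencies, sentence length, and punctuation. A toy sketch of such features follows; the function-word list is illustrative, not exhaustive.

```python
import re
from collections import Counter

# A small set of English function words; style is often operationalized
# through their relative frequencies, largely independent of topic/content.
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "but", "or",
                  "is", "was", "that", "this", "it", "not", "with", "as"}

def stylometric_features(text: str) -> dict:
    """Toy content-independent style features: function-word rates,
    average sentence length (in tokens), and punctuation rate."""
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(tokens)
    n = max(len(tokens), 1)
    features = {f"fw_{w}": counts[w] / n for w in FUNCTION_WORDS}
    features["avg_sentence_len"] = n / max(len(sentences), 1)
    features["punct_rate"] = len(re.findall(r"[,;:]", text)) / n
    return features
```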