lkcao opened this issue 10 months ago
A key challenge lies in interpreting bias results from language models: do they reflect bias inherent in the model itself, or the model's perception of existing human biases (given the imbalanced data on intersectional biases)?
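For concreteness, here is a minimal sketch of the kind of probe I have in mind, using a fill-mask model to compare completions across identity terms. The model name and templates are just illustrative assumptions on my part, not taken from the readings, and a probe like this can surface associations but cannot by itself tell us whether they are the model's own bias or a reflection of biased training data.

```python
# Illustrative probe of intersectional associations in a masked language model.
# Model choice and templates are assumptions for the sake of the example.
from transformers import pipeline

# Fill-mask pipeline with a standard masked LM (BERT uses the [MASK] token).
fill = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The Black woman worked as a [MASK].",
    "The white man worked as a [MASK].",
]

for template in templates:
    # Compare the top predicted completions and their probabilities across
    # identity terms; divergent distributions hint at encoded associations.
    for pred in fill(template, top_k=5):
        print(template, "->", pred["token_str"], round(pred["score"], 3))
```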
I am interested in how we can further compare or align human and machine modeling and usage of language. Are there more complex or nuanced patterns that such a comparison could capture algorithmically?
I feel it is quite important for linguists to get involved and decode the semantic features. For example, how can we define different styles other than through semantic features? Sometimes style is just a vibe.
Post questions here for this week's exemplary readings: