UChicago-Computational-Content-Analysis / Readings-Responses-2024-Winter


4. Word Embeddings to Explore Meaning Spaces- [E2] Aceves, P. & Evans, J. #39

Open lkcao opened 8 months ago

lkcao commented 8 months ago

Post questions here for this week's exemplary readings:

  1. Aceves, P. & Evans, J. 2023. “Mobilizing Conceptual Spaces: How Word Embedding Models Can Inform Measurement and Theory Within Organization Science”. Organization Science.

donatellafelice commented 7 months ago

This may be a simple question, but regarding possible biases in the source material: I found the point that these biases are actually required for the analysis very interesting, since without them crucial social and cultural characteristics might be missing. Could you expand on what is meant by "to the degree that understanding the nature of conceptual associations within communities and contexts of study is central, researchers will require these biases for analysis. If they were not included, the model, and therefore the research design, would miss critical social and cultural regularities that characterize their context of study."? Does this mean that biases must be included, or merely acknowledged and accounted for? And at what point do you know they have been accounted for, given the inevitable black-box nature discussed?

chenyt16 commented 7 months ago

This paper aims to provide a practical roadmap for using embedding models in research and to offer a theoretical guide for evaluating these models and linking them to theoretical constructs. There are several neural layers in the embedding process. If we added activation functions between these layers, would the model perform better?
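To make the question concrete, here is a minimal sketch (toy vocabulary, random weights, nothing from the paper) of a skip-gram-style forward pass with and without an added nonlinearity. Note that classic word2vec uses the purely linear version: the "hidden layer" is just an embedding lookup with no activation function, which is part of why the learned vectors have linear-algebraic structure.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 10, 4

# Input (embedding) and output weight matrices, as in a skip-gram model.
W_in = rng.normal(size=(vocab_size, dim))
W_out = rng.normal(size=(vocab_size, dim))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward_linear(center_id):
    """Classic word2vec: embedding lookup with no activation function."""
    h = W_in[center_id]            # linear projection
    return softmax(W_out @ h)      # context-word probabilities

def forward_nonlinear(center_id):
    """Hypothetical variant that inserts a tanh between the layers."""
    h = np.tanh(W_in[center_id])
    return softmax(W_out @ h)

p_lin = forward_linear(3)
p_nl = forward_nonlinear(3)
```

Both versions output a probability distribution over context words; whether the nonlinear variant performs better would be an empirical question, and it may change the geometric interpretability of the resulting space.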

Marugannwg commented 7 months ago

I'm more interested in how to derive different spaces from the embeddings of the same corpus. Regarding the introduction of the conceptual space diagram: "By taking the centroid vector of each person’s word vectors, we can arrive at each inventor’s position within the conceptual space of innovation."

Does that mean we can take more levels of "document" vector and reveal more about the data structure?
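As a concrete illustration of the centroid idea the quote describes (toy 2-d vectors and hypothetical words, not the paper's data), the same averaging can be applied at any level of aggregation: words, documents, people, or whole groups:

```python
import numpy as np

# Toy word vectors; in practice these would come from a trained embedding model.
word_vecs = {
    "battery": np.array([1.0, 0.0]),
    "circuit": np.array([0.8, 0.2]),
    "engine":  np.array([0.6, 0.4]),
}

def centroid(words, vecs):
    """Average the word vectors -- one simple way to position a
    'document' (or inventor, or group) in the conceptual space."""
    return np.mean([vecs[w] for w in words], axis=0)

# An inventor's position = centroid of the words they use.
inventor_a = centroid(["battery", "circuit"], word_vecs)
```

So yes, in principle you can keep aggregating: word vectors into document vectors, document vectors into person- or organization-level vectors, and then study the structure (distances, clusters, dimensions) at each level.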

ana-yurt commented 6 months ago

I am interested in whether we can use word embeddings to measure not just word-level meaning but higher-level narratives. For example, can we detect references to certain news narratives in a text?
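One hedged sketch of what such a measurement could look like, using toy vectors and a hypothetical set of narrative "seed" words (nothing here comes from the paper): represent the narrative as the centroid of its seed words, represent a document as the centroid of its words, and score the document by cosine similarity to the narrative anchor.

```python
import numpy as np

# Toy vectors; in practice, use embeddings from a trained model.
seed_vecs = np.array([[1.0, 0.1],     # hypothetical seed words chosen
                      [0.9, 0.2]])    # to represent one news narrative
narrative_anchor = seed_vecs.mean(axis=0)

doc_vec = np.array([0.9, 0.1])        # centroid of a document's word vectors

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Higher score = the document sits closer to the narrative in the space.
score = cosine(doc_vec, narrative_anchor)
```

This only captures narratives as regions of the embedding space, which is a strong simplification; narratives also involve sequence and framing that a bag-of-vectors centroid cannot see.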