uchicago-computation-workshop / Spring2021

Repository for the Spring 2021 Computational Social Science Workshop

04/01: Ana-Andreea Stoica #1

Open ehuppert opened 3 years ago

ehuppert commented 3 years ago

Comment below with questions or thoughts about the reading for this week's workshop.

Please make your comments by Wednesday 11:59 PM, and upvote at least five of your peers' comments on Thursday prior to the workshop. You need to use 'thumbs-up' for your reactions to count towards 'top comments,' but you can use other emojis on top of the thumbs up.

rkcatipon commented 3 years ago

Hi Dr. Stoica, thank you for kicking off our lecture series for this quarter! I've always been intrigued by the use of homophily in both social science research and social media platforms. The researchers who coined the term, Merton and Lazarsfeld, actually excluded the survey results of black residents from their original project. Although black residents actually favored integrated housing, the researchers relied on the white residents who stated a preference for segregation and concluded that 'like associates with like.' Despite this original data-collection bias, homophily is now everywhere and widely accepted, as your paper states. So why am I dredging up old history on a sociology term that is the norm? I guess I'm trying to better understand whether homophily already existed within the populations on social media networks, or whether it was assumed by the technology and then enforced by the platforms. I think your work does a great job demonstrating the bias in a machine system, and I really enjoyed learning about your methods.

alevi98 commented 3 years ago

Hi Dr. Stoica,

Thank you for coming to our workshop :) !! As a math person myself, something that really strikes me is how in-depth and thoughtful all of your graph metrics and theoretical models are. I really hope more pure math people venture into the social sciences and even computer science in the coming years.

The second paper seems to provide a clear theoretical impetus for data justice. With the first paper, something I noticed (and I might have missed something obvious) was that you include the term "efficiency" from the outset. Throughout the paper, there is reference to the central question of whether or not diversity seeding impacts an eventual goal of "efficiency." For one, I would be interested in hearing how exactly you define efficiency.

Second, why did you choose "efficiency" as the yardstick that diversity seeding either improves or detracts from? Some might argue that diversity should not serve "efficiency," but rather is important for its own sake: that we should strive for diversity, equity, and inclusion because we should be centering human rights and validating people for their own sake, not because it serves efficiency, a profit motive, or some external purpose such as the pursuit of aggregate growth (which some might say is unsustainable). How do you respond to this argument? Do you think it is valid to promote diversity for its own sake, or do you think diversity inherently must answer to efficiency?

JadeBenson commented 3 years ago

Dr. Stoica, thank you so much for this inspiring research. I am very interested in how computational methods can be applied to further social justice, and your research both carefully describes the exact problems and proposes win-win solutions. I want to hear more about the next steps after this research and the behind-the-scenes work.

I'm curious how you introduce this research to the relevant stakeholders who could alter their algorithms. Do you approach Instagram with your glass ceiling research? Vaccine distribution groups with your seeding network paper? This work seems incredibly important and improves both equity and efficiency; I would want everyone to respond accordingly. I think publications and presentations like this one are crucial steps in this process. Are there any other actions you're taking and, perhaps optimistically, anything we can do as upcoming computational social scientists to help?

Also, in developing these projects, do you work with interdisciplinary teams that help explore different facets of the problems? If not, do you see a place for this type of collaboration?

jinfei1125 commented 3 years ago

Dear Dr. Stoica, Thanks for sharing these interesting papers with us! I really enjoyed the study of 'social influence maximization' because nowadays we all face similar scenarios, and I think many concepts in the biased preferential attachment model, such as homophily and rich-get-richer, make a lot of sense.
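To check my own understanding of the model, here is a minimal sketch of how I picture a two-group biased preferential attachment process; the parameter names (n, r, rho) and the acceptance rule are my own simplification, not necessarily the paper's exact formulation:

```python
import random

def biased_preferential_attachment(n=2000, r=0.3, rho=0.7, seed=0):
    """Toy two-group network growth: each new node joins the minority group 'B'
    with probability r, picks an existing node proportionally to its degree
    (rich-get-richer), and a cross-group edge is accepted only with probability
    rho (homophily). My own simplification, not the paper's exact model."""
    rng = random.Random(seed)
    group = {0: 'R', 1: 'B'}          # start with one node from each group
    degree = {0: 1, 1: 1}
    pool = [0, 1]                     # node ids repeated by degree, for sampling
    for new in range(2, n):
        g = 'B' if rng.random() < r else 'R'
        while True:
            old = rng.choice(pool)    # degree-proportional choice
            if group[old] == g or rng.random() < rho:
                break                 # same-group edge, or accepted cross-group edge
        group[new], degree[new] = g, 1
        degree[old] += 1
        pool += [new, old]
    return group, degree

group, degree = biased_preferential_attachment()
for g in ('R', 'B'):
    members = [v for v in group if group[v] == g]
    print(g, len(members), sum(degree[v] for v in members) / len(members))
```

When I run something like this, the minority group tends to end up with a lower average degree, which matches my reading of the rich-get-richer plus homophily mechanism, though I may well be oversimplifying.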

I am a little confused about the difference between agnostic seeding, diversity seeding, and parity seeding, even though they are accompanied by mathematical definitions in the article. Can you explain them more in your presentation?

Also, in your work, you mention that:

Our results contribute to recent evidence suggesting that sensitive attributes should not be ignored but can be leveraged to simultaneously improve fairness and accuracy

Can you give some examples of these sensitive attributes?

Lastly, I feel like the concept of an 'early adopter' is similar to a KOL (Key Opinion Leader) in advertising? Please correct me if I have this wrong.

vinsonyz commented 3 years ago

Dear Ana-Andreea, Thank you so much for your presentation! My question is why you prefer a dynamic model to a static model.

ydeng117 commented 3 years ago

Dear Dr. Stoica, I agree that our online social networks can mirror actual inequalities and may also amplify them. Do you suggest that we should apply affirmative action when designing online social algorithms? When dealing with issues like hate speech or hate crimes against minorities, such as the recent anti-Asian hate incidents, how can we use computational algorithms to improve the situation?

jsoll1 commented 3 years ago

Hi, I'm excited for your presentation tomorrow! Do you think it's possible to predict what kinds of externalities an algorithm will have before its implementation? What kinds of inequities do you think are important to watch out for?

luxin-tian commented 3 years ago

Hello Dr. Stoica, thank you very much for sharing. In the real world, the designers and developers of algorithms at tech companies often have insufficient incentives to reduce biases and inequalities, and sometimes the profit-maximizing objective is unfortunately aligned with not taking the human side into consideration. How could your research help strengthen the case for regulation and legislation?

skanthan95 commented 3 years ago

Thanks for presenting at our workshop! I'm curious about how we'd bridge the gap you describe in recommender systems (basically, what the next steps would be in applying this research practically / swaying major streaming platforms to make some major, but justified, algorithmic changes). Also, I think it's great that you included a glossary before your introduction in the 2020 paper, really helps with accessibility.

bowen-w-zheng commented 3 years ago

Hi Dr. Stoica,

Thank you for the amazing work! I really appreciate how you make a strong case for diversity even subject to some efficiency constraints. Though I think diversity has its intrinsic value outside the scope of efficiency, on a practical level, this result has a better chance of pushing certain actions forward.

I have been thinking about the relationship between affirmative action and efficiency. If we think of efficiency as an optimization problem and each hiring decision as an update in the solution space, affirmative action seems to be a good strategy for pursuing the optimum. Repeatedly hiring from the same group might lead to a local optimum but not a global one. Firstly, a homogeneous group of solutions could mean that we are taking small step sizes that will not get us out of the local optimum. Secondly, the agent/algorithm might observe only a subspace of the feature space and only update within that subspace, especially when the problem is complex and the relevant feature space is high-dimensional. Affirmative action would be similar to parallelizing the algorithm with relatively far-apart initializations and might actually increase efficiency!
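To make the intuition concrete, here is a toy multi-start hill-climbing sketch (entirely my own illustration, with a made-up objective function, not your model): a single initialization gets stuck on a nearby local peak, while several far-apart initializations, which is roughly how I think of diverse hiring, reach much higher ground.

```python
import math
import random

def objective(x):
    # Bumpy 1-D landscape: many local peaks, with the best region around x ~ 8-9.
    return math.sin(3 * x) - 0.05 * (x - 8) ** 2

def hill_climb(x0, steps=2000, step_size=0.1, rng=None):
    """Greedy local search: accept only small moves that improve the objective."""
    rng = rng or random.Random(0)
    x, val = x0, objective(x0)
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size)
        if objective(cand) > val:
            x, val = cand, objective(cand)
    return val

rng = random.Random(42)
single = hill_climb(0.0, rng=rng)                                 # one "homogeneous" start
multi = max(hill_climb(x0, rng=rng) for x0 in range(-10, 11, 4))  # far-apart starts
print(f"single start: {single:.3f}   multi-start: {multi:.3f}")
```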

My intuition here is imprecise and potentially very biased. Looking forward to hearing more about your methods and how you construct a precise and unbiased model!

FranciscoRMendes commented 3 years ago

Hi Dr. Stoica

I think people have raised this issue before me, but I am also curious as to why efficiency is an end goal for diversity. Why not diversity for its own sake? How do you define efficiency?

a-bosko commented 3 years ago

Hi Dr. Stoica,

Thank you for sharing your papers and presentation with us! It is interesting to learn about social recommendations on social media and the algorithms that underlie these features. Importantly, it was eye-opening to learn about the algorithmic glass ceiling that hinders different groups from achieving equal representation.

In the article "Algorithmic Glass Ceiling in Social Networks", the conclusion is that while algorithms do not automatically create disparity, they can contribute to the worsening of inequality. There is also mention of how differential homophily can contribute to the reversal of the "glass ceiling" effect. I was wondering if you could explain more about how we can achieve differential homophily in different populations, and possibly in society as a whole.

WMhYang commented 3 years ago

Thank you very much for sharing your work. I am not familiar with this field, but I find the idea of a diversity-efficiency trade-off interesting. I was wondering if it is always possible to ensure that diversity and efficiency complement each other in the real world. Could we modify the seed size to achieve that target? I apologize if the question looks naive, and thanks again for the papers.

linghui-wu commented 3 years ago

Thank you for bringing us such exciting work, Dr. Stoica! I believe there is a good deal of scholarly research on how recommendation algorithms can amplify socioeconomic inequality across different online platforms. I am interested in the diversity and efficiency trade-off you emphasize in the study and, like @luxin-tian mentioned before, what do you think might incentivize firms to eliminate the underlying biases?

mikepackard415 commented 3 years ago

Hi Dr. Stoica, thank you for sharing this really interesting work with us! I'm curious about the extent to which you think your results in these papers map onto society at large, beyond social networking. Network structure (clustering, homophily, etc.) may be most observable in social media, but these patterns likely existed in human relationships prior to the last 15 years. If Instagram popularity can be analogous to wealth, would you venture to say that the results of your paper likely apply to entrepreneurs from majority/minority groups (with varying levels of homophily) operating in the real economy? For example, a quick Google search tells me that at the end of 2020, only 7.8% of S&P 500 companies had female CEOs. Does the glass ceiling you study in these online contexts map onto this glass ceiling?

Dxu1 commented 3 years ago

Thank you for sharing your exciting work, Dr. Stoica! You have demonstrated the importance of the diversity of network structure for efficiency using real-world data related to gender inequality. I am curious whether you have also observed similar effects using data related to other inequalities (e.g., race).

chrismaurice0 commented 3 years ago

Thank you for sharing your work with us! I am wondering what changes you think need to happen in the social influence space to account for the biased trends your research shows.

Leahjl commented 3 years ago

Hi Dr. Stoica, thank you for coming to our workshop! I'm curious about what actions could be made to reduce the inequities in social media.

yutianlai commented 3 years ago

Thanks for coming! I'm wondering how the study could be applied in industry.

MkramerPsych commented 3 years ago

Dr. Stoica,

Thank you for presenting your work to us! I definitely echo my cohort mates' question about efficiency as a target for increasing diversity in social networking. As I am not particularly familiar with the social network literature, I am also curious about the implementation of social networking research conducted at universities. Do you find that research conducted outside of the major social networks' R&D departments is actually employed to improve their algorithms? Are there ethical tensions between third-party researchers aiming for diversity and in-house researchers facing pressure from their companies?

MengChenC commented 3 years ago

Thank you for sharing this frontier research. I am really interested in bias in artificial intelligence. Bias sometimes comes from algorithms, while in many situations it comes directly from the data themselves. I am wondering how we can manage the trade-off between cleaning data/eliminating bias and capturing the underlying data structures/patterns of bias. That is, if our ultimate goal is to have "clean" (unbiased) data for analysis, we cannot identify the information in the underlying structure of societal data, so how can we adjust our analysis for this issue? Thank you.

sabinahartnett commented 3 years ago

Dr. Stoica- thank you for sharing and presenting your work!

I am hoping you can speak a bit more to some of the anticipated results of these studies and to user awareness of your findings. Do users recognize glass ceilings or algorithmic biases in the way you were able to observe them using these complex methods and models?

egemenpamukcu commented 3 years ago

Thank you for sharing your work. It has been shown several times that algorithms reflect real-world bias. Do you think we can look at these algorithmic biases as simple projections of our physical world into the digital domain that need to be addressed by real-life policies, or do you think combatting these algorithmic biases can significantly help in reducing real-life disparities? If so, can one go a step further and argue that reversing these algorithmic disparities, so that the digital world is biased the other way around, would help achieve social justice? It sounds problematic and impractical, but I would be interested to hear your thoughts.

TwoCentimetre commented 3 years ago

I do not get why we care about the diversity of a social network. I mean, if people all stay in a network with those who are similar to them or like them, there would be a lot more fun than being surrounded by those with different opinions or those who do not care about them. I think a diversified network is where arguments and hate speech start.

NaiyuJ commented 3 years ago

Thanks for sharing these wonderful works! The visualizations are very clear and straightforward. I'm quite curious how you would think about the trade-off when we take the profit-making perspective of most tech companies into account.

lyl010 commented 3 years ago

Thanks for coming, Dr. Stoica. Very interesting topic. I would like to know more details on how efficiency and diversity are defined and why efficiency and diversity could involve a trade-off in highly clustered and scattered communities. Thank you!

adarshmathew commented 3 years ago

Thank you for presenting your work, Dr. Stoica. I don't have much to ask yet, since the methods you provide are tempting me to go back and redo all the experiments and network measurements I've done for my thesis, and I don't want to encourage that just yet. Looking forward to your talk and how you went about your problem setup.

shenyc16 commented 3 years ago

Dear Dr. Stoica, thanks for presenting your inspiring work to us. The research on the intricate relationship between diversity and efficiency is of great significance. I am still trying to understand the definition of efficiency in your context, and I hope you can elaborate more on it in the talk. Also, I am wondering whether the bias of an AI mentioned in the paper is spontaneously produced by algorithms or deliberately designed by humans.

william-wei-zhu commented 3 years ago

Hi Dr. Stoica, Thank you very much for sharing your research with us. We look forward to your presentation.

romanticmonkey commented 3 years ago

Thank you so much for your presentation, Dr. Stoica! Do you think these sensitive features will soon be adopted by major social media platforms? If not, what do you think would be the hindrance?

kthomas14 commented 3 years ago

Thank you for sharing your research with us, Dr. Stoica! I would like to ask about the implications that you discussed in the second recommended paper about glass ceilings in social networks. I was wondering about what implications your findings may have on younger audiences. Do you see potential concerns regarding children's exposure to these biased AI algorithms?

YaoYao121 commented 3 years ago

Dear Ana-Andreea, thank you so much for your presentation! I really enjoy your research discussion. I think this is a very interesting problem and is closely related to our daily life. I am very curious about the relationship between diversity and efficiency. Could you elaborate more, please? Thanks!

lulululugagaga commented 3 years ago

Thanks for sharing! As social media platforms have different traits and serve different people, what adjustments do you think we should make when exploring network influence on different platforms?

MegicLF commented 3 years ago

Thank you so much for sharing your research with us. I am also curious about how you define "efficiency" and why you choose it in the first paper.

nwrim commented 3 years ago

Thanks for coming to our workshop and sharing your work! As many of my peers noted, I am curious how this research in the academic sector can influence the actual work in the private sector, where the problems and bias in social networks that you identified are happening in the wild.

YileC928 commented 3 years ago

Thank you, Dr. Stoica, for sharing your research with us. I hope to learn about the practical implications of your study. Looking forward to the workshop.

chun-hu commented 3 years ago

Thanks for coming. Could you tell us a bit more about the importance of diversity in social networks?

luyingjiang commented 3 years ago

Thank you for sharing. My question is about how we can reduce inequities in social media. Could you elaborate more on this, please? Looking forward to your presentation.

heathercchen commented 3 years ago

Thank you for your presentation! My question is: can diversity interventions be viewed as affirmative action on the Internet? If so, are there any limitations to this kind of method? Thanks!

boyafu commented 3 years ago

Thanks for sharing your research! I am interested in how industrial applications could play a role in reducing inequalities.

Yilun0221 commented 3 years ago

Thank you for the presentation, Dr. Stoica! I am excited about these topics. My question is, similar to what you mentioned in the paper, how can the conclusions be generalized to other data sets, and how can algorithms of higher complexity be developed when faced with new data sets?

RuoyunTan commented 3 years ago

Thank you for sharing your work with us. I agree that it is very important to mitigate the negative impact of algorithms. Could you comment on networking/communication apps like Clubhouse that went viral, and on how this reflects the formation of social networks and how people view them?

YanjieZhou commented 3 years ago

Thanks very much for your presentation! I have been delving into the concept of homophily in the area of consumer psychology and am very happy to see its wider applications in the social sciences.

ghost commented 3 years ago

What is the rationale behind data collection starting from the founder of Instagram?

mingtao-gao commented 3 years ago

Thank you for sharing your work! My question is related to how social network applications can be designed better to reduce inequalities discussed in your paper.

caibengbu commented 3 years ago

Thank you for your presentation! My question is: can diversity interventions be viewed as affirmative action on the Internet? If so, are there any limitations to this kind of method? Thanks!

Jasmine97Huang commented 3 years ago

Really looking forward to your presentation! I am interested in how content analysis can help demonstrate the effects you described with network data.

cytwill commented 3 years ago

Thanks for your presentation. I am actually quite interested in the algorithms you have put forward in the paper. In particular, some of the mathematical functions look quite complex, and I am wondering if you could provide some intuition behind these formulations. Also, I would like to know whether these algorithms are sensitive to the platform environment or the content where they are used, and how well these methods can balance business goals with disparity reduction, which would determine whether they could be generalized to real platforms.

JuneZzj commented 3 years ago

Thank you for presenting in advance. I am curious about the relationship between degree centrality and the greedy algorithm you applied in the paper. Since you mentioned that degree centrality provides an interesting approximation to greedy for a small probability p, could you explain this a little more?
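For reference (and to make sure I am reading the claim correctly), here is a toy Monte Carlo sketch of how I picture the comparison under the independent cascade model; the graph, seed size k, and probability p below are made up for illustration, not taken from the paper:

```python
import random
import networkx as nx

def spread(G, seeds, p, runs=200, rng=None):
    """Monte Carlo estimate of the expected number of nodes activated by `seeds`
    under the independent cascade model with activation probability p."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            newly = []
            for u in frontier:
                for v in G.neighbors(u):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        newly.append(v)
            frontier = newly
        total += len(active)
    return total / runs

def greedy_seeds(G, k, p, rng=None):
    """Standard greedy seeding: repeatedly add the node with the largest marginal gain."""
    seeds = []
    for _ in range(k):
        best = max((v for v in G if v not in seeds),
                   key=lambda v: spread(G, seeds + [v], p, runs=30, rng=rng))
        seeds.append(best)
    return seeds

rng = random.Random(1)
G = nx.barabasi_albert_graph(200, 3, seed=1)   # made-up graph, not the paper's data
k, p = 5, 0.05                                 # made-up seed size and small probability
by_degree = [v for v, _ in sorted(G.degree, key=lambda t: t[1], reverse=True)[:k]]
print("top-degree seeds:", spread(G, by_degree, p, rng=rng))
print("greedy seeds:    ", spread(G, greedy_seeds(G, k, p, rng=rng), p, rng=rng))
```

My (possibly naive) reading is that for small p the two seed sets should give similar spread, which is how I understand the approximation claim; I would love to hear where this intuition breaks down.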

xzmerry commented 3 years ago

The topic is really meaningful. Thanks for your presentation! I do think that in the near future, information segmentation and the underrepresentation of some groups will be a very important social problem (as you have mentioned, the algorithmic glass ceiling and reduced information diffusion based on demographic attributes). Looking forward to your presentation! I also hope that you could explain more about how message diffusion and the glass ceiling are measured.