uchicago-computation-workshop / Winter2022

Repository for the Winter 2022 Computational Social Science Workshop

02/17: Robert Axtell #6

Open ehuppert opened 2 years ago

ehuppert commented 2 years ago

Comment below with a well-developed group question about the reading for this week's workshop. Please collaborate with your groups on Hypothesis (via the Canvas page) to develop your question.

One person can submit on the group's behalf and put the group name in the submission for credit. Your group only needs to post on your assigned week (rotating every other week).

Please post your question by Wednesday 11:59 PM, and upvote at least three of your peers' comments on Thursday prior to the workshop. Everyone in the group needs to upvote! You need to use 'thumbs-up' for your reactions to count towards 'top comments,' but you can use other emojis on top of the thumbs up.

Raychanan commented 2 years ago

Group 1C: Yawei Li, Rui Chen, Val Alvern Cueco Ligo, Yutai Li, Max Kramer

Thank you in advance for sharing your amazing work! It truly showcases the power of large-scale computation for our research. My questions are mostly on your 2016 paper:

edelahayeUChicago commented 2 years ago

Group 1L: Xi Cheng, Elliot Delahaye, Hongxian Huang, Yutong Li

Many thanks for the fascinating paper, Professor; it's something I've been wanting to learn more about for a while! We have a few questions:

Firstly, as you rightly discuss at the beginning of your paper, economists tend to explain general fluctuations of the economy in terms of exogenous shocks (e.g. technology, government policy, weather), whereas your model generates these endogenously. What is the fundamental mechanism by which the equilibria fail to be stable here? Is it due to learning processes with fading memory? In your handbook chapter you mention combinatorial barriers to the fixed points being realised; is this to say that the fixed points exist but the computational algorithm makes them unlikely to be discovered?

The optimisation problem that you model agents as facing is static rather than dynamic (there is no choice to sacrifice current utility to improve future utility through saving or investment in capital or education). You discuss the computational intensity of running this simulation for 120 million agents; what does this indicate for more general models featuring dynamic investment decisions? Are they intractable for simulations of this size, or have ABMs been used for these as well?
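To make the distinction concrete, here is a rough sketch of what we mean (our own illustration, not code from the paper; the Cobb-Douglas utility and all parameter values are assumptions on our part). A static per-period effort choice is a single maximization, whereas allowing saving adds an asset state and requires solving something like a Bellman equation for every agent:

```python
import numpy as np

# Illustration only; the Cobb-Douglas utility over income and leisure and all
# parameter values below are our own assumptions, not taken from the paper.
theta = 0.6
u = lambda income, effort: income**theta * (1 - effort)**(1 - theta)

effort = np.linspace(0.01, 0.99, 99)

# Static problem: choose effort to maximize *current* utility only
# (income here is just proportional to own effort, as a stand-in).
best_static = effort[np.argmax(u(effort, effort))]

# Dynamic problem (what saving/investment would add): an asset state and a
# Bellman equation, solved e.g. by value-function iteration over an asset grid.
beta, r = 0.95, 0.03
assets = np.linspace(0.0, 5.0, 200)
V = np.zeros(200)
for _ in range(300):
    resources = (1 + r) * assets[:, None] + best_static   # wealth + income (income held fixed to keep the sketch short)
    consumption = resources - assets[None, :]              # each column is a choice of next-period assets
    payoff = np.where(consumption > 0,
                      u(np.maximum(consumption, 1e-12), best_static) + beta * V[None, :],
                      -np.inf)
    V = payoff.max(axis=1)

print(f"static effort choice: {best_static:.2f}")
print(f"value of zero assets under the dynamic problem: {V[0]:.3f}")
```

Even this toy version multiplies the per-agent work by the size of the asset grid and the number of value-iteration sweeps, which is roughly the tractability worry behind our question.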

Finally, on a more philosophical level, many pioneers of agent-based modelling have cast ABMs as outside the mainstream neoclassical approach to economics. Do you agree with this view, and if so, what do you see as the key differences? It seems that ABMs still follow the standard neoclassical approach of representing agents as utility-maximisation problems subject to constraints. Is the difference computational rather than analytical (i.e. creating software objects to run the simulation, rather than setting up the maximisation problems by hand and solving the resulting equations on a computer)?

JoeHelbing commented 2 years ago

Group 1k: Joseph Helbing, Hazel Chui, Jade Benson, Isabella Duan

When one is trying to model even simple systems of 120 million+ individual agents, the computational requirements are not trivial even given the state of modern computing. What is the balance between running multiple simulations and further fine-tuning while still maintaining robust results? What is the art of building agent-based models of this size? Given that smaller models produced different, non-representative results, it seems that simplifying on size is less useful than simplifying on agent complexity. Is that something you would expect as a general rule, or is it specific to this experiment?
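To put rough numbers on "not trivial" (purely our back-of-the-envelope; the per-agent state size, activation rate, and run length are invented for illustration):

```python
# Back-of-the-envelope only; every number below is an assumption for illustration.
n_agents = 120_000_000
bytes_per_agent = 6 * 8        # say ~6 double-precision state variables per agent
activations_per_period = 1     # each agent reconsiders its choices once per period
periods = 500                  # assumed length of one simulated run

memory_gb = n_agents * bytes_per_agent / 1e9
updates_per_run = n_agents * activations_per_period * periods

print(f"~{memory_gb:.0f} GB of raw agent state")         # ~6 GB before any bookkeeping
print(f"~{updates_per_run:.1e} agent updates per run")   # ~6e10 decision evaluations
```

Multiplying this by the number of replications needed for robustness is where our question about balancing additional runs against further fine-tuning comes from.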

borlasekn commented 2 years ago

Group 1E: Kaya Borlase, Shuyi Yang, Zoey Jiao, Xin Li

I am excited to see that your model manages to recreate multi-firm dynamics for an entire market. Analytical setups are usually confined to maximization problems involving fewer than ten entities, while your model simulates the entire US economy with millions of agents, and many of the simulation results coincide with real-world data. This is the truly impressive part; it speaks to the robustness of the results and suggests the approach could be used to analyze many actual economic cases. I am curious how the model might be improved to match reality even more precisely. For example, certain features of the model, such as many of the largest firms being relatively young, reflect well the reality of rapidly rising new-economy startups. Considering that you calibrated only about a dozen parameters, such accuracy is somewhat astonishing to me. Were you tuning the model purposefully to match the empirical findings, or did the model simply generate all these matches as a consequence of its design?

hsinkengling commented 2 years ago

Group 1H: Yuetong Bai, Boya Fu, Zhiyun Hu, Hsin-Keng Ling

Thank you Prof. Axtell for sharing your work with us.

I find it really cool that the simulation is able to reproduce charts and graphs of empirical data with good accuracy. My question is: given that designing a simulation requires some level of theoretical intuition, which usually arises from familiarity with the data beforehand, is the notion of "over-fitting" ever a concern in simulation research? In other words, is the ability to fine-tune simulation models to fit empirical data considered an asset (building better models) or a liability (precluding meaningful hypothesis testing)?

sdbaier commented 2 years ago

Group 1J: Silvan Baier*, Lynette Dang, Sabina Hartnett, Yingxuan Liu

The 2016 AAMAS paper impressively demonstrates on multiple occasions how closely clean, comprehensible ABMs can reproduce convoluted, messy empirical data. How do you keep such research from being labelled as self-evident, with conclusions supposedly clear from the beginning, by journal reviewers and, more generally, by scholars outside the computational modelling community?

“Note that no internal structure exists for the firms in our model, since they are simply collections of agents […]” (Axtell, 2018)

How would you operationally incorporate hierarchy into your model (see, e.g., James March's 1991 OrgSci paper), and how would it change the outcomes of the analysis? Was the decision to forgo a multi-level, even more computationally expensive model and instead model organizations as collections of agents driven by simplicity?

javad-e commented 2 years ago

Group 1F: Javad, Sudhamshu, Fiona

Thank you, Prof. Axtell, for sharing your research on these new steady-state models. We had some questions about the agent-based modeling conducted in this project. As some of us are interested in using similar models in our future research, could you please provide some technical details about how the modeling process works, in particular the statistical aspects and the software used? Moreover, we were wondering whether researchers can include a larger number of features in this process. Besides effort levels, one could imagine moving patterns differing across industries and geographical contexts. An individual's utility function could also depend on other variables such as type of work, emotional characteristics, firm size, and demographics. Is agent-based modeling capable of incorporating such variables?
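To make the question concrete, here is the kind of minimal skeleton we have in mind (our own toy sketch in plain Python, not your code; the utility form, sharing rule, production function, and activation scheme are all our assumptions):

```python
import random

# A toy firm-formation ABM, for discussion only. The Cobb-Douglas utility,
# the equal-sharing rule, and the increasing-returns production function
# are assumptions on our part.

class Agent:
    def __init__(self, theta):
        self.theta = theta                        # taste for income vs. leisure
        self.effort = random.uniform(0.1, 0.9)
        self.firm = None

    def utility(self, income, effort):
        return income ** self.theta * (1.0 - effort) ** (1.0 - self.theta)

class Firm:
    def __init__(self, members=()):
        self.members = list(members)

    def output(self, total_effort=None):
        e = sum(a.effort for a in self.members) if total_effort is None else total_effort
        return e + e ** 2                         # increasing returns (assumed)

def step(agents, firms):
    """One sweep: each agent re-optimizes its effort, then considers switching firms."""
    grid = [i / 20 for i in range(1, 20)]
    for a in random.sample(agents, len(agents)):
        firm = a.firm
        others = sum(m.effort for m in firm.members) - a.effort

        # 1. choose the effort level maximizing current utility, holding
        #    co-workers' effort fixed, with firm output divided equally
        def payoff(e):
            return a.utility(firm.output(others + e) / len(firm.members), e)
        a.effort = max(grid, key=payoff)

        # 2. move to a randomly sampled other firm if joining it would raise utility
        cand = random.choice(firms)
        if cand is not firm and cand.members:
            new_share = cand.output(sum(m.effort for m in cand.members) + a.effort) \
                        / (len(cand.members) + 1)
            if a.utility(new_share, a.effort) > payoff(a.effort):
                firm.members.remove(a)
                cand.members.append(a)
                a.firm = cand

# tiny demo: everyone starts as a singleton firm
agents = [Agent(random.random()) for _ in range(200)]
firms = [Firm([a]) for a in agents]
for a, f in zip(agents, firms):
    a.firm = f
for _ in range(50):
    step(agents, firms)
sizes = sorted((len(f.members) for f in firms if f.members), reverse=True)
print("ten largest firm sizes:", sizes[:10])
```

Our question is essentially about everything this sketch glosses over: the statistical machinery for fitting the free parameters, the software and hardware needed to run it with 120 million agents rather than 200, and whether richer state (industry, geography, demographics, emotional characteristics) can be added without losing tractability.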

ChongyuFang commented 2 years ago

Group 1A: Angelica Bosko, Chongyu Fang, Frederick Zheng He, Yier Ling

Dear Professor Axtell,

Thank you so much for presenting your research to us today. Nowadays, using micro-level data and computational methods to explain macro-level outcomes is becoming increasingly popular, and your research is really insightful and interesting.

The most eye-catching feature of your model, to economists and the existing literature, is perhaps the absence of exogenous shocks. However, I have a question about how this kind of model can inform macroeconomic regulation and public policy. The model uses only a heterogeneous-agents setting with a set of individual-level parameters and behavior rules, but the individual parameters are sometimes not fully observable in the real world. Moreover, there seems to be no way to intervene in the economy, i.e. we cannot apply shocks to reach our desired outcomes. Could you please explain how government behavior could be incorporated into this model?

bowen-w-zheng commented 2 years ago

Group 1I: Yu-Hsuan Chou, Bowen Zheng, Jasmine Huang, Jingnan Liu, Yile Chen

Thank you for sharing your interesting work! Our first two questions are about the details of the model and methodology, the third is about balancing realism and tractability, and the last is about comparison with theoretical models.

(1) Are the functional forms, parameters, and initial conditions tuned a priori, or are they tuned based on empirical data? To what extent are the results sensitive to the parameters and initial conditions? Can we do perturbation analysis like introducing some shocks to the system? Will the response also match empirical data?

(2) In this paper, the internal function of agents and dynamic rules are specified explicitly. Could we use a graph neural network with economics-informed inductive biases to learn the dynamics from empirical data and extract the internal functions a posteriori for analysis?

(3) In the paper, you've shown that the properties of firms can be explained with a relatively simple model specification. Under this specification, firms are simply teams of agents (workers), and people in the same firm share the output equally. That seems relatively unrealistic for large firms. Would compensating more productive people more generously bring the model closer to the real world? (A sketch of the two sharing rules we have in mind follows after these questions.) Was this kind of assumption set aside out of concern for computational complexity? How do we analyze the resulting network? Can we use a mean-field approximation to get explicit analytical governing equations? In a broader sense, how do you think about balancing the realism of the simulation against computational and analytical tractability?

(4) What are the merits of ABM compared to theoretical models? How can we combine the two so that the results cross-validate each other, and how should we interpret the results when the simulation does not match the theory?
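To make question (3) concrete, these are the two compensation rules we are contrasting (our own sketch; the equal-division rule is how we read the paper, the proportional rule is the variant we are asking about):

```python
def equal_share(output, efforts, i):
    """Every member receives the same fraction of firm output."""
    return output / len(efforts)

def proportional_share(output, efforts, i):
    """Member i is paid in proportion to their own effort contribution."""
    return output * efforts[i] / sum(efforts)

efforts = [0.2, 0.5, 0.9]   # three workers with very different effort levels
output = 10.0
print([round(equal_share(output, efforts, i), 2) for i in range(3)])         # [3.33, 3.33, 3.33]
print([round(proportional_share(output, efforts, i), 2) for i in range(3)])  # [1.25, 3.12, 5.62]
```

Under equal division the high-effort worker subsidizes the others, which presumably is the free-riding pressure driving firm dynamics in the model; our question is what changes, computationally and in the emergent firm-size distribution, once pay tracks effort.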

jiehanL commented 2 years ago

Group 1M: Jiehan Liu, Partha Kadambi, Peihan Gao, Shiyang Lai, Zhibin Chen

Thank you, Prof. Axtell, for sharing such novel research! Our question mainly pertains to your 2016 paper. Conventionally, neoclassical economics assumes perfectly rational agents and treats the economy as necessarily being in equilibrium, whereas agent-based computational economics drops the assumptions of equilibrium and rationality. Since these two analytic approaches differ in their most basic assumptions and preliminary conditions, does complexity economics lead to different policies than the ones neoclassical economics advocates? To be more substantive: policy derived from neoclassical economics typically adjusts incentives, such as taxes, regulations, and quotas, to reach the desired outcome. In general, how should policy change when agents' endogenous differences in religious attitudes, responses, etc. are taken into consideration?

cgyhumble0612 commented 2 years ago

Group 1B: Guangyuan Chen, Pranathi Iyer, Yuxuan Chen, Qishen Fu

Dear Professor, thank you for showing us your work with so many computational results. Our questions mainly focus on the contents of the Handbook chapter.

  1. In the execution of the model, you describe looping over 120 million agents. We are curious how heavy the computational burden of this process is, and how long it takes to complete. Why not use Monte Carlo simulation or a bootstrapping method instead? What are the pros and cons of these approaches?
  2. In Section 3.2, you illustrate the formation of a steady-state equilibrium between agents and firms. How do you think innovation plays a role in this formation process? We can still find many companies that survive for a very long time; should this be incorporated into the model, or should such firms simply be classified as outliers?
yjhuang99 commented 2 years ago

Group 1D: MengChen Chung, Zhihan Xiong, Zixu Chen, Yujing Huang, Feihong Lei

ABM is a miniature of human societies and behaviors, yet it is still a simplified modeling procedure that does not capture all the details and linkages of real-world phenomena. We tend to assume that ignoring these details will not massively affect our inference. In your simulation, which details do you think would be computationally infeasible to incorporate even though they still play a role in the real world, so that you chose to simplify them away? What caveats do we need to keep in mind when interpreting this model?

YLHan97 commented 2 years ago

Group 1G: Tian Chen, Tanzima Chowdhury, Yulun Han, Qihui Lei

Hi Professor Axtell,

Thanks for sharing your research with us. We are very interested in your research and hope to learn more about it during the workshop. Our group has two questions, as follows:

In the article “120 Million Agents Self-Organize into 6 Million Firms: A Model of the U.S. Private Sector”, you describe an agent model of 120 million agents representing the US private sector. When agents are free to join coalitions in which they are made better off, a steady-state distribution of coalitions results. What happens when the distribution of coalitions is not at a steady state? Also, how does the model apply in the real world? Could you please provide more details?
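To be concrete about what we mean by "not at a steady state" (this is our own interpretation): individual coalitions keep forming and dissolving, but at a steady state the *distribution* of coalition sizes stops changing. A crude check of that property might compare size distributions from two snapshots of a run:

```python
from collections import Counter

def size_distribution(coalition_sizes):
    """Fraction of coalitions at each size."""
    counts = Counter(coalition_sizes)
    total = sum(counts.values())
    return {size: n / total for size, n in counts.items()}

def roughly_stationary(sizes_t1, sizes_t2, tol=0.01):
    """True if the two size distributions differ by less than tol at every size."""
    d1, d2 = size_distribution(sizes_t1), size_distribution(sizes_t2)
    support = set(d1) | set(d2)
    return max(abs(d1.get(s, 0.0) - d2.get(s, 0.0)) for s in support) < tol

# hypothetical coalition-size snapshots taken some periods apart
print(roughly_stationary([1, 1, 2, 3, 5, 8], [1, 2, 2, 3, 5, 9]))
```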

DehongUChi commented 2 years ago

Group 1N: Henry Lin, Dehong Lu, Alfred Chao, Naiyu Jiang, Qiuyu Li

Thank you for choosing to come and share your research with us; this paper truly opens our eyes to new and exciting approaches to exploring traditional topics in economics. We have several questions:

  1. In your research, the simulation consists of 120 million agents, and the computational power required to run it is therefore huge. Do you think reducing the number of agents in exchange for the computational headroom to add more variables of interest, such as saving and investment, would be a favorable approach?
  2. What do you think are the limitations of the ABM used in this paper? What are some of its tradeoffs in comparison with traditional models? Which aspects of this model do you think can be improved, and what would you add to the model if more computational power were available? Thank you very much.