uchicago-computation-workshop / Fall2019

Repository for the Fall 2019 Workshop

11/07: Gary King #9

smiklin opened this issue 5 years ago

smiklin commented 5 years ago

Comment below with questions or thoughts about the reading for this week's workshop.

Please make your comments by Wednesday 11:59 PM, and upvote at least five of your peers' comments on Thursday prior to the workshop. You need to use 'thumbs-up' for your reactions to count towards 'top comments,' but you can use other emojis on top of the thumbs up.

rkcatipon commented 5 years ago

Dr. King, thank you for sharing your research and taking the time to speak to our program! I enjoyed reading your articles. I'd like, however, to ask a few questions beyond these articles about the larger state of computational social science research and social networks.

Given your model of research partnerships with Facebook, and the acquisition of your company Crimson Hexagon by Brandwatch, what are your views on the increasing Congressional scrutiny of social media platforms? From a researcher's perspective, do you agree or disagree with the recent calls to break up these corporations? What effects do you think these legislative efforts will have on academic research conducted on these platforms? For example, do you foresee a potential chilling effect, or increased collaboration?

Yilun0221 commented 5 years ago

Thanks for sharing, and I'm really looking forward to seeing Dr. King this week! I think gerrymandering is a very interesting topic in politics, and I have a few questions about it after reading the articles.

I believe that algorithms and other computing technologies can promote fairness in political life, but the criteria for drawing constituency boundaries are difficult to formulate with statistical methods or theories alone; they must also incorporate political notions of fairness. How can we quantify a standard of fairness across different regions? How can we develop an algorithm that is fair by both statistical and political-science standards, yet easy to operate in real life? And if the principles of political science conflict with the principles of statistics or computer science at some point in the algorithm's development, which should take precedence?

tonofshell commented 5 years ago

Both your papers focus on how gerrymandering works within the context of a two-party government. How might the results of gerrymandering differ in the case of a multi-party government? Do you think that a democratic system that promotes competition between multiple parties would result in greater equity when it comes to redistricting?

romanticmonkey commented 5 years ago

Thank you so much for giving us this talk. I am very impressed by your method for studying such a seemingly unquantifiable phenomenon as compactness. I am curious, though, whether you think your approach to compactness can also be applied to other instances of Polanyi's Paradox, for example, recognizing a friend's face. Generally speaking, how do you think we should approach explaining phenomena "we know more than we can tell" so that we can operationalize the question successfully?

wanitchayap commented 5 years ago

Thank you in advance for your talk! I have a specific question about the method in Kaufman et al. (2019). I agree with the paper that full ranking is better than paired comparisons here. However, the full ranking is only capped at n = 100. Do you think we can trust full rankings by humans when n is larger than 100? I agree that "heuristics and intuitions are strong enough" (12) in this case, but I suspect they will not be enough for larger n because of cognitive constraints. If you think this is a valid problem, how would you get around it?

goldengua commented 5 years ago

Thank you for your inspiring papers. I enjoy the way you measure human perception of compactness by separating the quantity of interest from its measures in Kaufman et al. (2019), and I like the idea of calculating the first principal component of the rankings for 100 districts. My concerns are similar to @wanitchayap's:

  1. How can we demonstrate that n=100 is a reasonable number to choose?
  2. Can we treat it as a hyperparameter that we adjust on a development set to find the best possible n? (A rough sketch of this idea follows below.)
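As a rough illustration of the second point, here is a minimal sketch in Python (with simulated rankings rather than the paper's data or pipeline) of extracting the first principal component from a respondents-by-districts ranking matrix for a few hypothetical values of n:

```python
# A minimal sketch (simulated rankings, not the paper's data or pipeline):
# build a respondents-by-districts ranking matrix and take its first
# principal component, repeating for a few hypothetical values of n.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def rank_pca(n_respondents, n_districts):
    """Simulate rankings that share a latent order plus noise, then fit a one-component PCA."""
    latent = rng.permutation(n_districts).astype(float)          # shared latent ordering
    noisy = latent + rng.normal(0, n_districts / 10, (n_respondents, n_districts))
    ranks = noisy.argsort(axis=1).argsort(axis=1) + 1            # each row is a 1..n ranking
    return PCA(n_components=1).fit(ranks)

for n in (20, 50, 100):                                          # treating n as a tunable knob
    pca = rank_pca(n_respondents=100, n_districts=n)
    print(f"n={n:3d}  variance explained by PC1: {pca.explained_variance_ratio_[0]:.2f}")
```

The simulated numbers are only a stand-in; with real respondents one would compare reliability or held-out fit across candidate values of n.
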
hesongrun commented 5 years ago

Thank you so much for the cool paper! It is truly inspiring to see the power of quantitative and computational social science, which provides us with bold and powerful tools to tackle the greatest challenges facing our society. My question is quite general: how do you pick your research direction, and how do you identify the research area or niche where computational social science can bring about significant improvement over the status quo? Are there any good principles for computational social scientists to follow? Thank you so much!

boyangqu commented 5 years ago

Thanks for the presentation! This research topic is really inspiring and innovative, and it offers me new insight into computational social science. How do you see this research applying to real-world governments, and how can it benefit people? Are there any possible future directions in both research and application?

nwrim commented 5 years ago

Thank you in advance for the great presentation, and thank you for such an interesting read! Similar to @romanticmonkey's comment, I am really impressed by the attempt to model and quantify something considered to be in the realm of "human intuition" (compactness, or "you know it when you see it").

I guess my question is a more specific instance of @boyangqu's question, since I want to ask how this research could be applied to real-world politics. Specifically, I think that for a method or statistical model to be applied to real-world politics (such as being incorporated into a law or bill), it has to make sense to the general population. However, as you mention in Section 5 of Kaufman et al. (2019),

none of existing measures, and no measure we could find, offer a simple geometric representation for what humans know when they see. (p. 22)

I think this implies that the method described in the paper will be very hard for the general public to comprehend. Indeed, I think most people (including me) would have a hard time comprehending

an ensemble of predictive methods (with these data), consisting of least squares, AdaBoosted decision trees, support vector machines, and random forests. (p. 15)

So my question is: do you think seemingly convoluted models such as the one in this paper could be accepted by the general public and used in the real world (for example, your method being used in the redistricting process)? Or alternatively, do you think this kind of model could be applied to the real world without making people understand the mechanism underneath? Of course, I know there are many technologies we use today whose underlying mechanisms we will never understand - I use GPS without knowing much about relativity. But somehow this feels different to me, since I have seen many cases where people do not accept or believe in, and are sometimes even offended by, attempts to quantify "human intuition."

Thank you again for coming to our workshop! I really look forward to the presentation.

ChivLiu commented 5 years ago

Thank you for such a wonderful presentation! I am currently working on a project that uses area medical treatment data to estimate the health level of people living in those areas. The statistical part uses regression to predict the coverage of medical services and people's average health condition. However, our group has received objections from some traditional scholars who don't think a statistical model can explain the complexity of an area's social development. Why do you firmly believe that modern statistical models can predict the influence of social factors such as poverty and racism? Or why do you consider machine learning models reliable enough to understand human behavior? Thanks again!

ydeng117 commented 5 years ago

Thank you for your presentation. This research provides thrilling findings showing that statistical models can estimate ever more conceptual and complex ideas. How will this change the landscape of social science research? Does it mean that we will have a cheaper and possibly more accurate way to replace qualitative methods of studying people's thoughts? Moreover, to what extent can we extend the frontier of social research? Can big data and machine learning make further inferences about our subconscious?

hihowme commented 5 years ago

Thank you so much for your presentation in advance! Your research, which brings computational methods and statistical models into social science, is truly significant in today's world, and such models and graphics could help many areas of social science research. I have a general question: what is the proper scope of quantitative methods in social science research? Are computational methods universal across all disciplines, or are there areas where using them would introduce bias or run into ethical problems?

timqzhang commented 5 years ago

Thank you for your presentation. Gerrymandering is a quite inspiring topic in politics. Here I have two questions:

  1. I believe computational methods could be a powerful tool for extending the compactness measure in the U.S., but what about similar notions in other political systems? I know that would be many steps further, but considering the great power of data, it would be even more inspiring to come up with a more generalized model that explains multiple settings.

  2. Beyond the concept of compactness, what other notions do you think are still quite vague and in need of a more explicit definition? They need not be political notions; any topic that could be tackled via computational methods would be interesting.

lulululugagaga commented 5 years ago

Thanks for the presentation. We're so excited about your visit and your sharing of this work. Since Dr. Gary King is among the first researchers to put forward the idea of combining computational methods and social science, I will post more general questions this time. Do you see difficulties, challenges, or limitations in terms of people, research fields, and other resources for quantitative social science? How may we overcome such difficulties and develop computational social science into a broader subject?

mingtao-gao commented 5 years ago

Thank you in advance for the presentation, Dr. King! The idea of using computational methods and statistical models to measure human judgments of compactness is very fascinating. One interesting point is your conclusion in the paper that the measure reflects the underlying viewpoint about the concept of compactness held by everyone from educated Americans to public officials, judges, and justices, which is a great point to discuss. The final statistical model correlates best with the judgments of a particular subgroup (people who have at least basic knowledge of gerrymandering and legislative districts); in Figure 6, the MTurk data do not correlate well with the other respondent groups.

Thus my question is: as social science researchers, how can we make sure the models we generate are representative enough of the general population when handling human perceptions and judgments, so as to eliminate bias? Nowadays, many algorithms embed underlying biases that come from the group that generated the data in the first place.

RuoyunTan commented 5 years ago

Thank you so much for the talk. Your research on measuring legislative district compactness is really inspiring to me, and I am particularly interested in the research methodology. For a qualitative social science question like this one, how do you build a framework that sets the assumptions and requirements for new measurement models?

For example, I find it interesting how humans' sensitivity to the rotation of images adds a requirement to your new model. As I read your paper, I tried to think about how I would approach this topic and design a model, and I found this "sensitivity to rotation" difficult to think of. Among the countless features of the human mind when it comes to processing images, I found it difficult to search through them and decide whether or how to link any of them to the model we aimed to build.

So could you share some thoughts on how to build the research framework? And for students, could you give us some advice on this matter to help us better develop our social science research intuitions and skills?

luxin-tian commented 5 years ago

Thank you for your visit and presentation. I am quite inspired by your work, as it not only exploits the increasingly powerful computational toolkits for social science research but also introduces me to a new field of research. I would like to ask a broader question beyond political science: how can computational social science contribute to spreading knowledge inclusively to the general public? Is there a way in which computational methodologies can, while empowering academic research, bring the social sciences out of academia and benefit everyone's life?

ShuyanHuang commented 5 years ago

Thanks for presenting! In the Interpreting section of your 2019 paper, you list several of the most predictive features for compactness, which can be seen as the key factors people consider when judging compactness. I guess you found these features by calculating feature importance. I am wondering whether we could go a step further by estimating a global surrogate decision tree, so that we can mimic the procedure of people's decision-making.
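To make the suggestion concrete, here is a minimal sketch (synthetic features and scikit-learn, not the paper's actual features or pipeline) of fitting a black-box model and then training a shallow surrogate tree on its predictions:

```python
# A minimal sketch of the global-surrogate idea (synthetic data, not the
# paper's features or pipeline): fit a black-box model, then fit a shallow
# decision tree to the black box's predictions so the splits can be read off.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))                                  # stand-in geometric features
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)     # stand-in compactness scores

black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's predictions, not on y itself.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how well the tree mimics the black box (not how well it fits y).
print("surrogate fidelity R^2:", r2_score(black_box.predict(X), surrogate.predict(X)))
print(export_text(surrogate, feature_names=[f"feat_{i}" for i in range(5)]))
```

The printed tree gives a rough, human-readable approximation of how the black box combines features, which is the kind of interpretable decision procedure I have in mind.
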

Leahjl commented 5 years ago

Thank you in advance for your visit and the great presentation. The idea of quantifying "compactness" is quite enlightening! In the paper, you note that compactness is defined by the judgment of human observers and supported by the common understanding of most educated people. In Section 3, you measure this concept by eliciting respondents' views of the compactness of specific districts. Is there any bias in your respondent selection?

since compactness in the law is, for all practical purposes, defined by the judgment of human observers — including redistricters, experts, consultants, lawyers, judges, public officials, and ordinary citizens — the claim of an objective standard, measured on a single dimension, can only be supported if most educated people evaluated a district’s compactness in the same way.

chiayunc commented 5 years ago

Thank you for presenting your wonderful work. I have some questions about the design of the branches of government.

My question is: from the point of view of the branches of government, who do you think is in the ideal position to provide the standard of compactness? Do you think the judicial branch is the ideal actor to provide such standards, in terms of checks and balances and also the counter-majoritarian difficulty?

If it were up to the judiciary to determine the standards, doing so could be as much about solving the problem of ambiguity as about providing a road map for politicians to circumvent or manipulate it; being clear is not always the best option for a court. In Vieth v. Jubelirer, Justice Scalia's plurality opinion did point out that

Consider, for example, a legislature that draws district lines with no objectives in mind except compactness and respect for the lines of political subdivisions. Under that system, political groups that tend to cluster (as is the case with Democratic voters in cities) would be systematically affected by what might be called a ‘natural’ packing effect.

Do you think this goes to show that the judiciary understands the complexity and the nature of the idea of compactness, but is not in the ideal position to act?

It seems that we are in a predicament: if the legislature rather than the judiciary provides the standard, it certainly has democratic legitimacy, but it may backfire, since politicians can draw districts based on their own interests.

Kaufman et al. (2019) conclude that there is an underlying understanding of compactness after all. What is the best way for that understanding to be implemented? Is our governmental setup lacking the conditions to crystallize the meaning of compactness from this universal understanding? Since Davis v. Bandemer established that partisan gerrymandering is justiciable, could "knowing it when seeing it" actually be an acceptable approach?

SoyBison commented 5 years ago

Thanks for coming to speak with us! My question has to do with the idea of establishing mathematical compactness. I wonder whether the statistical properties of the distances from the edge of a territory to its center of mass might be informative. It is fairly well documented that people have a surprising intuition for properties like symmetry and homoskedasticity, and perhaps this is related to the compactness intuition. A rough sketch of what I have in mind is below. Thanks again!
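For concreteness, a small sketch of the statistic I have in mind (toy shapes and shapely, not real district geometries) could look like this, where a lower coefficient of variation of boundary-to-centroid distances corresponds to a more circle-like shape:

```python
# A minimal sketch of a boundary-to-centroid statistic (toy shapes, not
# real districts): sample points along each polygon's boundary and
# summarize their distances to the centroid.
import numpy as np
from shapely.geometry import Point, Polygon

def centroid_distance_stats(poly, n_samples=200):
    """Mean/std/CV of distances from evenly spaced boundary points to the centroid."""
    boundary, centroid = poly.exterior, poly.centroid
    ds = np.array([
        boundary.interpolate(t, normalized=True).distance(centroid)
        for t in np.linspace(0, 1, n_samples, endpoint=False)
    ])
    return ds.mean(), ds.std(), ds.std() / ds.mean()

circle_ish = Point(0, 0).buffer(1.0)                       # compact shape
sliver = Polygon([(0, 0), (10, 0), (10, 0.5), (0, 0.5)])   # elongated shape

for name, shape in [("circle-like", circle_ish), ("sliver", sliver)]:
    mean, std, cv = centroid_distance_stats(shape)
    print(f"{name:12s} mean={mean:.2f} std={std:.2f} cv={cv:.2f}")
```
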

YuxinNg commented 5 years ago

Thanks for the readings; I'm really looking forward to the presentation tomorrow. I am inspired by your work on quantifying human perception. I did something similar a year ago, trying to quantify people's expectations in the stock market, and I spent months struggling to find a "perfect" way of doing it. Though I finally found a method, I am not very satisfied with it. As you mention in your papers for this "compactness" problem, "academics have shown that compactness has multiple dimensions and have generated many conflicting measures." It seems that this kind of problem is hard, and scholars tend to take different approaches to the same problem. So my questions would be: 1. Is there an optimal approach for a question like this? 2. When facing this kind of human-perception quantification problem, is there any general method for cracking it? I guess the answer to both questions is no; in that case, I would like to hear about your experience approaching problems like this. Thanks!

MegicLF commented 5 years ago

Thank you so much for your presentation! Your work inspires me on how to apply computational power in social science. Like the statistical model you use to quantify the compactness of legislative districts, computational models can contribute greatly to new understandings in social science. In this project, how do you balance the "quantitative" and "qualitative" aspects of compactness? Also, what is important to be aware of when applying statistical tools in social science? Thanks!

KenChenCompEcon commented 5 years ago

Thanks for the interesting presentation! As I learned from your paper, being able to develop such a powerful predictor can help deter gerrymandering. I am also curious about what kinds of attributes best define the internal compactness of a legislative district. Is it ideological coherence or socioeconomic homogeneity? And to what extent should political inconsistency be taken into account when drawing a legislative district?

yongfeilu commented 5 years ago

Thanks for the excellent presentation! It's really exciting to see how computational methodologies can be used to improve the fairness of political life. My question, however, is how we can ensure that if this system is built and applied in practice, it will not be manipulated by powerful figures, given that many people cannot understand the mechanism underneath. Furthermore, do you have any ideas on how to protect users' privacy in this system? If such privacy issues are not dealt with properly, the system could become a tool that exerts a terrible influence on participants' lives.

ruixili commented 5 years ago

Thanks for the presentation! This research topic is really interesting and meaningful, and it offers me new insight into computational social science. Other than measuring the seemingly unquantifiable "compactness," what are the other possible applications of this work? How can it benefit people?

bakerwho commented 5 years ago

Thanks for joining us at the workshop. As a quantitative social scientist, I'm immediately both intrigued by and skeptical of new ways of operationalizing soft concepts, and I did appreciate the rigor and methodology of your work.

I recently encountered the compelling idea that a measure is defeated the moment it is used as a standard to be improved upon. An excellent case in point is that of the GDP/GNP, a measure that should ideally be a crude approximation for the value of production, but that is now treated as a metric of success or failure in achieving production (well articulated by this Economist piece). The rhetoric develops into 'the economy is great because the GDP is up' as opposed to 'the GDP is up and that's probably better than it being down'.

Very generally speaking, do you see problems with metrics (including ones you yourself develop) being used to motivate rather than measure; to game rather than interpret?

anqi-hu commented 5 years ago

Thank you for sharing your thought-provoking research methods with us and introducing us to the idea of developing quantitative measures of highly qualitative concepts to represent human perception. As much as I believe this method should be generalizable to many other cases and to equally qualitative fields, it would be surprising if there weren't exceptions. Out of curiosity, what are some instances where the difficulty of quantifying a theoretical concept is simply insurmountable? In those cases, what approaches can data scientists take to work around it?

bjcliang-uchi commented 5 years ago

Thank you for the presentation. I find your research especially interesting: instead of trying to present an "indifferent" statistical perspective, you use algorithms to predict human perceptions, for example, the fact that humans' decisions are not rotationally invariant.

I have two questions: 1) Has this algorithm for measuring district compactness actually been used in any political or legislative context, or is there any plan for such an application? 2) Currently, the compactness in your model is based purely on geographic features. However, in reality, even if we disregard "soft features" such as local culture and partisanship, other "hard features" matter to our decisions: landscapes like forests and deserts, as well as transportation, might also affect people's intuitions about compactness. Do you think the survey results would be different if you gave respondents satellite maps of these districts rather than their shapes alone?

policyglot commented 5 years ago

Dr. King, you touched on a key dimension of engagement levels for the human-in-the-loop in text-based studies.

Third, if it is possible for a survey respondent to rank (say) 20 districts without much trouble, then we can save considerable time by administering this one engaging survey task rather than having to ask 190 tedious paired comparisons for each respondent.

You later concede:

this engagement was unnecessary since it did not increase inter- or intracoder reliability

What trends do you foresee in designing mechanisms where engagement proves critical for reliability? For example, when non-experts independently tag images for malaria infection or help unravel the human genome's structure.

JuneZzj commented 5 years ago

Thanks for the interesting papers and talk. It is really inspiring that you provide an approach to understanding a relatively qualitative concept through its geometric aspects. I am interested in how you connected your measurements to the question you are trying to solve. In addition, you spend a few paragraphs discussing the method you used: instead of paired comparisons, you introduce a new method of ranking all items at once. In what other research areas do you think this new method could be applied? What are the implications of the good performance of this model? Thank you!

liu431 commented 5 years ago

Thank you for the talk. The current prediction method is an ensemble of least squares, AdaBoosted decision trees, support vector machines, and random forests. Why and how did you decide to use these models? Would neural networks be more predictive in this case?
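For reference, a rough scikit-learn sketch of that kind of ensemble (the paper's exact specification, tuning, and weighting are not reproduced here) might look like the following:

```python
# A rough sketch of the kind of ensemble the paper describes (least squares,
# AdaBoosted trees, SVM, random forest); synthetic data, illustrative only.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))                          # stand-in geometric features
y = X @ rng.normal(size=6) + rng.normal(0, 0.2, 300)   # stand-in compactness scores

ensemble = VotingRegressor([
    ("ols", LinearRegression()),
    ("ada", AdaBoostRegressor(random_state=0)),
    ("svm", SVR(kernel="rbf", C=1.0)),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
])

scores = cross_val_score(ensemble, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", round(scores.mean(), 3))
```

A neural network could be dropped into the same cross-validation comparison, which is essentially what my question is asking about.
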

WMhYang commented 5 years ago

Thank you very much for your presentation in advance. I really enjoyed reading your work, which I found inspiring and fun. In Kaufman et al. (2019), you successfully created a statistical model that can predict the degree of compactness of districts from geometric features alone. My questions are as follows:

  1. How successful could we be, so far, if we used your single-dimension predictions in the redistricting process to prevent gerrymandering?

  2. Is there a possibility that we could make use of data visualization to incorporate more dimensions into the predictions? For example, we could visualize the population density of a district by drawing a scatterplot inside the shape of that district. If so, would it help improve your model?

linghui-wu commented 5 years ago

Thank you in advance! I enjoyed reading these papers and found them enlightening. As @timqzhang has mentioned, apart from compactness there are still many concepts that need to be defined. My question is a general one: when we come up with a new definition in our own words, how can we persuade readers to accept it, and how can we make sure they perceive exactly what we intend to convey? Looking forward to the presentation tomorrow!

ziwnchen commented 5 years ago

Thanks for the presentation! I am not familiar with the concept of compactness, but it seems most of the literature, including the paper you gave us, measures compactness from district shape alone. Specifically, your paper employs an interesting method that explores human perception of the geometric shapes of legislative districts. About this kind of measure, I have the following question:

You mention that the district shapes are converted from raster to vector. What scale are the maps you use? An important characteristic of map scale is that the larger the scale, the more detail (i.e., corners) a shape contains, much like fractal geometry. Different map scales therefore yield different geometric features, which may influence human perception. Do you think this might be a potential problem for the compactness measurement? In addition, is the choice of map display size another factor that affects human perception of geometric shapes? A small illustration of the scale concern is below.
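Here is that illustration (a toy polygon and shapely, not the paper's data): simplifying the same shape at different tolerances, as a crude stand-in for coarser map scales, changes perimeter-based quantities much more than area-based ones:

```python
# A small sketch of the scale concern: coarser representations of the same
# toy "district" shrink the perimeter (and perimeter-based ratios) while
# leaving the area almost unchanged.
import numpy as np
from shapely.geometry import Polygon

rng = np.random.default_rng(3)
# A jagged, roughly circular toy district with noisy boundary detail.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
r = 1.0 + 0.05 * rng.normal(size=theta.size)
district = Polygon(np.column_stack([r * np.cos(theta), r * np.sin(theta)]))

for tol in (0.0, 0.01, 0.05):
    s = district.simplify(tol)                       # Douglas-Peucker simplification
    ratio = 4 * np.pi * s.area / s.length ** 2       # a Polsby-Popper-style ratio
    print(f"tolerance={tol:.2f}  perimeter={s.length:.2f}  area={s.area:.2f}  ratio={ratio:.2f}")
```
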

HaowenShang commented 5 years ago

Really interesting topic! Thanks for your presentation! In the paper on district compactness, you discuss an approach for "measuring an ill-defined concept that you know only when you see" and "defining the concept of interest separately from the measure used to estimate it". That is a really interesting and innovative approach. For ill-defined concepts, though, could you give us more examples of how this approach could be used to measure concepts other than compactness? And how could the approach be improved?

PAHADRIANUS commented 5 years ago

Thank you in advance for being with us and discussing the gerrymandering problem from a modern perspective. I am absolutely impressed by the whole process in which you and your colleagues devised mathematical models for representing regional partisan fairness and then formulated a standardized measuring system for district compactness using a combination of statistical toolkits, including basic least squares as well as decision trees and random forests. The system synthesizes the many characteristics of a voting district that bear on its compactness, using only its geometric features, into a single compactness index, making the comparison of different districts much more straightforward. The results of the measuring system are also strikingly close to the actual rankings political professionals make in the real world. Such results demonstrate your point that the measure "reflects the underlying viewpoint held about the concept of compactness by everyone from educated Americans to public officials, judges, and justices." Essentially, the model captures every key aspect these experts care about and processes this information far more quickly than humans can.

While the method itself is great, I am a bit puzzled about the degree to which it can be widely used to realistically benefit political institutions. You also mentioned that other, much simpler methods that likewise use only geometric information can produce similar, though not as accurate, predictions. So what are the chief advantages of this compactness measure that could convince political institutions to adopt it and put it into practice when redrawing districts?

dongchengecon commented 5 years ago

Thanks a lot for the presentation! This is quite an interesting paper on the application of machine learning approaches to the measurement of "compactness." The main reason you argue that this measurement outperforms the others rests on the criterion of "what we know when we see it." What if this rule of thumb were replaced by some mathematical definition? Could you show in some way that a measurement based on human surveys performs at least as well as the other measurements?

harryx113 commented 5 years ago

Thank you so much for coming, Dr. King! My question is not limited to the papers but is more about your experience as a pioneer in quantitative social science.

You have achieved tremendous things by using analytical tools on unprecedentedly large data sets to produce social good. On the one hand, your research in academic settings has added great value to the public sector: your "partisan symmetry" and "ecological inference" methods were accepted as standards for detecting gerrymandering, and your contributions to Mexico's universal health insurance program and the Social Security Trust Fund were also significant. On the other hand, your entrepreneurial experience has created impact in the private sector. How did you balance the academic and business lifestyles? How would you evaluate the role of data science technologies in bringing academia, government, and industry closer together?

tianyueniu commented 5 years ago

Thank you for these interesting readings! Being able to quantify "what we know when we see" is truly inspirational. It is exciting yet intimidating to see the power of algorithms and machine learning models in interpreting data in social science. I wish to ask two broad related questions: 1) How can computational social scientists promote and apply their findings outside of academia to empower the general public? 2) What are your perspectives on preventing data misuse (for example, given your earlier proposal for a system that would allow researchers to access and analyze Facebook data, is there a way to prevent Cambridge Analytica from happening again)?

SiyuanPengMike commented 5 years ago

Thanks a lot for your interesting and informative paper. In it, you use machine learning models to quantify a rather vague concept, "compactness." I'm curious about the universality of the methods developed in the paper. Could they be used in other areas that have similarly ill-defined concepts? In other words, can these methods be applied more broadly to benefit the quantification of other things?

nswxin commented 5 years ago

Thank you for sharing your research methods! I am very impressed by them and have learned a lot. Do you think it is possible to apply a similar statistical model to improve other aspects of people's lives? For example, could we create a similar model that predicts from a district's geometric features, together with factors like school evaluations, in order to enhance educational equality across the nation?

chun-hu commented 5 years ago

Very interesting paper! I'm impressed by your use of statistical methods and machine learning models to quantify a hard-to-define concept. Moreover, the concept is closely related to how we perceive things and objects in the world. I'm wondering if, in the future, such methods can be applied to psychology/neuroscience research to help explain how we perceive and identify abstract concepts. It would be an exciting collaboration!

di-Tong commented 5 years ago

Thank you for sharing your interesting projects! It is amazing to use computational methods to measure very complex and multidimensional concepts. Could you share with us more examples like this in the realm of political science? What other practices or opportunities are there to build measurements with novel methods, like the one you used to measure compactness?

huanye commented 5 years ago

One principle mentioned in Katz et al. (2019) is to separate the estimator from the quantity of interest being estimated; my understanding is that in this way the same quantity of interest can have several measures corresponding to several estimators. We can then check whether those estimators are statistically biased; measures corresponding to biased estimators are considered at least unreliable, so we keep only the unbiased ones. But what if we reach conflicting conclusions using more than one unbiased measure? Which unbiased measure, and which resulting conclusion, should we take? Or could it be that in some cases even a biased measure leads us to a conclusion closer to the truth?
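As a toy illustration of how two (nearly) unbiased estimators of the same quantity of interest can still behave differently, here is a small Monte Carlo sketch (my own example, not from the paper) comparing the sample mean and sample median for normal data:

```python
# A toy Monte Carlo check of estimator bias: for a symmetric population both
# the sample mean and the sample median target the same quantity of interest,
# yet they disagree sample by sample and differ in efficiency.
import numpy as np

rng = np.random.default_rng(4)
mu, n, reps = 1.0, 50, 20_000

means = np.empty(reps)
medians = np.empty(reps)
for i in range(reps):
    x = rng.normal(mu, 1.0, n)
    means[i] = x.mean()
    medians[i] = np.median(x)

print("bias of mean:       ", round(means.mean() - mu, 4))
print("bias of median:     ", round(medians.mean() - mu, 4))
print("variance of mean:   ", round(means.var(), 5))
print("variance of median: ", round(medians.var(), 5))   # larger: unbiased but less efficient
```
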

vinsonyz commented 5 years ago

Thank you so much for your presentation, Professor King! With the development of geographic information systems and the popularization of personal computers, we can gain a better understanding of geographic compactness. What future topics in political science could be studied with computational methods?

cytwill commented 5 years ago

Thank you for this presentation! The approach you use to conceptualize and measure vague ideas like "compactness" is attractive. I think that in computational social science there are many occasions where we need to convert such descriptive ideas into quantitative measurements. My questions concern two aspects:

  1. In this research, compactness is measured via human perception, but it seems that you give respondents only the geometric shapes to judge. Have you explored the possibility of other indicators, such as the area or span of these districts?
  2. The results of the model are reported as continuous values, so how do you decide whether a given district is compact or not? Is there a threshold or criterion to base this on?

sunying2018 commented 5 years ago

Thanks for your presentation. It is very impressive to see your new measure of the complicated, multidimensional concept of compactness. You mention in the paper that you calculated a set of geometric features, explained in Appendix A, such as SIDES, REOCK, and GROFMAN. Could you talk more about how you selected these features from among the many possible geometric features?
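For readers unfamiliar with these features, here is an illustrative sketch (using shapely >= 2.0 and toy shapes; these are not necessarily the paper's exact definitions) of two geometric features in the same spirit, a Reock-style ratio and a convex-hull ratio:

```python
# Illustrative geometric compactness features on toy shapes: a Reock-style
# score (area over the area of the minimum bounding circle) and a
# convex-hull ratio. Requires shapely >= 2.0 for minimum_bounding_circle.
from shapely import minimum_bounding_circle
from shapely.geometry import Point, Polygon

def reock(poly):
    """Polygon area divided by the area of its minimum bounding circle."""
    return poly.area / minimum_bounding_circle(poly).area

def hull_ratio(poly):
    """Polygon area divided by the area of its convex hull."""
    return poly.area / poly.convex_hull.area

compact = Point(0, 0).buffer(1.0)                                     # near-circular shape
snake = Polygon([(0, 0), (10, 0), (10, 1), (1, 1), (1, 5), (0, 5)])   # elongated L-shape

for name, shape in [("near-circle", compact), ("L-shape", snake)]:
    print(f"{name:12s} reock={reock(shape):.2f} hull_ratio={hull_ratio(shape):.2f}")
```
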

yutianlai commented 5 years ago

Thanks for coming, Dr. King! I really appreciate the accurate and effective computational methods you use for social science research. In computational social science today, some researchers adopt the fanciest model or computational method no matter what the research question is, while others stick to traditional social science methodology and don't trust computational methods at all. How do you confront this dilemma and make computational methods serve your research purpose to the greatest degree?

heathercchen commented 5 years ago

Thank you for your presentation! Your excellent papers using computational methods and big data really give me a lot of insight into the latest topics in political science. As you know, this year's Nobel Prize in Economics honors Duflo and Banerjee for their contributions to using RCTs in development economics. I am quite curious about your opinion on this trend in the social sciences: social scientists seem to rely more and more on experiments, data, and mathematical analysis, while focusing less on pure speculation and reasoning. What is your comment on that?