Hi Professor Soll,
Thank you for coming to our workshop and sharing your work with us. I was quite surprised by the "convergent effect" of improvement over an average judge when adding more judges, in both the simulated and experimental environments.
Hi Professor Soll,
Thank you for your presentation. I was wondering how we should go about selecting the individuals for our select groups in real life. How can one maximise the information gained from the five or more experts in your robust small-group strategy?
Secondly, what do you think about the "wisdom of the crowds" when dealing with extremely unlikely tail-risk events? For example, predicting the 2008 housing market crash or the use of nuclear weapons during armed conflicts. In these cases, entire crowds can be very wrong about what's about to happen. What should the strategy be when dealing with these high-uncertainty environments?
Hi Professor Soll,
Thanks for your presentation. In your paper, you mention clusters of judges and propose avoiding the selection of multiple judges from the same cluster. I'm wondering how to balance the trade-off between capability and diversity. For example, if one cluster contains all the top-notch judges, should we sacrifice this cluster (choosing only a few judges from it) for the sake of diversity?
Hi Professor Soll,
Thanks for coming to our workshop! This is a very interesting topic and reminds me of the decision system Bridgewater uses. I have one simple question: for the results of this research to be applied in real life, how must an "expert" advertise themselves, or, in the paper's words, provide cues to enhance their credibility? Thanks!
Hi Professor, thanks for the presentation. My question is about the example of clusters of judges, which you use in your paper to suggest avoiding the selection of multiple judges from the same cluster. I wonder what your thoughts are on how to balance the trade-off between ability and variety. Thanks a lot!
Hi Professor Soll,
Thanks for sharing such an interesting paper with us. I noticed that you used the phrase "wisdom of crowds" to describe the converged result. However, we also often see group irrationality emerge from rational individuals: under what conditions is the converged result better than the average of individual performance? Thanks.
P.S. I'm really curious about the survey we took before the lecture!
Thank you for sharing this work with our group! My question is only topically related, but the idea of applying a wisdom-of-crowds type logic in machine learning came up in some of my recent work and I'm curious to get your opinion. So we know that the essence of machine learning is data quality - specifically, whether the training set your model "learns" from is adequate for generalization to unseen data. Often our training data come from human raters and are all aggregated into a monolithic dataset which is then used for training and validating a single learning model - do you think there may be some unique wisdom-of-crowds type benefits to separating out individual human rater judgements, training models to accurately capture the idiosyncrasies of each rater, and then combining those models' outputs in some principled manner? Or is it more the case that any such wisdom of crowds type benefit in this context is already reaped from the aggregation of those raters' ratings in the first place? This question is a little underspecified, but in general I'm curious about how the concept of crowd wisdom intersects with ML model training, which is often reliant on data best described as "crowdsourced" (but perhaps not "wise"?)
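A minimal sketch of the idea in this question, using synthetic data and scikit-learn's `Ridge` (none of this is from the paper; every name and number is an illustrative assumption). It compares training one model on the averaged rater labels against averaging the predictions of per-rater models:

```python
# Toy comparison, not from the paper: one model trained on aggregated
# (averaged) rater labels vs. an ensemble of per-rater models whose
# predictions are averaged. All data and names are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_items, n_raters, n_features = 200, 5, 10
w_true = rng.normal(size=n_features)

X = rng.normal(size=(n_items, n_features))
y_true = X @ w_true
# Each rater sees the truth through an idiosyncratic bias plus noise.
rater_labels = [y_true + rng.normal() + rng.normal(size=n_items)
                for _ in range(n_raters)]

X_test = rng.normal(size=(100, n_features))
y_test = X_test @ w_true

# Strategy A: a single model on the aggregated (averaged) labels.
mono = Ridge().fit(X, np.mean(rater_labels, axis=0))

# Strategy B: one model per rater, predictions combined afterwards.
per_rater = [Ridge().fit(X, labels) for labels in rater_labels]
ensemble_pred = np.mean([m.predict(X_test) for m in per_rater], axis=0)

rmse = lambda pred: np.sqrt(np.mean((pred - y_test) ** 2))
print("aggregated-label model RMSE:", rmse(mono.predict(X_test)))
print("per-rater ensemble RMSE:", rmse(ensemble_pred))
```

One caveat worth flagging: for a linear learner combined by a plain mean, the two strategies coincide exactly, because a least-squares fit is linear in its labels. Any extra wisdom-of-crowds benefit from separating raters would therefore have to come from nonlinear models or a combination rule smarter than the mean.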
Hi Professor Soll, thanks for bringing such interesting work! It's impressive to see that decision-makers can improve the accuracy of their judgments by consulting "crowds" of others for their respective opinions. My question is how we can further improve the wisdom of crowds, perhaps by selecting the "crowd": can we eliminate poorly performing individuals from the crowd and in this way further increase the accuracy of collective judgment?
Thank you Professor Soll for sharing your work. I am really impressed by your combination of a simulation and a larger real-world dataset in the first section demonstrating the robustness of the select-crowd strategy, and by how closely their results matched one another. My question is whether the "select crowd" would benefit from people with relatively low levels of expertise. As you mention, the select-crowd strategy is naturally attractive to people, and I believe that is exactly why there is a committee of economic advisors rather than a single mastermind in the White House, why collective decision-making prevails in most businesses over dictatorial entrepreneurs, and, on the larger scale of political history, why even the most enlightened and capable absolute monarchs gave way to representative democracy. Representative democracy clearly operates using a select-crowd strategy and has been a pretty good method of making the selections. But the rise of populist politicians in recent years suggests that perhaps the crowds are more inclined to trust their own wisdom. How should we reconcile the selection method with the needs of the populace?
Thank you for sharing your work with us! In your paper you divide environments into four categories: low/high dispersion of expertise and high/low bracketing. In this section you say that the whole-crowd strategy tends to work best when there is low dispersion and high bracketing. While reading, I was considering a potential third axis: difficulty in determining expertise. Have you considered rerunning your experiments while varying the ability to identify expertise? I assume that doing so would make the whole-crowd strategy the most effective, but I would be interested in seeing how the select-crowd method stacks up against it, given how successful that strategy tends to be.
Hi Professor Soll, thank you for sharing your work. In different industries, the pace of change and disruption may vary significantly (e.g. finance and tech vs. the utility and legal sectors). Can the optimal decision-making strategy (individual judgement vs. larger crowd) differ depending on the pace of change of an industry?
Hello Professor Soll,
It's really exciting to hear from such a distinguished speaker. This is a fairly recent finding, but attempts to predict the future are something everyone is invested in, from stock markets to prediction markets. Have the results of this study been implemented in applied fields? What might benefit most from using this method?
Thanks for sharing your work with us. This article attempts to strike a balance between the instability of individual expert decisions and the potentially mediocre choice of averaging the whole population. The paper suggests that five might be an appropriate number of judges, but in the real world, what principles should be considered when selecting this number?
Dear Professor Soll! Thank you so much! It's indeed very difficult to make decisions in the real world, and your research provides excellent guidance! I am wondering whether the strategy should change based on different real-world contexts, for example, estimating grocery prices vs. making financial decisions for firms.
Hi Professor Soll, thank you for sharing your interesting and engaging work. That's a really insightful idea. But I have a small doubt about whether expert opinions underperform the "wisdom of the crowd" in all decision contexts. In fields requiring very specific knowledge, for example predicting a stock's performance or assessing a patient's health, intuition suggests that people with better knowledge must outperform those with no related knowledge. In such cases, it seems that the wisdom of the crowd would not outperform experts. I therefore wonder whether there are prerequisites for the superiority of the "wisdom of the crowd".
Hi Professor Soll, thank you very much for sharing your work with us! It seems that on the internet, people doubt "experts" more and more. For example, on Sina Weibo, under posts titled "experts said…", users often comment "Experts, please stop giving any suggestions" or "I never trust the experts", yet people are often more likely to trust information provided by ordinary users. What do you think about the dilemma between making accurate predictions and being suspected of representing some other entity's interests? Another question is about people's belief in experts' predictions and the pursuit of democracy. On the one hand, people believe that select crowds can make wiser decisions. On the other hand, in a democratic society, ordinary people are granted the right to select their leaders or to decide a verdict for a crime, regardless of their knowledge background or ability to judge. In this sense, how can people be confident that they are making the right decisions? Thank you very much!
Hi Professor, thank you so much for sharing this fascinating work! My question is about the empirical testing methods. As the topic is deeply related to the real-world setting, I was wondering about the pros and cons of computational simulations compared with experiments. Thanks!
Thank you Professor Soll for sharing your expertise with us. I was wondering whether there is a push for the "wisdom of crowds" across different age groups. Are there topics on which children might be more knowledgeable, and has this been explored yet?
Thank you for this exciting work! In the research you mainly look at the selection of crowds in social-scientific and socio-economic decision-making settings (i.e., economic and financial decisions). I'm wondering how it would change for more purely scientific questions, such as physics or mathematics?
Thank you Professor Soll for presenting this highly interesting research to us. I was wondering whether weighting individual opinions, instead of selecting them, could work as well as the select-crowd method. In many econometric methods, we use weighting rather than a pure average. Among the top k% of judges there should be variance in accuracy, so, using past data, we could find the optimal weights and see how well the weighted opinions predict outcomes. Would this method be efficient, or would it be too costly to implement?
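A minimal sketch of the proposed comparison on synthetic data (not from the paper; every name and number is an illustrative assumption): weights fitted to judges' past forecasts by least squares versus a plain top-k average.

```python
# Sketch: fit per-judge weights on past forecasts by least squares,
# then compare with the top-k (select-crowd) average on new tasks.
# Synthetic data; every quantity here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
n_past, n_new, n_judges, k = 60, 40, 10, 5

truth_past = rng.normal(size=n_past)
truth_new = rng.normal(size=n_new)
bias = rng.normal(scale=0.5, size=n_judges)       # each judge's systematic error
noise_sd = rng.uniform(0.5, 2.0, size=n_judges)   # dispersion of judge skill

def forecasts(truth):
    return truth[:, None] + bias + rng.normal(size=(len(truth), n_judges)) * noise_sd

F_past, F_new = forecasts(truth_past), forecasts(truth_new)

# Weighted strategy: least-squares weights estimated from past data.
w, *_ = np.linalg.lstsq(F_past, truth_past, rcond=None)
weighted_pred = F_new @ w

# Select-crowd strategy: average the k judges with smallest past error.
rank = np.argsort(np.mean(np.abs(F_past - truth_past[:, None]), axis=0))
select_pred = F_new[:, rank[:k]].mean(axis=1)

rmse = lambda pred: np.sqrt(np.mean((pred - truth_new) ** 2))
print("weighted RMSE:", rmse(weighted_pred))
print("top-5 RMSE:  ", rmse(select_pred))
```

With short histories, the fitted weights tend to chase noise in past performance, which is one intuition for why the simple top-k average can be hard to beat.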
Thanks for sharing your research! I am curious about how your conclusions reconcile with rational ignorance - experts may be incentivised to acquire knowledge that the public would rationally ignore - do we always expect crowd wisdom to fail in such cases? Thanks!
Thank you for sharing your interesting research! It is interesting to see the high performance of select crowd in the range of four to eight judges, as evidenced via your multiple charts. While the quantitative results are convincing, I am curious if you have any insights as to why this might be the "optimal" range of judges intuitively. Thank you very much!
Hi Professor, thank you for coming, and for the great research topics. I can see parallels between your work and Dan Sperber's. My question is: what kinds of adaptations make us predisposed to prefer the single expert over the general crowd? In evolutionary terms this seems counterintuitive, since averaging tends to do better in many situations, yet it isn't obvious to us. Do some societies prefer group averaging over the single expert, or is preferring the expert a near-universal?
Hi Prof Soll, thank you for sharing this interesting research! I am curious how the performance of the "wisdom of select crowds" varies across different contexts and types of decisions. More specifically, when should the strategy be applied? Thank you
Hello Professor Soll, thank you so much for your time today. I wonder whether the results would vary in a low-expertise environment. Do you think it's necessary to develop experimental or observational studies for future work?
I'm very interested in hearing more about what you discuss in your Extensions section: the application of these ideas to categorical outcomes. For example, I'd imagine testing how well a select-crowd strategy would perform for evaluating whether or not a hospital patient ends up benefiting from a rarer/less-common or otherwise non-mandatory procedure, and then extending the comparison to also include the predictions of machine-learning models, as Mullainathan and his colleagues have advocated (and presented to us at this workshop). I also noticed you mentioned Metacritic and Rotten Tomatoes in your section on extending this work to subjective judgments; until reading about your broader typology of strategies, I would have said they are opposites, in that those two services each choose a very different way of aggregating critics' judgments (Metacritic by making a composite score and Rotten Tomatoes by dichotomizing). Now I'm imagining a Meta-Metacritic that is like Metacritic but only for the five best critics, which we could operationalize as the five critics historically most appealing from the point of view of a certain type of consumer. Because in that case the new service would have a few different sets of five, we'd need a way for consumers to choose which set of five best matches their own TV show or movie preferences... maybe collaborative filtering. :)
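For concreteness, here is a toy version of that hypothetical Meta-Metacritic (entirely invented data; a real service would use proper collaborative filtering rather than the naive distance used here): rank critics by historical agreement with one consumer, then average only the top five on a new title.

```python
# Toy "Meta-Metacritic": rank critics by historical agreement with one
# consumer, then average only the top five on a new title. All data are
# invented; a real service would use proper collaborative filtering.
import numpy as np

rng = np.random.default_rng(2)
n_titles, n_critics = 30, 12

critic_scores = rng.uniform(0, 100, size=(n_titles, n_critics))
user_scores = rng.uniform(0, 100, size=n_titles)   # the consumer's own past ratings

# Agreement = mean absolute difference from the user's past ratings.
disagreement = np.mean(np.abs(critic_scores - user_scores[:, None]), axis=0)
top5 = np.argsort(disagreement)[:5]

new_title = rng.uniform(0, 100, size=n_critics)    # critics score a new release
print("all-critic composite (Metacritic-style):", round(new_title.mean(), 1))
print("top-5 composite (Meta-Metacritic):      ", round(new_title[top5].mean(), 1))
```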
Hi Professor Soll,
Thank you for sharing your research with us. I'm curious whether competing perceptions of expertise, and of who counts as an expert, will affect the mechanisms by which the top five experts are chosen.
Thank you Prof. Soll for sharing your work with us!
One of the conditions for the wisdom of the crowd is that people make their estimates independently, without peer pressure or anchoring effects. How often do you think this condition holds in real-world settings? How much worse would the select crowd be if this condition were violated?
Hi Professor Soll, thank you for sharing your work with us. I am impressed by the research topic and design in the paper. I am wondering how you would eliminate bias in the experiment on the best-member strategy. What are the criteria for selecting the best member within a field?
Thank you for coming to the workshop, Professor Soll. I'm excited for the talk. I think this research is really interesting, and I appreciate the simple, memorable takeaway: "take the top five." I'm sure I wasn't the only one who had academic alarm bells going off when I read that, but I do think it's those kinds of messages that have the potential to make the most impact on real-world decision-making. I'm wondering whether there might be any extensions to situations where there is no way to average between different opinions. In these cases, the time and effort it takes to get from a group of opinions to a unified strategy seems likely to increase with the number of decision-makers. A single leader can be very efficient, while a large group can be very slow; intuitively, it seems like a middle-sized group would be best. Do you see any opportunities to study these more complex situations with simulations?
Dear Prof. Soll, thank you very much for sharing your work with us in this week's workshop! I have a question regarding the simulation part: in your simulation, you used two levels of dispersion and two levels of bracketing to generate the four diagrams. I wonder whether we can use finer-grained levels of dispersion and bracketing, and how the results would change with this modification. Are the results still robust?
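As a sketch of what such a finer sweep might look like, here is an assumed stand-in generative model (not the paper's): a shared bias term lowers bracketing, and the spread of per-judge noise levels sets the dispersion of expertise.

```python
# Not the paper's generative model: an assumed stand-in in which a shared
# bias term lowers bracketing and the spread of per-judge noise levels sets
# the dispersion of expertise. Adding values to the grids refines the sweep.
import numpy as np

rng = np.random.default_rng(3)
n_tasks, n_judges, k = 500, 15, 5

def strategy_rmse(dispersion, shared_sd):
    noise_sd = np.exp(rng.normal(scale=dispersion, size=n_judges))  # skill spread
    shared = rng.normal(scale=shared_sd, size=(n_tasks, 1))         # common error
    errors = shared + rng.normal(size=(n_tasks, n_judges)) * noise_sd
    rank = np.argsort(noise_sd)           # assumes a perfectly valid ability cue
    rmse = lambda e: round(float(np.sqrt(np.mean(e ** 2))), 3)
    return {"best": rmse(errors[:, rank[0]]),
            "top5": rmse(errors[:, rank[:k]].mean(axis=1)),
            "crowd": rmse(errors.mean(axis=1))}

for dispersion in (0.1, 0.5, 1.0):
    for shared_sd in (0.0, 0.5, 1.0):
        print(f"dispersion={dispersion}, shared={shared_sd}:",
              strategy_rmse(dispersion, shared_sd))
```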
Thank you for presenting, Prof. Soll. Does the finding that the opinions of a group of average persons are often as good as that of a single expert depend on the domain in question? What aspects of the domain most influence this relationship?
Hi Professor Soll! Thanks for coming to share your impressive work with us! In your paper, you mention averaging the opinions of the top judges, with the judges ranked based on a cue to ability. I just want to know more about how you measure that ability: what do you use besides the accuracy of the several recent judgments you mentioned? In addition, if you manually choose the different settings in which to test the strategy, will that choice lead to bias?
Hi Professor Soll! Thanks for sharing this great piece of work with us! It is exciting to read that a select-crowd strategy can be more robust and accurate than averaging the collective wisdom or relying on a single expert. Out of curiosity, I wonder whether this select-crowd approach could introduce bias if the selected individuals are not giving their opinions independently (I hypothesize the decision-making process is not synchronous and can thus be sequential and dependent), i.e., information cascades might form and things could go wrong in reality. Thanks!
Thank you for presenting your paper, Prof. Soll! I am curious about how to model the biases and limitations of decision-makers when estimating the effects of different strategies. Looking forward to your talk!
Thank you for presenting. I noticed you described the convergent outcome with the phrase "wisdom of crowds." My question, however, is whether the "chosen crowd" would benefit from those with a low degree of competence. Also, can the best decision-making technique (individual judgment vs. broader crowd) fluctuate based on an industry's rate of change?
Thanks for sharing! I wonder whether your work can be related to computational learning theory from computer science. For example, are there iterative algorithms that minimize some notion of regret based on inputs from the crowd?
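One standard bridge from online learning theory (not from the paper) is the Hedge / multiplicative-weights algorithm: it reweights judges after each round and provably incurs low regret relative to the best single judge in hindsight. A minimal sketch on synthetic forecasts:

```python
# The standard Hedge / multiplicative-weights algorithm from online
# learning (not from the paper): reweight judges after each round; total
# regret against the best single judge grows only like sqrt(T * log N).
import numpy as np

rng = np.random.default_rng(4)
n_rounds, n_experts = 500, 10
eta = np.sqrt(8 * np.log(n_experts) / n_rounds)   # textbook learning rate

truth = rng.normal(size=n_rounds)
skill = rng.uniform(0.3, 2.0, size=n_experts)     # per-judge noise levels
preds = truth[:, None] + rng.normal(size=(n_rounds, n_experts)) * skill

weights = np.ones(n_experts) / n_experts
total_loss = 0.0
for t in range(n_rounds):
    forecast = weights @ preds[t]                 # weighted crowd forecast
    total_loss += (forecast - truth[t]) ** 2
    losses = np.clip((preds[t] - truth[t]) ** 2, 0, 4) / 4  # bound into [0, 1]
    weights *= np.exp(-eta * losses)
    weights /= weights.sum()

best_single = np.min(np.sum((preds - truth[:, None]) ** 2, axis=0))
print("Hedge cumulative loss:", round(float(total_loss), 1))
print("best single judge:    ", round(float(best_single), 1))
```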
Dear Professor Soll, thank you so much for your presentation. Your new idea of measurement is really interesting. I was wondering what the potential applications of the select-crowd strategy would be. Also, what are the implications of people's preferences about information from crowds? Do you think educational, historical, or cultural backgrounds and factors influence those preferences? Thank you so much again
Hi Professor Soll, thanks for sharing your work with us! Do you think the phenomenon you describe in your paper can shed some light on representative democracy? Does your theory to some degree prove the efficiency of representative democracy as opposed to direct democracy? Thanks!
Thank you so much for sharing your work! I am wondering whether you think there is any bias or misleading element in your measurements and selection process, given that some of the settings are manually chosen and in the end you are only selecting the top five. Would you consider the group decision to be better than an individual's? Are there any other criteria you used besides the several judgments mentioned?
Looking forward to the talk!
Hello Professor, thank you for the presentation. Do you think different backgrounds may give different optimal crowd representations? Or, if the decision is in a different field, would that affect the choice of the crowd in a systematic way? Looking forward to the presentation.
Dear Professor Soll, thank you for sharing your ideas with us! I really enjoyed the idea of the wisdom of select crowds. I just have a quick question about further designing the mechanism for choosing the experts and the select crowds. The ideal situation I can come up with would be to add dimensions along which to classify a problem and then choose the most feasible strategy for each. I am wondering whether it is possible to build a classification mechanism for more specific types of questions, and what new attributes you have in mind, for now, to apply the strategy choice more generally?
Hello Professor Soll and thank you for sharing such interesting research. Your method of determining appropriate judges seems like an efficient way to ensure a high level of accuracy when performing manual coding tasks or other tedious measures in academic research. Do you envision that this method will have a broad level of applications in many research fields?
Hi Professor Soll, thank you for sharing your presentation with us. I wonder whether, when averaging the opinions of experts, there could be weights on each expert depending on seniority or previous outcomes. Would this be an accurate representation of behavior, or do the weights not matter when looking at the aggregate averaging process of the whole population? And if they do matter, would this put certain agents at a disadvantage?
Hi Professor Soll, thank you for sharing your work with us! You mentioned that in an environment with high dispersion, a more plausible strategy is to adopt the opinion of whoever can predict the future best. I wonder, if we lack historical data on prediction tasks, how should we make that choice? And how do you define an environment with high or low dispersion?
Dear Professor,
Thanks for coming to present at our seminar. My question is influenced by experiment 2 in your paper, where you look at how people chose strategies for listening to different economists about bond market forecasts. I'm interested in what this means for the economic theories in finance about how information is aggregated in the stock market. If different investors persistently follow different strategies for collating available information, how does this affect the information conveyed in the stock market, and how does this feed back into people's choices of whom they listen to?
Many thanks!
Hi Professor Soll, thank you for sharing your work with us. I really appreciate your simulation setup, which is neat, clear, and highly generalizable. I was wondering what your opinion is of the 'wisdom of crowds' under conditions of limited or noisy information, e.g., in a society where people only learn from their own networks, or where there is mis/disinformation. Thanks.
Hello Prof. Soll, thank you for sharing your interesting work! In terms of group wisdom, I'm wondering whether it can be related to an explanation from statistics, for example the Law of Large Numbers. Imagine that people's attitudes or opinions are sampled from a noisy distribution; with more observations, the 'average' attitude or opinion will be closer to the real or optimal state. Does this share an intuition with the emergence of group wisdom?
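A quick simulation of that intuition (synthetic data, assuming independent and unbiased judges): the mean error of the crowd average shrinks roughly like 1/sqrt(n), which is exactly the Law of Large Numbers / standard-error behavior.

```python
# Quick check of the Law of Large Numbers intuition: with independent,
# unbiased judges, the crowd average's error shrinks roughly like 1/sqrt(n).
# Synthetic data; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(5)
truth = 10.0
opinions = truth + rng.normal(scale=3.0, size=10_000)  # noisy, unbiased judges

for n in (1, 5, 25, 100, 1000):
    avg_err = np.mean([abs(rng.choice(opinions, n).mean() - truth)
                       for _ in range(200)])
    print(f"n={n:5d}  mean |error| of crowd average: {avg_err:.3f}")
```

The caveat is that this holds only for independent, unbiased errors; any shared bias puts a floor under the crowd's error, which is where bracketing and judge selection come in.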
Dr. Soll,
Thank you for sharing your work with us! I am curious: the argument you present for the use of select crowds seems to closely mimic our understanding of sampling practices (larger sample sizes may give better results than small ones, but selective sampling can increase power even further). Do you think this select-crowd strategy could be successfully employed in sampling theory to increase the statistical power of research studies?
Hi Prof. Soll. Thank you for presenting your work. Can you elaborate a little more on the implementation of the select-crowd strategy in the real world? I see that in one of the experiments, the select-crowd condition covers a pretty large range (2-10 out of 11 economists). When should we stick with the top-5 principle and when should we extend the range? Thanks.
Comment below with a well-developed question or comment about the reading for this week's workshop. These are individual questions and comments.
Please post your question by Wednesday 11:59 PM, and upvote at least three of your peers' comments on Thursday prior to the workshop. You need to use 'thumbs-up' for your reactions to count towards 'top comments,' but you can use other emojis on top of the thumbs up.