uchicago-computation-workshop / Fall2022

Repository for the Fall 2022 Computational Social Science Workshop

10/27/22: Hoda Heidari #1

Open GabeNicholson opened 1 year ago

GabeNicholson commented 1 year ago

Comment below with a well-developed question or comment about the reading for this week's workshop.

If you would really like to ask your question in person, please place two exclamation points before your question to signal that you really want to ask it.

Please post your question by Tuesday at 11:59 PM. We will also ask you all to upvote questions that you think were particularly good. There may be prizes for top question askers.

sdbaier commented 1 year ago

Dear Professor Heidari, thank you for sharing your work on hybrid human-ML decision-making systems. Two questions emerged when reading your paper.

(1) Following Proposition 1, does human-ML complementarity imply that the optimal joint decision strictly and always outperforms both individual policies?

Put differently, are there boundary conditions in the decision-making process where the synthesis of multiple agents (1) performs only equally well as each of the decision-makers alone, and (2) where the synthesis of the multiple agents leads to a worse decision? If such boundary conditions exist, is there a practical decision heuristic to test if the joint decision will outperform the individual ones?

(2) On a related note, how does the optimization-based framework handle divergent conclusions within a single agent – i.e., a dilemma along internal processing of inputs, and subsequent multiplicity of inconsistent outputs?

Maybe I am not fully grasping the complementarity analysis in Section 4. From my understanding, diverging conclusions across agents (i.e., disagreement in output) get resolved by weighting the different outputs according to their agent. For the case where an agent is torn between multiple alternatives, and when conclusions are conditional, how are decisions ultimately made? As a simplistic example, imagine a binary decision between output 0 and 1. If H decides 0 and M decides 1, the final output of the model depends on the relative weighting of H and M. If H is torn between the two outcomes (i.e., 0 if condition C is met, 1 if C is not met), and/or M is unable to produce a singular output, what would the assessment of outputs and the final decision look like?
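For concreteness, the disagreement scenario above can be sketched as a weighted vote over probabilistic outputs. This is a toy illustration only, not the paper's actual mechanism: the weights, the representation of a "torn" agent as a 0.5 probability, and the tie-breaking rule are all my own assumptions.

```python
# Toy sketch of weighted aggregation of two binary decision-makers.
# An agent "torn" between outcomes is represented as a probability
# rather than a hard 0/1 vote (my assumption, not the paper's notation).

def aggregate(p_human: float, p_machine: float,
              w_human: float, w_machine: float) -> int:
    """Return the joint binary decision from weighted probabilistic votes."""
    score = w_human * p_human + w_machine * p_machine
    total = w_human + w_machine
    return 1 if score / total >= 0.5 else 0

# Clear disagreement: H says 0, M says 1; M's larger weight wins.
print(aggregate(0.0, 1.0, w_human=0.4, w_machine=0.6))  # -> 1

# H is torn (0.5 under condition C); the joint decision then hinges on M.
print(aggregate(0.5, 1.0, w_human=0.4, w_machine=0.6))  # -> 1
```

In this sketch an indecisive agent simply dilutes its own vote, which is one possible answer to the question above, though the paper may intend something different.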

fiofiofiona commented 1 year ago

Dear Professor Heidari, thank you for sharing your research on the human-ML complementarity framework. I found it especially interesting that the contribution from human versus ML is largely affected by the consistency of human decisions and machine's access to features.

However, I am curious whether a consistently larger weight on the machine over the human would eventually exclude humans from certain decision-making tasks. For example, with the fast enhancement of image classification techniques, machines may one day outperform humans (who may make more inconsistent decisions) and thus gradually obtain larger weights, implying a decreased contribution of human decisions. Do you foresee this as a possible future direction of human-ML complementarity in some areas?

Also, with limited knowledge of the hybrid human-ML system literature, I wonder how the human policies were specified and determined, given that internal processing models may differ a lot across human decision-makers. How did prior models account for the variance in the inferences and heuristics that humans would use?

taizeyu commented 1 year ago

Dear Professor Heidari, I would like to ask whether this high-quality human-ML complementary decision-making can be applied in the real world, or whether it can only be achieved at a theoretical level. If it can, where can it be applied?

adamvvu commented 1 year ago

Professor Heidari,

I found the discussion in the paper of some of the differences between human cognition and ML systems particularly interesting. Clearly, each has its advantages and disadvantages.

While the framework focuses mainly on a joint policy obtained from a weighted average of the human and ML decisions, what are your thoughts on a joint policy formed from a composition of the two agents? i.e.

$$ \pi(\mathbf{x}) = \pi_M(s_M(\mathbf{x}, \pi_H(s_H(\mathbf{x})))) $$

In other words, a joint policy obtained by augmenting the ML model's feature space with the human decision maker's estimates. My thinking is that such a policy would allow us to exploit the advantages of the human decision-maker (e.g. expertise, heuristics, qualitative data) along with the mathematical precision of the ML decision-maker through optimization (e.g. consistency, universality).
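A rough sketch of this composition idea (sometimes called stacking) is below. The linear "models," the threshold, and all weights are invented purely for illustration; they are not from the paper.

```python
# Minimal sketch of a composed policy pi(x) = pi_M([x, pi_H(x)]):
# the ML model's feature vector is augmented with the human's estimate.
# Both policies and all weights below are hypothetical.

def pi_H(x):
    """Hypothetical human policy: an expert judgment on features x."""
    return 1.0 if x[0] + x[1] > 1.0 else 0.0

def pi_M(features, weights):
    """Hypothetical ML policy: a linear score over (augmented) features."""
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score >= 0.5 else 0

def composed_policy(x, weights):
    """Joint policy: feed the human's estimate to the ML model as an extra feature."""
    augmented = list(x) + [pi_H(x)]
    return pi_M(augmented, weights)

# The last weight controls how much the ML model leans on the human signal.
print(composed_policy([0.8, 0.4], weights=[0.2, 0.2, 0.5]))  # -> 1
```

One design consequence worth noting: unlike a weighted average, the composed policy lets the ML model learn *when* to trust the human signal, since the human estimate is just another feature whose weight is fit during training.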

borlasekn commented 1 year ago

Thank you for sharing your time and research with us, Prof. Heidari. In the paper, you considered instances where an independent third party determines a joint decision by combining the human and ML predictions. You noted some domains where this is feasible because those domains have human and ML decisions that are both credible. I was wondering whether there are domains in which this framework ought never to be applied? I'm sure in many domains the application would be deemed reasonable on a case-by-case basis, but I wasn't sure whether in some domains this sort of combination would be completely infeasible. Thanks!

Hongkai040 commented 1 year ago

Professor Heidari,

Thank you for sharing your work!

I have a question regarding the inconsistency between humans and ML systems. My takeaway from the paper is that the proposed framework uses this inconsistency to enhance the performance of human-AI ensembles. However, many AI systems aim to behave like humans. I am wondering: is it possible to use the same framework to guide the design of such systems if we change the objective function to consistency between humans and models? If not, what kinds of challenges would you expect?

secorey commented 1 year ago

Dear Dr. Heidari,

Thank you for coming to present this paper. Though ML models have been showing increased accuracy in decision making in more and more complex situations, I can imagine that there is a general distrust from the public in their abilities, especially when the stakes for decisions are high (e.g., medical diagnoses or bail decisions). Without a doubt, human-ML complementarity should help to alleviate these worries, but probably not take them away completely. In your experience, have you seen public distrust act as a hurdle to the development of this field?

Ry-Wu commented 1 year ago

Hi Dr. Heidari,

Thank you for sharing your research with us! In your research, you said you narrowed down the scope of inquiry to static environments. I'm wondering if this framework can be applied to more dynamic environments? If not, what else should be taken into consideration?

hsinkengling commented 1 year ago

Thank you Dr. Heidari for sharing your work with us.

In the paper, you mentioned that the application domains for this model can range from crowdsourced image classification to clinical radiology. While the framework can certainly be applied to these contexts, would it make a difference that one is based on mass collaboration while the other is based on small-data, expert deliberation?

erweinstein commented 1 year ago

Hi Professor Heidari,

You focus only on situations where the relevant decision is a prediction, and I should add that you and your co-authors are very clear about this and the limitations it implies. So what would this framework look like for non-prediction tasks, e.g., recommendation, or for even-less-straightforward types of decisions? Since this is a known limitation, are you or any of your colleagues working on that, and if so, can you point us to some related work? Thanks!

zihua-uc commented 1 year ago

Hi Prof. Heidari,

One difference between human and ML in predictive tasks that you noted: humans have rich experience amassed over the years across many different domains, while ML systems are often trained with a large number of observations for a specific task. Are there any developments in training ML systems across multiple domains (emulating the human experience)? If not, is there any value in doing so?

Yuxin-Ji commented 1 year ago

Dear Dr. Heidari, thank you for sharing this paper about the hybrid human-ML complementarity model. I found the work enlightening in its attempt to build a higher-level, unifying framework that could make current (and future) models more comparable and increase our understanding of the area overall (since there are existing models that follow the proposed framework, a certain logic had already emerged behind human-ML model design, just left unspoken).

I am curious about the implementation of this framework in an applicable model. While the taxonomy and aggregation mechanisms make sense and are ideal, it is not easy to obtain data for them. In particular, evaluating the strengths and weaknesses of human and ML decision-making in order to make use of the strengths of both is extremely smart but also hard to measure. I wonder whether there are standardized methods for measuring these strengths and weaknesses, deciding the cut-off points, and evaluating the effectiveness/accuracy of these measurements?

linhui1020 commented 1 year ago

Prof. Heidari, thanks for sharing your work! Will ML decision-making complement human decision-making in situations where the decision is associated with high risk and may cause severe outcomes?

yujing-syj commented 1 year ago

Hi Professor Heidari, thanks so much for sharing this amazing paper. My question is about the application and actual use cases of hybrid human-ML decision-making systems. In which industries, and for which kinds of tasks, do you think hybrid human-ML decision-making systems will be commonly used in the future? How can we promote their usage in real life?

bermanm commented 1 year ago

Prof. Heidari, I thought this was a very interesting paper. Last year we saw a talk from Prof. Jack Soll about the wisdom of expert crowds. I was wondering if you had thought about multi-human and multi-ML decision-making systems? For example, multi-human expert systems tend to outperform many novices and singular experts. Is there a way to merge the wisdom of the crowds with ML, and could there be some advantage to combining different ML systems with different assumptions together with multiple humans? It could get complicated fast, but I am curious.
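One simple way to picture the multi-human, multi-ML setting above: average each crowd separately, then blend the two crowd means with a single weight. A toy sketch, where the example votes, probabilities, and the 0.5 blending weight are all made up for illustration:

```python
# Toy sketch of a multi-human, multi-ML ensemble: average each crowd
# separately, then combine the two crowd means with one blending weight.
# All inputs and the default weight are hypothetical.

def crowd_mean(predictions):
    """Mean prediction of one crowd (humans or ML models)."""
    return sum(predictions) / len(predictions)

def hybrid_crowd_decision(human_preds, ml_preds, w_ml=0.5):
    """Blend the wisdom of a human crowd with an ensemble of ML models."""
    blended = (1 - w_ml) * crowd_mean(human_preds) + w_ml * crowd_mean(ml_preds)
    return 1 if blended >= 0.5 else 0

humans = [1, 0, 1, 1]      # four experts cast hard votes
models = [0.9, 0.6, 0.7]   # three ML models output probabilities
print(hybrid_crowd_decision(humans, models))  # -> 1
```

Even this naive version hints at the complication the question raises: the within-crowd averaging and the across-crowd weight are separate design choices, and each could itself be learned rather than fixed.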

iefis commented 1 year ago

Hi Dr. Heidari, thanks in advance for sharing! The taxonomy provides a very conducive framework for evaluating the sources of possible human-AI complementarity. I am wondering if you could provide other concrete examples of complementarity analysis that weigh multiple sources of complementarity, and how, in these cases, within-instance vs. across-instance complementarity may be explained. Specifically, I am curious about how we can rigorously analyse cases where complementarity is driven by the different internal processing of humans and AI, as that seems much more difficult to quantify than the consistency example provided in the article.

yhchou0904 commented 1 year ago

Thank you Professor Heidari for sharing your ideas with us. The goal of collaboration between humans and machines must be to improve overall social welfare. By defining the pros and cons of human and machine decision-making, we could form an expectation of how complementary the decision-making process is. I am wondering if there are some intuitive guidelines for people to construct a proper task or situation that maximizes the advantage of a hybrid human-ML system.

jinyz1220 commented 1 year ago

Hi Professor Heidari, I am so grateful to you for presenting such enlightening work, and I am looking forward to meeting you in person this Thursday! For the paper, I'm specifically interested in the section where you discussed the optimal aggregation mechanism, which is able to accommodate various sources of complementarity. The best-fitting models for decision-makers account for inconsistency in human decisions and target label bias for machines. My concern is that, although theoretically including these two "errors" in the models for human and machine decisions respectively generates better predictions, practically speaking it is very complicated to detect and quantify them, especially inconsistency in human decisions. The prerequisite for measuring inconsistency in human decisions is a sufficient number of prior decisions a human has made for a specific case. In reality, however, there might not be enough prior data to detect and determine the inconsistency. How would you address this potential limitation in the practicality of the models? Thank you!

zhiyun0707 commented 1 year ago

Hi Professor Heidari, thank you for sharing your work with us! The section on human vs. ML strengths and weaknesses in predictive decision-making is interesting, since it lays out the trade-offs between humans and machine learning techniques. Since in this paper you deliberately narrowed a broader goal to "combining predictive decisions in static environments," I wonder what predictive decisions would look like under non-static environments? Thank you!

AlexBWilliamson commented 1 year ago

Hello Dr. Heidari! Thank you so much for sharing your research with us. In your paper you mention a number of advantages of human decision making over that of machine learning, and vice versa. In your professional opinion, what is the most important advantage that human decision makers bring to the table? On the flip side, what is the single most important advantage that machine learning algorithms have over human decision making?

awaidyasin commented 1 year ago

Hi Prof., thanks for sharing your work. I was wondering if your formulation could take human (ideological) biases into account as well. If an individual human's policy process is biased, but the observed features are 'relevant,' we can get a joint policy that is less optimal than the weighted average of the two. Your paper does talk about different input processing and perceptions for humans and machines, but that only seems to point toward behavioral biases. I figured this might be addressed under consistency (but that concerns random disturbances rather than a more permanent shift in one's behavior).

xin2006 commented 1 year ago

Hi Prof. Heidari, thanks for sharing such interesting work! I am wondering how to understand the role of human decisions here. Machine learning is itself a combination of human and computer, in the sense that it is based on human behavior. So for the combination of humans and ML in decision-making that you mention in the paper, is it necessary for us to distinguish the additional human who complements the ML from the human judgment already embedded in the ML algorithms? And I am curious whether there is any overlap.

cgyhumble0612 commented 1 year ago

Hi Professor! Thank you so much for sharing such an instructive and interesting paper with us. I'm wondering what the practical fields are for applying this combined human-AI analysis system? I'm extremely interested in the possibility of using this model in financial fields such as quantitative investment.

sushanz commented 1 year ago

Dear Dr. Heidari, thank you for sharing your work with us! Your research sounds really interesting, and I wonder how you plan to deliver those contributions in practice. As we all know, machine learning now plays a significant role in most fields. It is becoming inevitable that one day people will need to find the break-even point to balance and maintain the relationship between human decisions and ML predictive decisions. Would you mind interpreting or expanding a bit more on how human and ML predictive decisions should be aggregated optimally in your research? Also, what would be the next goal in your research exploration?

ChongyuFang commented 1 year ago

Hi Prof. Heidari, thank you very much for presenting your work with us! Could you please elaborate on how the internal processing procedure is conducted?

hazelchc commented 1 year ago

Hi Professor Heidari, thank you for sharing your amazing work with us! It is definitely a very interesting and insightful paper. I'm just curious about the performance of hybrid human-ML decision-making in non-static environments, which need this type of technology the most. What are your thoughts? Thank you!

bowen-w-zheng commented 1 year ago

Hi Prof. Heidari, thank you so much for the talk. Do you think the result would generalize to inference-type problems where humans might run into computational limits and ML will not have sufficient inductive biases?

yjhuang99 commented 1 year ago

Hi Prof. Heidari, This is super interesting work and we are glad to have you at our workshop! I am wondering what are the possible applications of this unifying framework for combining human and ML - could you give some examples for us to know how it works?

BaotongZh commented 1 year ago

Hi Prof. Heidari. Thank you for bringing us such interesting work. Your definition of the aggregation mechanisms for complementarity is very insightful. I was just wondering: how can we combine a powerful algorithm with laypeople (i.e., noisy decision-makers)? And how does that noise affect the aggregation mechanisms and the performance of the final prediction?

javad-e commented 1 year ago

Thank you for presenting your work at our workshop! As noted in the paper, one of the assumptions of the study is that the machine makes the final decisions directly. I was wondering what your expectations are for cases where decisions are made via a more sophisticated approach, in which a second algorithm or a human intervenes in making or confirming the final decision?

beilrz commented 1 year ago

Hello, Professor Heidari. My question is about the potential applications of a human-ML hybrid system. What fields do you think could particularly leverage this hybrid system? Could it be used to conduct scientific research? Thanks.

jiehanL commented 1 year ago

Hello, Professor Heidari. My question is this: algorithmic bias is ubiquitous in AI/ML-related tasks -- for example, Amazon's algorithmic system for screening recruiting resumes gave low marks to resumes containing "women"-related terms, and the face-recognition tools of Microsoft and IBM are better at recognizing male, lighter-skinned faces than female, darker-skinned faces. Do you think combining the complementary strengths of humans in the process will reduce or strengthen these biases, given that human beings are biased themselves?

yuanninghuang commented 1 year ago

Hi Professor Heidari, thank you so much for sharing your research with us. My question is regarding the misalignment problem between machine learning and humans. How would this hinder or facilitate predictive decision-making?

edelahayeUChicago commented 1 year ago

Professor, thanks for coming to present your fascinating research to us! My question relates to how the strengths of human decision making vary with neurodiversity. While it is easy to homogenise human capacities, research on neurodiversity's contribution to better decision making has shown a significant impact. How could different brains' decision processes fit into your framework?

shaangao commented 1 year ago

Hi Prof. Heidari, thank you so much for sharing this interesting piece of work with us! The paper touched a bit on the topic of decision-making in context vs. in static environments, and constrained the scope of the current research to decision-making in static environments. I'm curious what your thoughts are about ways to generalize this framework to potentially more dynamic, situated cognition. As we all know, one of the advantages of human decision-making that ML still struggles with is that humans make decisions in context. Even the most fundamental parts of cognition -- such as visual cognition -- have been found to be affected by contextual factors such as culture and affect. From this perspective, does human decision-making in static environments really exist, and consequently, is a comparison of human vs. ML decision-making assuming static environments really possible/plausible?

LynetteDang commented 1 year ago

Dear Professor Heidari, thank you so much for sharing your work with us. I am wondering how hybrid human-ML decision-making systems should be implemented based on the implications of this piece, and how much human factors should be involved. How would you handle the biases?

ZenthiaSong commented 1 year ago

Dear Professor Heidari, thank you for your valuable time to share your recent research with us. Can you tell us about the biggest challenge you encountered while conducting this research? I am also curious about why you confined your experiment to a static environment.

yuzhouw313 commented 1 year ago

Hello Professor Heidari, thank you so much for presenting your research; I hope to learn more from you this Thursday. It is enlightening to consider the hybrid functionality of machine learning and human intelligence, as I was always under the impression that the former outperforms and will replace the latter. However, from your research I discovered that not only do humans make inconsistent decisions, machines can also suffer from flaws such as the target label bias you studied. I am curious whether you could elaborate on the advantages and disadvantages of humans and machine learning, and also on how to integrate them to maximize the accuracy and efficiency of the prediction?

C-y22 commented 1 year ago

Hi Prof. Heidari, thank you for sharing your work with us. A big difference in predictive ability between humans and machine learning is that human expertise has accumulated over a long time, whereas ML models are trained on prior data. Information technology has provided ample instances for ML training; how will this influence current ML research trends, and would it be possible for these differences to be eliminated?

yiang-li commented 1 year ago

Hi Professor Heidari, thanks for presenting the research; I am very much looking forward to hearing your talk this Thursday. I wondered what the ethical considerations would be when comparing ML training against human intelligence?

koichionogi commented 1 year ago

Dear Prof. Heidari, thank you so much for your research. I would like to ask what a potential research approach would be for capturing another human-ML collaboration paradigm, such as one that focuses on "sequential decisions under resource constraints."

nswxin commented 1 year ago

Dear Professor Heidari, thank you for coming. I am wondering whether there are specific fields of research that fit the aggregation of human and ML predictive decisions.

JunoWuu commented 1 year ago

Hi Dr. Heidari,

Thank you for coming. It is really interesting and helpful for us to see what exactly machines can do better than humans, and vice versa. You mentioned that the framework proposed in your work should not be applied to cases where humans make the final decisions. However, in the end, humans are prone to mistakes that machines can avoid. Do you think it is possible to replace such human final decisions with the human-ML decision-making model? If it is possible, what is still lacking in the model that we need to improve?

kuitaiw commented 1 year ago

Hi Prof. Heidari, thank you for your paper. I am very interested in the decision-making of human-ML complementarity. I noticed that your research concerns predictive decisions in static environments, but things may change in dynamic environments. Could you discuss what predictive decisions would look like in a dynamic environment?

zhiqianc commented 1 year ago

Hi Professor Heidari, thanks for coming and sharing your work. What do you think about the future of this human-ML system? What kind of role do humans play in this? Will ML completely take the place of humans when making decisions someday in the future?

JoeHelbing commented 1 year ago

Hello Professor Heidari,

I'll be honest: I don't feel entirely confident in my understanding of your paper, but the concepts you spoke about were certainly interesting. My real question is one of clarification: in the combined human and machine learning model, there were scenarios where the combination was better and some where it was worse? I understood that where both machine learning and human decision-making were compromised by inconsistency or bias the combination was better, but does that still hold assuming the ML was competently trained?

Jack-Lu-attic commented 1 year ago

Hello Professor Heidari, I believe combining ML and field knowledge can lead to fascinating outcomes, but my question is: if this is the trend of future ML development, would it undermine the generality of ML? That is, efforts in ML would shift dramatically toward adopting field knowledge instead of developing more general algorithms that can be used in many circumstances. Thank you.

GabeNicholson commented 1 year ago

Are there differences between RL models and supervised ML models when analyzing the strengths of human-computer interaction? Because RL models learn from trial and error as well as from examples, they seem to me to be a much better complement to interacting humans than plain supervised learning models.

tangn121 commented 1 year ago

Thank you for sharing your work with us Prof. Heidari! In this paper, you built the framework based on a specific paradigm of hybrid human-ML decision-making, which is combining predictive decisions in static environments. I wonder if you are planning to work on other paradigms. If so, what is your future plan?

ddlxdd commented 1 year ago

Thank you for sharing your amazing research with us. I have a question about the relationship between prediction and decision-making. I know that studying the case in which prediction does not lead directly to decision-making is a future direction, but I am a little confused about whether prediction can be directly transferable to decision in the first place, since prediction is an intermediate process while our brain is trying to understand a situation, and the decision comes from analyzing and modifying the prediction our brain has made.