uchicago-computation-workshop / Spring2022

Repository for the Spring 2022 Computational Social Science Workshop

04/21: Serena Wang #4

ehuppert opened this issue 2 years ago

ehuppert commented 2 years ago

Comment below with a well-developed question or comment about the reading for this week's workshop. These are individual questions and comments.

Please post your question by Wednesday 11:59 PM, and upvote at least three of your peers' comments on Thursday prior to the workshop. You need to use 'thumbs-up' for your reactions to count towards 'top comments,' but you can use other emojis on top of the thumbs up.

fyzh-git commented 2 years ago

Thank you, Serena. The result of over 74% total time saved on reviews for Ph.D. admissions is astonishing, and it raises my concern about fairness on the applicants' side. Here is the paradox: when people talk about fair ML, what type of fairness, and which interest group, are they truly considering? The algorithm saves time while producing an admitted cohort comparable to the committee's own decisions, so it does look efficient from the admissions committee's perspective. But what about the applicants who are rejected? A new moral issue arises for this group: they already fall behind, and they may be overlooked even more once ML is adopted in the admissions procedure, since the effort they spent preparing their application materials is now much less likely to even be looked at. The weaker group ends up with even fewer opportunities, much like the poverty trap in which poor nations become poorer. Is this the fairness that the algorithm is truly meant to bring to higher-education admissions? And can adopting ML algorithms really advance education's aim of promoting equal opportunity?

One may argue that the algorithm hardly changes the admission outcomes. But a moral issue that is easily neglected is that saving the committee's "wasteful" time on rejected applicants also makes the process less respectful of that group: preparing application materials takes a long time and a lot of energy, and it is at least worth a few minutes of review and some feedback. This is a real problem in the current admissions procedure that automation could have helped to address. Instead, ML does not alleviate the problem; it simply transfers the wasted time from one group to another (from reviewers to applicants). It will not do much to promote fairness if ML developers do not, or refuse to, take this large negative externality into account. So the question, again, is: what is the fairness that the large fair-ML literature has been working toward and ultimately hopes to achieve? Personally, I think the automated system would be great if it were opened to the wider applicant population for self-assessment, but using it merely to save reviewers' time may not be so morally acceptable, since it provides no information at all to the very students the recruiting department hopes to serve with its educational resources. The time saved needs to create value, not just for the committee but also for the larger group of applicants, who should be able to learn something and improve. An accompanying algorithm that automatically generates feedback would therefore be really beneficial. The growing volume of literature recognizing the fairness issues inherent in classification and prediction is inspiring, but what true fairness means deserves a second thought: simply being unbiased, achieving high classification performance, or removing human intervention to reduce subjectivity is not all there is to being fair. The problem may be more obvious and easier to understand for researchers if we extend it to the review process of academic journals or conferences.

Imagine ML were adopted by some top journals or conferences to review submitted papers and proposals, which can take authors years to produce. How would the reviewing committee's saved time be regarded? Would the authors think they are being treated more fairly and that their work now receives more respect? And what role do ML techniques really play in giving truly terrific work a fair opportunity to be published and known to the public? Inherent issues are usually hard to see when we place ourselves in the position of the judge, but they often become much more obvious when we place ourselves in the position of the one being judged. Thanks again for bringing us this great work; I look forward to your talk.

AlexBWilliamson commented 2 years ago

Thank you for sharing this interesting topic with us! I have a few questions I would like to ask.

  1. Is there any particular area of application for Machine Learning that you believe is especially in need of limitations based on Deontological concepts of fairness? You gave quite a few examples in the second paper given to us, but I was wondering if any of those areas particularly stuck out to you.
  2. You mention that Deontological rules for fairness are compatible with concepts of fairness based on Consequentialism or Statistics. Do you consider Deontological rules to be more or less important as definitions of fairness than these other methods? Alternatively stated, do you see Deontological rules as a supplement to currently existing methods, or should those current methods be used as a supplement to Deontological rules?

Thank you again for taking the time to talk about your research with us.

chrismaurice0 commented 2 years ago

Hello Serena! I really enjoyed your paper on graduate admissions and think the idea of using machine learning methods in admissions is fascinating and promising. Further, I agree with the framing of the GRADE system as a means of informing admissions committees and making the process more efficient. I see the benefit for admissions committees to use machine learning methods to guide their admissions process, but at the same time, it could turn into a slippery slope where schools increasingly rely on algorithms to determine who gets into programs. Is that something you are worried about? What are some of the ethical concerns with this method of graduate school admissions, and how can schools and researchers address them?

JadeBenson commented 2 years ago

Thank you so much for sharing your fascinating research! The idea of applying Deontological rule-based ethics to machine learning is exciting and offers a complementary approach to the consequentialist approaches that I've primarily seen.

I was curious how, in our own research, we might identify the protected features that should have a monotonic relationship with the outcome?

As you mention in the discussion, this might not apply for addresses/zipcodes that have more complicated associations. Would you recommend creating continuous variables from these features like median income from zipcodes and performing a similar rule-based approach?

I was also wondering how to make these tools more widely available to data scientists working in a variety of contexts - do you think this enforced monotonicity could be added to scikit-learn or something similar to spread its use?
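
For what it's worth, scikit-learn's gradient-boosted trees already expose something in this spirit through the monotonic_cst argument of HistGradientBoostingClassifier. A minimal sketch, with features and data that are entirely made up for illustration:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

# Toy data standing in for tabular admissions-style features (purely hypothetical):
# column 0 = GPA, column 1 = years of research experience, column 2 = noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Require the prediction to be non-decreasing in the first two features (+1)
# and leave the third feature unconstrained (0).
clf = HistGradientBoostingClassifier(monotonic_cst=[1, 1, 0])
clf.fit(X, y)
print(clf.predict_proba(X[:5])[:, 1])  # probability-like scores
```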

Thank you so much and looking forward to hearing more!

Thiyaghessan commented 2 years ago

Hi Serena,

Thanks so much for coming all the way down! I had a couple of questions regarding your paper on monotonicity constraints.

How do we decide on accountability when models go wrong? For example, if we use ML models to evaluate job candidates and the model is for some reason penalising older applicants, who is held accountable? The individuals who built the model or the company using the model? Additionally, how can the untrained individuals who are going to be subject to these models' judgements determine if they were subject to a fair evaluation?

One solution would be to insist that such algorithms are made transparent but such a demand is unlikely to be well received. Alternatively, I was thinking that documentation for any model should be made open-access and include information pertaining to the motives, funding sources and sampling strategy for the training data. Such contextual clues are really important to understanding the interests of the actors involved in a model's creation.

ValAlvernUChic commented 2 years ago

Hi Serena!

Thank you for the paper!! I especially enjoyed the paper on deontological ethics. One question I had was about the process by which we decide how to translate these deontological principles into model decisions. The cases shown seem to work from a posteriori observations of the data - I was wondering what sort of a priori considerations we should have so that we wouldn't have to respond to models that have already been applied to people. Also, I imagine that this approach risks reducing what could be multidimensional problems to an intuitive understanding of how things should be, which could implicate communities whose activities are merely symptoms of wider social inequalities.

borlasekn commented 2 years ago

Thank you for sharing your work with us! I had a couple of questions regarding the paper "GRADE: Machine-Learning Support for Graduate Admissions". Do you think that applicants should be told, before applying, that their application will be screened by an AI model? Also, realistically, what do you think are the implications of similar models for undergraduate and master's program admissions? Thanks!

bowen-w-zheng commented 2 years ago

Hi Prof. Wang,

Thank you for presenting your work. Not sure if this question is too tangential, but in the discussion section of the monotonicity shape constraints paper, you mentioned that one of the advantages of this criterion is that it is easily explainable to laypeople. Do you think notions of algorithmic fairness should be accessible to laypeople in general? Thank you.

MkramerPsych commented 2 years ago

Dr Wang,

Thank you for sharing your work with us! As someone who just completed the PhD admissions process and constantly gets asked about how to best optimize applications for graduate admissions, I find your GRADE work highly salient and the results of your model largely in agreement with my own understanding of admissions decisions.

I am really curious to hear your thoughts on the potential generalizability of your model to other fields that purport to use a "holistic admissions process". In my own experience applying to neuroscience and psychology programs, many of the metrics used in GRADE (GPA, test scores, recommendation letters) are considered "first-level" criteria, in that they do not help differentiate between candidates with the bare minimum qualifications and those most likely to succeed. Could this model potentially be extended to integrate information from submitted research and candidate statements to assess the fit between a student's research and a particular program?

javad-e commented 2 years ago

Thank you for sharing your research projects at our workshop! There have been a few instances of failure and significant backlash against AI-assisted decision-making in recent years. For example, Amazon had to stop using its AI recruiting tool due to suboptimal decisions and biases. Another example is the failure and biases of the COMPAS risk assessment algorithm, which is being tested by the judiciary in the United States (https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing). Of course, as a student of computational social science, I am hopeful about the future of these tools, but what mistakes do you think the brilliant minds behind these technologies made? Was low-quality data the only problem? And what measures must be taken to avoid similar errors in the case of GRADE?

Qiuyu-Li commented 2 years ago

Thank you for bringing this fascinating topic to our attention! The result of a total time savings of over 74 percent on evaluations for Ph.D. admissions is astounding, and it raises my concerns about fairness on the part of the applicants. In addition, I'm concerned about whether your study might inspire a standardized playbook for admissions that could adversely affect future students' development. For example, applicants may invest heavily only in the aspects that increase their chances of admission.

jsoll1 commented 2 years ago

I have a couple of questions about the GRADE paper, mostly centered on the morality of its use. My primary concern is that the main source of errors is rating middling candidates as low quality. That's not an issue now, since human readers take priority. But as we use this technology, we should make sure we aren't consigning passable candidates to oblivion. If GRADE is successful, we can imagine other programs implementing similar technologies whose ratings are likely to be highly correlated with one another, which isn't great for passable candidates who are hurt by the algorithm. How can we avoid this kind of outcome?

a-bosko commented 2 years ago

Hi Serena, thank you for sharing your research with us! It was very interesting to read about the application of machine learning to graduate school admissions. I believe similar ideas have been applied within the finance industry and to job applications. As others have mentioned, some companies have had to rethink their use of AI for decision-making because of biased outcomes. Are there any limitations in this study that may result in biased outcomes? Also, can GRADE be generalized beyond UTCS program admissions? For example, other programs have different acceptance rates and consider different qualifications.

taizeyu commented 2 years ago

Hi Serena. Thank you for sharing the research with us. There is no absolute fairness, so I don't think we can assert that this notion of algorithmic fairness can be used everywhere. Are there standards or criteria to help us decide whether we really need this method? And are there other applications of this research beyond the admissions process?

pranathiiyer commented 2 years ago

Hi Serena, thanks for your paper! I had a couple of questions.

  1. For processes such as decision making, while we as researchers aim to make our models unbiased, what do organizations consider the more realistic goal for these ML algorithms: ones that replicate their own decision process (the model would be biased if they want to exclude certain cases), or ones that overlook that bias? I guess my broader question is that unbiased algorithms tend to be the desired approach, but how does this change when it comes to actually substituting them for human-like behaviour?
  2. Who do we hold accountable when ML algorithms fail? Is it the people who chose to adopt them, those who designed them, or just the machine? Especially for decision making and policy decisions. Thanks!

FranciscoRomaldoMendes commented 2 years ago

Hi Serena thank you for coming

  1. How opaque should an algorithm that ensures fairness be? If the algorithm isn't explainable then maybe it's impossible to verify if it's fair.
  2. If the entire admissions system is automated, is it possible that a different kind of bias may creep in, e.g., applicants crafting a "perfect" CV based on knowledge of said algorithm?

sabinahartnett commented 2 years ago

Hi Serena - thank you so much for sharing your work with us! I found the GRADE implementation especially interesting. As someone who worked in an admissions office, I'm especially interested in the impact of this model on the work of an admissions council. I was surprised that the statement of purpose was found to have no impact on an applicant's acceptance - I'd be curious for you to share some of the graduate council's responses to this result, and to others. Since PhDs are often built on individual relationships between the advising professor and the student, perhaps there was significant variability in the vectorized texts that were ultimately accepted? Secondly, I'm wondering how this model compensates for changes in faculty and in the goals of the institution (which would likely parallel changes in the field): as the department and overarching institution evolve, the model would likely continue to suggest candidates who match outdated research topics and goals. Is there room in this model to compensate for existing biases and for innovations in the admissions process?

LFShan commented 2 years ago

Hi Serena,

How can we ensure fairness in ML algorithms that usually work like a black box? If the results show that a model is biased, what are the potential ways to fix it? Or can those ML algorithms not be easily "fixed" to avoid bias?

Thank you

xxicheng commented 2 years ago

Thank you so much for sharing your fascinating work with us, and for provoking more thoughts and discussion around the ethics and fairness issues of ML algorithms. It is truly a rising concern with the recent flourishing and broadened applications of AI, ML, etc., in different scenarios. From reading the three pieces on adjusting for potential discrimination, the criteria for "fairness" seem quite different - for example, age discrimination, racism, gender bias, etc. I am wondering how your team decided on the definition of "fairness" before trying to eliminate the ethical issues in the algorithms? What if the criteria adopted are biased or partial in the first place? For example, with age discrimination related to job experience, could it be that this is more likely to affect gender-minority groups? I did not see your team checking on that in the paper. Is there a systematic way to decide whether and how an algorithm is biased?

isaduan commented 2 years ago

Thank you for sharing your work with us! I wonder whether you have thought about how to make AI/ML truthful, i.e., explicit about its biases rather than hiding them in its decision process?

william-wei-zhu commented 2 years ago

Thanks, Serena, for your talk. Ideally, a fair and equitable system provides additional resources and opportunities for members of the disadvantaged group. It shows them that if they work hard, they have a good chance of achieving a desirable outcome. Meanwhile, a problematic "fair" system penalizes the advantaged groups for getting ahead without providing additional support to the disadvantaged groups, resulting in resentment and social conflict (e.g., communism). How can "fair" algorithms ensure that they are more aligned with the former rather than the latter type of "fairness"?

Yaweili19 commented 2 years ago

Hi Serena,

Thank you for sharing this interesting piece of research; your results, if put into application, could really be amazing. However, like many others, I am also concerned about the ethics of using such black-box-like algorithms to determine people's careers and more. Could you please address this issue further, and say what computational methods could be used to improve or demonstrate the research ethics of such systems? Thanks!

PAHADRIANUS commented 2 years ago

Hi Serena, thank you for sharing the impressive fruit of your research. Questions:

  1. Where do you think would be the ideal scenario for applying your proposed fairness method? I can see that systems such as the GRADE admission system could apply your method to improve overall fairness, but given that such systems were trained on previous human-generated records, I wonder if such fairness fixes may alter results to a degree that contradicts human intentions - that is, in the GRADE case, overestimate some applicants' scores.
  2. You acknowledge the intrinsic issues in measuring and defining protected groups as well as intersections of groups. How would you filter group labels and discern those that are better suited for the method?

yiq029 commented 2 years ago

Hi Serena,

Thank you for sharing your work with us! Looking forward to hearing about your work on fairness in ML and its applications in practice.

yujing-syj commented 2 years ago

Dr. Serena,

Thanks for sharing your research with us! This topic gives me a different way of thinking about the responsibility of machine learning and other algorithms. When we talk about the fairness of ML, we are actually discussing the values of the designer. We should give a very clear definition of which kind of fairness we want before building the model. How can we make sure that researchers are willing to dive deeper to figure out the best model in terms of "fairness," given that people can modify their behavior in response to the algorithm? Will this reality decrease researchers' efforts to seek fairness?

TwoCentimetre commented 2 years ago

My question is related to the self-fulfilling prophecy of using the GRADE system during the admission process. Would there be some kind of psychological cue if the committee used such a GRADE system as a reference? I notice you describe the GRADE system as one that does "not determine who is admitted or rejected from the graduate program. Rather, its purpose is to inform the admissions committee and make the process of reviewing files more efficient. The heart of GRADE is a probabilistic classifier that predicts how likely the committee is to admit each applicant based on the information provided in his or her application file. For each new applicant, the system estimates this probability, expresses it as a numerical score similar to those used by human reviewers, and generates human-readable information explaining what factors most influenced its prediction." And "while every application is still looked at by a human reviewer, GRADE makes the review process much more efficient." My question is: when the committee uses such a system, would there be a kind of self-fulfilling prophecy in this process? Thanks.

hazelchc commented 2 years ago

Hi Serena,

Thank you so much for sharing two pieces of interesting work with us! I particularly enjoyed "GRADE: Machine-Learning Support for Graduate Admissions". I was amazed that the system could reduce the review time by 74 percent. I was just wondering what the next step after this research will be. Would you develop similar systems in other areas?

egemenpamukcu commented 2 years ago

Hi Serena, thank you for sharing your work. Do you think there is a case to be made for the adoption of simpler and more easily interpretable models where fairness could be a concern even when we can ensure more complex models can perform better with little to no embedded unfairness, just because the verification would be either very difficult or impossible? Also, how do you think data drift would impact your proposed mitigation techniques of fairness? Would we be able to monitor model fairness metrics like we monitor other performance criteria?

nijingwen commented 2 years ago

Hi Serena, thanks for coming to our workshop. Machine learning is a popular topic in many fields, but this is my first time learning about "improving the long-term societal impacts of machine learning." I also saw that you are on the Google Research team. I am interested in that experience and would like to know how one might join. In addition, can you say more about your projects or research at Google? Looking forward to hearing from you tomorrow.

koichionogi commented 2 years ago

Thank you so much for sharing your work. The work with GRADE is really interesting, and I would like to learn more about it. My question is whether this application could be extended to assess the fit of other components, such as research interests, ability, style, and personal characteristics, with particular programs. If so, would it be possible for students to know their fit beforehand, or for programs to understand the distribution of applicant types and how to improve their programs? Thank you again for sharing your research.

ZHE-ZHANG-0213 commented 2 years ago

Hello Professor Wang,

Thank you for showing your work. I have the same question as the student above: do you think the concept of algorithmic fairness should be understandable to the average layperson? Beyond that, do you think applicants should be informed that they are being screened by AI (although I'm sure many applicants already assume this)? Also, can such algorithms be applied to other academic application processes?

NikkiTing commented 2 years ago

Thank you for sharing your work! In your first paper, you mentioned the issue of the selection of protected groups. How do you think fair machine learning algorithms affect specific groups that are not captured in the chosen grouping criteria (i.e., “others”)? Also, given the other issues and considerations you noted such as perpetuating inequity, what steps do you think computational social scientists should take to ensure that fair machine learning models don’t do more harm than good?

erweinstein commented 2 years ago

In Robust Optimization for Fairness with Noisy Protected Groups, you and your coauthors note (as examples of why group information might be "noisy, missing, or unreliable") that people might rationally choose to give inaccurate survey responses due to fear (e.g., of discrimination), and in other situations they might be subject to social desirability bias, particularly when answering questions relating to what used to be called the "Big 3" (i.e., the three categories of topics to never bring up in social situations like large family gatherings): religion, politics, and sexuality. I think the need for quantitative social scientists to purposefully develop tools to address these types of situations is greater than ever, particularly in light of widespread concerns about how opinion polling may understate the strength of right-wing populist movements (e.g., Brexit polling; the purported "Shy Trump Voter" effect) and the persistence and even growth of government-sanctioned discrimination against LGBT people (sadly here in the USA too, not just in "illiberal" states like Russia and Saudi Arabia). I'm very glad to see that you and your colleagues have considered these cases, and I'm wondering what it would look like for future work in machine learning fairness to have a stronger focus on social desirability bias and other cases of widespread but not-trivially-easy-to-characterize obfuscation. I certainly know of some social scientists (not naming names) who think that we (the quantitative and computational methodologists) don't pay enough attention to social desirability bias, but I think we're already on the way to proving them wrong. :)

Lynx-jr commented 2 years ago

Hi Serena! Thanks so much for sharing your work; it seems like our students (myself included) enjoyed the GRADE paper more than the other algorithmic papers. My question is: how does this model handle students from exchange programs? And personally, I felt the model is not fair from the student's perspective, especially for students from schools that are not as prestigious. I'm simply glad that MACSS admissions are not using it :(

mikepackard415 commented 2 years ago

Hi Professor Wang, thanks very much for sharing your work with us. I'm looking forward to the presentation. You mention briefly in the Robust Optimization paper that there is an intrinsic difficulty in dealing with intersectionality. That struck me as a pretty important point to follow up on, given how important we tend to think that concept is. Do you have any additional thoughts on how to handle overlapping groupings? If you have enough data, is it feasible to accept the smaller groupings that necessarily come from crossing two or more categorizations (say, race and gender)?
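
To make the data-sufficiency side of the question concrete, here is a tiny hypothetical sketch (invented attributes and proportions, nothing from the paper) of how quickly intersectional group sizes shrink once two categorizations are crossed:

```python
import numpy as np
import pandas as pd

# Hypothetical data with two protected attributes, just to illustrate how
# intersectional group sizes compare with marginal group sizes.
rng = np.random.default_rng(42)
n = 1_000
df = pd.DataFrame({
    "race": rng.choice(["A", "B", "C", "D"], size=n, p=[0.6, 0.2, 0.15, 0.05]),
    "gender": rng.choice(["F", "M", "X"], size=n, p=[0.48, 0.48, 0.04]),
})

print(df["race"].value_counts())                            # marginal group sizes
print(df.groupby(["race", "gender"]).size().sort_values())  # intersectional sizes
# The smallest intersections may contain only a handful of rows, so any
# per-group fairness constraint on them is estimated from very little data.
```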

LuZhang0128 commented 2 years ago

Hi Serena. After reading the GRADE paper, I wonder whether there is any potential bias that could be introduced by this algorithm. Could it favor students from a more homogeneous background (e.g., a better undergraduate school or a better socioeconomic background)? We know that, especially in the social sciences, the diversity of a program is important because it brings in different perspectives and insights. Is there any tuning based on diversity concerns?

XTang685 commented 2 years ago

Hi Serena. Thank you so much for your work. Could you please elaborate more on your research at Google? Are there any next steps? Thank you!

yutaili commented 2 years ago

Hi Serena, thanks for sharing your work. The application of ML algorithms to graduate school admissions is quite interesting. My question is: to what extent do you think the admissions office should rely on the ML model? In other words, can the model evaluate a graduate school candidate based on the same criteria as a human? What about the soft skills that candidates have demonstrated but that are hard to quantify? Thanks.

JunoWuu commented 2 years ago

Hi, Prof. Wang!

Thank you for sharing! In your paper "Deontological Ethics by Monotonicity Shape Constraints", you mentioned that monotonicity constraints to reduce bias might not work well for variables like addresses, photos, or voice signals. However, if these variables are used in biased ways, the consequences can be really bad. Is there any way models that use them can also be trained to be less biased?

Coco-Jiachen-Yu commented 2 years ago

Hello Serena! Thank you so much for sharing with us your amazing research. I have some quick questions regarding your studies, mainly focused on the implications and applications of them:

Raychanan commented 2 years ago

Hi Serena, as you write, non-sensitive information can be highly correlated with sensitive information, causing indirect discrimination, which is also a problem for statistical fairness measures. Since I think this is a common issue, is there any way to mitigate the problem? Thanks!

YileC928 commented 2 years ago

Hi Serena, thanks for traveling all the way to join the workshop! As has been noted in the paper, the meaning of fairness is highly context-dependent. I am wondering about how to predefine features that need to be ‘constrained’, e.g., how to determine if an effect of a feature should be monotonic. Also, are there any metrics that deontologically fair models could be evaluated against?

zixu12 commented 2 years ago

Hi Serena, thanks for coming to our workshop. A fair system is indeed important, and I am very impressed by your work. Could you please tell us more about the criteria for determining whether a machine learning model is fair or not? Thank you!

XinSu6 commented 2 years ago

Thank you so much for sharing your work! I am also wondering whether it is possible for algorithmic fairness to be understandable to the average layperson. If so, what should the standards of fairness be? Since the definition of fairness varies a lot across disciplines, do you think there should be any universal standards built into the algorithms? And what other fields do you think AI could help screen in, whether in industry or academic settings?

Looking forward to your speech!

Hongkai040 commented 2 years ago

Hi Serena, thank you for sharing your work with us! I am wondering whether we can apply the two proposed algorithms (DRO and soft group assignments) to text/image classification, and whether they would have consistent performance for dense versus sparse vector representations?

siruizhou commented 2 years ago

Thank you for sharing the work, Dr. Wang! I wonder how the GRADE model would perform compared with a rule-based model in terms of time-saving, prediction accuracy, explainability, generalizability and fairness.

zhiyun0707 commented 2 years ago

Hi Serena, thanks for sharing your work with us! I find the paper on machine learning and its application to graduate admissions fascinating, because a good model can save admissions officers a lot of the time they would otherwise spend carefully reading through each applicant's profile. In the paper, the researchers used logistic regression for prediction. Since the study used data from 2013 and there are now many more classification models, if researchers wanted to conduct this research today, what would be the criteria for choosing which model to use?
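
One concrete way to frame that comparison, since the paper presents GRADE's output as a probability-like score, would be to cross-validate a few candidate classifiers on the same features and compare both ranking quality and calibration. A rough sketch on placeholder data (nothing here comes from the actual GRADE study):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for application features; not real admissions data.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "gradient_boosting": HistGradientBoostingClassifier(),
}

for name, model in candidates.items():
    # Ranking quality (AUC) and probability calibration (Brier score).
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    brier = -cross_val_score(model, X, y, cv=5, scoring="neg_brier_score").mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}, mean Brier score = {brier:.3f}")
```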

xin2006 commented 2 years ago

Hi, Serena! Thanks for sharing your work! In the paper "Robust optimization for fairness with noisy protected groups", you mentioned that a crucial research question is whether the fairness guarantee is reliable when group information is missing. One example is that in surveys, respondents may not be willing to provide personal information, such as their home location, in order to protect their own privacy. Considering the privacy concerns behind these missing values, I am a little curious how you view the issue: if the classifier's handling of the missing values is reliable, would that violate people's attempts at self-protection?

YLHan97 commented 2 years ago

Hi Professor Wang,

Thanks for sharing your research with us. I have a question: in the article, you mentioned that in the two case studies, the robust approaches achieve better true group fairness guarantees than the naive approach. Since I'm really interested in machine learning applied in the real world but not very familiar with your research area, could you please provide more real-world examples from your area?

MengChenC commented 2 years ago

Hi Serena,

Thank you for sharing your work; it is really exciting to see these methods that mitigate false penalization and provide more robust optimization. In the monotonicity shape constraints paper, you propose using the constraints to remove unfairness from machine learning models, and they work pretty well. But the paper does not mention the "constraints" of these constraints; hence, I am wondering whether there are any limitations or caveats to leveraging this method in modern ML approaches. Thank you.