Vocabulary highlighted in this passage: recruitment; range from; penalise; mark down; malevolently; misogynistic; candidate; bias; reoffend; likelihood; recidivist; to do with; objective; circumstance; paramount; deprecate.
day41 What's wrong with AI? Try asking a human being
Amazon has apparently abandoned an AI system aimed at automating its recruitment process. The system gave job candidates scores ranging from one to five stars, a bit like shoppers rating products on the Amazon website.
The trouble was, the program tended to give five stars to men and one star to women. According to Reuters, it “penalised résumés that included the word ‘women’s’, as in ‘women’s chess club captain’” and marked down applicants who had attended women-only colleges.
It wasn’t that the program was malevolently misogynistic. Rather, like all AI programs, it had to be “trained” by being fed data about what constituted good results. Amazon, naturally, fed it with details of its own recruitment programme over the previous 10 years. Most applicants had been men, as had most recruits. What the program learned was that men, not women, were good candidates.
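To make that training mechanism concrete, here is a minimal Python sketch, with entirely invented data and no claim to resemble Amazon's actual system, of how a model fed only skewed historical hiring outcomes ends up scoring otherwise identical candidates differently:

```python
# Minimal sketch of learning from skewed historical hiring data.
# The records and numbers below are invented for illustration only.
from collections import Counter

# Hypothetical history: (gender, was_hired). Most applicants and most
# hires are men, mirroring the situation described in the article.
history = ([("male", True)] * 60 + [("male", False)] * 30 +
           [("female", True)] * 3 + [("female", False)] * 7)

# "Training": estimate the hire rate per gender from the records.
hired = Counter()
total = Counter()
for gender, was_hired in history:
    total[gender] += 1
    hired[gender] += was_hired  # True counts as 1, False as 0

def score(gender: str) -> float:
    """Score a candidate purely from the learned historical hire rate."""
    return hired[gender] / total[gender]

# Two otherwise identical candidates get very different scores, because
# the only signal the model ever saw was who had been hired before.
print(f"male candidate score:   {score('male'):.2f}")    # ~0.67
print(f"female candidate score: {score('female'):.2f}")  # ~0.30
```

The scoring function never looks at ability at all; the gap comes entirely from the historical outcomes it was fed, which is the trap the article describes.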
It’s not the first time AI programs have been shown to exhibit bias. Software used in the US justice system to assess a criminal defendant’s likelihood of reoffending is more likely to judge black defendants as potential recidivists. Facial recognition software is poor at recognising non-white faces. A Google photo app even labelled African Americans “gorillas”.
All this should teach us three things. First, the issue here is not to do with AI itself, but with social practices. The biases are in real life.
Second, the problem with AI arises when we think of machines as being objective. A machine is only as good as the humans programming it.
And third, while there are many circumstances in which machines are better, especially where speed is paramount, we have a sense of right and wrong and social means of challenging bias and injustice. We should never deprecate that.