responsible-ai-collaborative / aiid

The AI Incident Database seeks to identify, define, and catalog artificial intelligence incidents.
https://incidentdatabase.ai

Doubtful Incidents #449

Open smcgregor opened 2 years ago

smcgregor commented 2 years ago

Incident 29 concerns the "tank story," which may be apocryphal. https://incidentdatabase.ai/cite/29

Incident 21 seems to be primarily about performance issues in a competition and is not necessarily an incident. https://incidentdatabase.ai/cite/21

Incident 42 does not meet the current definition and criteria. https://incidentdatabase.ai/cite/42

Incident 62 was designed to be humorous and is in fact humorous. https://incidentdatabase.ai/cite/62

kepae commented 2 years ago

Incident 50 concerns the malicious exploitation of an Ethereum smart contract, yet neither the exploit nor the smart contract used AI-specific techniques. (By some strict definitions of "AI system" that recognize any autonomous decision-making capability as AI, a smart contract may be an AI system; however, if this applies to smart contracts, it would apply to many if-else programs generally.) https://incidentdatabase.ai/cite/50

smcgregor commented 2 years ago

Hi @kepae thanks for adding it to the discussion here. I have gone back and forth on 50 a few times since its ingestion.

Mechanistically, I don't see a reason to exclude if-else systems (e.g., decision forests are little more than giant learned if-else collections); however, the DAO was not a learned function and would not, from a technical perspective, be labeled an AI system. For me, this one turns on the perception, at least at the time, that the DAO was an AI system. In effect, we are erring towards ingesting when a reasonable but less-informed person could conclude that a system was AI.
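As a minimal sketch of that point (assuming scikit-learn is available; the iris dataset just stands in for any training data): a trained decision tree can be printed as the if-else rules it learned, and the relevant difference from the DAO is that these branches were fit to data rather than written by hand.

```python
# Minimal sketch, assuming scikit-learn: a learned decision tree is,
# mechanistically, a nested if-else structure. The difference from the DAO
# is that these branches were fit to data rather than written by hand.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(data.data, data.target)

# export_text renders the learned branches as human-readable if-else rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```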

I don't believe the current ingestion criteria, which we adopted after ingesting 50, would clearly accept 50, but I am inclined towards amending the criteria to include reports that are erroneously labeled as relating to AI systems since we can have multiple reports that clarify what is happening WRT the incident. Any thoughts there?

kepae commented 2 years ago

Thanks for the reply! I agree with your conclusion to err towards accepting ingestion, if only because the zeitgeist of the DAO rallied around the idea of its autonomous nature. I also think this line from the submission criteria page is relevant and insightful:

Algorithms that are not traditionally considered AI may be considered an AI system when a human transfers decision making authority to the system.

In the short term, as the database's incidents grow, I think that including reports about technology largely perceived as AI systems (even if erroneously) benefits the big-picture view of canonically tracking autonomous software systems that cause unexpected harm or consequences.

In a long-term view of a canonical AI incident database, though, I think this approach would result in ingesting a great many things. Of course, if the exploit or smart contract had involved learned functions or other AI-specific techniques, it would be ingested into the AIID as well.

Perhaps it would be beneficial to record such an incident initially as something like a "candidate incident." [1] That signals the truthiness of whether or not an AI system is involved in the unexpected behavior, and the incident can remain relevant (and the reports consultable!) until it is determined with certainty that no AI system is involved. The fact that it's not clear is itself interesting to record.

PS -- More generally, is there a way to tag the confidence of ingestion or to collect doubts on incidents in the database? Or is this issue the best/only spot? (in anticipation of a feature request to help review these borderline cases as time goes on)

[1] quasi-AI incident? potential AI incident? near-AI incident? [this is a can of worms that I'm sure has a few other threads...]

smcgregor commented 2 years ago

Regarding the PS -- More generally, is there a way to tag the confidence of ingestion or to collect doubts on incidents in the database? Or is this issue the best/only spot? (in anticipation of a feature request to help review these borderline cases as time goes on)

We just added the capacity to record "editor's notes", which we plan on presenting on the citation pages. Wikipedia does very well here with their "talk" pages that underlie the articles. Aviation also has a notion of incident (lower threshold for inclusion) vs accident (higher threshold with clearer mandatory reporting). We just need to adopt the right term here (discussion).

I put forward a starting list of candidate names: “issue report”, “context report”, “case report”, “vulnerability report”, “briefing report”, and “recall report.” Without biasing your response, do you have any favorites among that list (or names not on the list at all)?

LKchemposer commented 2 years ago

Incident 85 - likely does not fit the current incident criteria: GPT-3 was manipulated by human editors to exaggerate harm, which borders on sensationalism.

LKchemposer commented 2 years ago

Incident 77 - does not meet the incident criteria, as the harm (i.e., the failure to alert police) was not caused by or involved with the AI elements of the robot.

LKchemposer commented 2 years ago

Incident 159 - technically a vulnerability report discussing projected harms, regardless of practicality, rather than a report of harms done.

LKchemposer commented 2 years ago

Incident 298 - only projected harms reported

smcgregor commented 1 year ago

Incident 287 - this is a lab test

smcgregor commented 1 year ago

I put all these incidents into this report: https://incidentdatabase.ai/cite/21/#r2471

For our next editor's meeting to discuss...

smcgregor commented 1 year ago

@LKchemposer I didn't move 77 over because the company/police created an expectation that the system would support calling the police and the system did not actually support that function.

npit commented 8 months ago

Highlighting this, given that some incidents (e.g., 21) are crudely marked as issues in the title.

smcgregor commented 8 months ago

@datherton09 @kepae @Janet-ResponsibleAI Something worth talking about: a procedure for reconciling cases that turn out not to be incidents upon additional information or a change to the criteria.

npit commented 7 months ago

Incident 15 -- harm is the result of company policy manually excluding advertising-unfriendly topics, not an outcome of some AI-generated ranking.

kepae commented 7 months ago

Incident 15 -- harm is the result of company policy manually excluding advertising-unfriendly topics, not an outcome of some AI-generated ranking.

@npit I'm not sure I agree -- where is the evidence of a manual policy (I might be missing the particular report)?

One interesting element that makes this closer to "incident" is that there is a lack of transparency about whether an AI system is involved, given that some reports quote representatives claiming the issue was a "glitch" in automatic recommendation systems [0]. The strong perception that a system was involved indicates a failure in the AI ecosystem, if not a technical failure.

Even if this was a human mislabeling of the data that guides what a recommendation system is allowed to output, the scale of that system is also responsible for magnifying a particular mistake and its harm.

[0] - http://edition.cnn.com/2009/TECH/04/14/amazon.gay.lesbian.ranking/

npit commented 7 months ago

@npit I'm not sure I agree -- where is the evidence of a manual policy (I might be missing the particular report)?

Here's a relevant passage from the first report:

"In consideration of our entire customer base, we exclude 'adult' material from appearing in some searches and bestseller lists. Since these lists are generated using sales ranks, adult materials must also be excluded from that feature," explained "Ashlyn D" from Amazon's member services department.

npit commented 7 months ago

Incident 5 deals with robotic surgery, with no AI involved.

smcgregor commented 6 months ago

Robotics != AI, but how deep did you go on this one? I would expect there are some elements that (at least classically) would have been considered AI.

npit commented 6 months ago

I read the incident reports and googled a bit. AFAICS, the predominant understanding and usage of 'robotic surgery' involves telemanipulation with specialized tools for precision-focused tasks, or computer-mediated manipulation for remote operation.

Specifically searching for usage of AI in the field does give some results of AI utilization, some of which have investigative deployments in real hospitals. So I guess it's emergent?

smcgregor commented 6 months ago

This tracks with my understanding, which, for these collection incidents, means there may be one or more AI-involved cases within the study, but the number/count would be misleading even in that instance. I tilt my borderline more towards inclusion if it already has an incident number, but this one is straining that.

npit commented 5 months ago

Incident 86 describes grade calculations; no AI appears to be involved.

datherton09 commented 5 months ago

@npit Thanks for double-checking our work. I revisited the article and reviewed the incident ID. My position is that it does rise to the level of an AI incident. Here is my reasoning, along with quoted sections of the Irish Independent's report on the situation that have led me to my conclusions:

  1. Errors in the algorithms led to significant negative outcomes for students. The report states, "The Department says that a single line of code (out of 50,000) had two errors in it that negatively affected students’ predicted grades. First, the code substituted a student’s worst two subjects for their best two subjects. Then it wrongly added a subject into the equation - the results of the Junior Cycle’s Civic, Social and Political Education. This shouldn’t have been counted." (A hypothetical sketch of how a single line can make this kind of substitution appears at the end of this comment.)

  2. Insufficient testing of the algorithm led to the situation. The report states, "We know that the code wasn’t sufficiently tested, which is normally a crucial part of any software release. Department officials say that there simply wasn’t enough time to test everything thoroughly due to the urgency of the situation and the resourcing constraints."

  3. The officials in charge did not provide sufficient transparency. The report asks, "How do we know whether the coding error was a basic one or not?" Its answer to this question is, "We don’t. The code - and the implementation of the algorithms - aren’t available to check. In other words, they’re not ‘open source’ or reviewable in the way that, for example, the Irish Covid-19 Tracker smartphone app code is."

  4. The algorithmic errors were discovered after the fact. The report states, "But we do know that the Department of Education and Skills found the second error while performing checks related to the first one. That second error, Education Minister Norma Foley says, was contained in the same section of the code."

  5. The report acknowledges the use of these algorithms in sensitive situations and their controversy, as well as summing up some of the previously mentioned issues. Here are three quotes: "Coding experts say that the decision to use a code-supported calculated grading process in the first place is controversial"; "‘There is a big open problem with these types of prediction systems, whether it be grades, mortgage risk prediction, or anything else,’ said Andrew Anderson, a senior research fellow in the School of Computer Science and Statistics at Trinity College Dublin"; and "‘This is usually called the problem of inscrutability. The algorithm cannot tell you why any prediction should be right. In a normal appeal, the person doing the grading has to justify the grade they assigned and the student gets to see that sufficient care was taken in calculating that grade. With predicted grades, this transparency is sacrificed, because the algorithm can't justify the result. It's just a set of calculations.’"

While the report does not specifically mention AI, I would contend that this incident falls under the general purview of an AI incident, based on the apparent attempt to perform complex decision-making, the predictive nature of the application, the seeming complexity of the error detection, and the opacity of a system that required specialist expertise to audit after the errors were discovered.
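To make the single-line error in item 1 concrete, here is a purely hypothetical sketch (the Department's actual code is not public and is certainly more involved) of how sorting in the wrong direction substitutes a student's worst two subjects for their best two:

```python
# Purely hypothetical illustration -- not the Department's actual code.
# Sorting in the wrong direction picks a student's *worst* two subjects
# instead of their best two: the kind of single-line error described above.
scores = {"English": 88, "Maths": 95, "History": 61, "Geography": 72}

best_two = sorted(scores.values(), reverse=True)[:2]   # intended: [95, 88]
worst_two = sorted(scores.values())[:2]                # the bug:  [61, 72]

print("intended contribution:", sum(best_two))    # 183
print("buggy contribution:", sum(worst_two))      # 133
```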

smcgregor commented 5 months ago

Put another way, people often present these systems as being AI regardless of what their internals are. They become AI in the eyes of many when decision-making authority is assumed by the system; then someone alleges it is AI, and it gets included either due to the allegation or under the assumption that it is better to include boundary conditions than exclude them.

Taken to the extreme, a coin flip could then be considered an AI system because you are asking it whether you should take one action or another. Reductio ad absurdum aside, would you want to code such ~incidents within GMF, or exclude them? The GMF coding may be easier than for ML-centered systems.

As an aside, we may want to consider adopting the OECD update to the definition of AI. Whether this incident would be included depends on your read of the word "infer." I think I would place 86 outside the bounds of "inference" though, in which case we may want to revisit its inclusion.

npit commented 3 months ago

What do you think about 148? Its harm seems to emerge from false advertising.

smcgregor commented 3 months ago

@npit I think it passes the criteria, but the rationale is a bit stilted. The harm is not the advertising itself; it is the absence of accessible websites, created by website owners who relied on an inadequate compliance system.

npit commented 2 months ago

Shouting out incident 181: it features a self-driving car involved in a collision, but the AV was not at fault and reacted correctly (i.e., braked to avoid the collision).

smcgregor commented 2 months ago

I don't remember why we originally added this one, but all we have to go on with 181 is the company's description. While that description raises some questions for me, in the absence of anyone alleging that the company's report fails to express something important, the traffic incident should not have been added to the AI Incident DB, as it fails the "but for AI" test. One thing that could change my mind: the BMW acted illegally, but in such a manner that a human driver would have avoided the collision (e.g., by seeing the driver in the car signal something with their arms).

Perhaps a better question for 181 would be, "why this one?" See also.