daviddao / awful-ai

😈Awful AI is a curated list to track current scary usages of AI - hoping to raise awareness
https://twitter.com/dwddao

Formal criteria for being awful #17

Open daviddao opened 5 years ago

daviddao commented 5 years ago

Can we define a formal set of community-driven criteria for what is considered awful enough to make it onto the list? As discussed in #8, use cases on this list can be reinterpreted as stemming from missing domain knowledge or as unintentional.

Right now this is my rough guiding thought process in curating the list:

Discrimination / Social Credit Systems

Surveillance / Privacy

Influencing, disinformation, and fakes

We should use this issue to discuss and define better and more structured criteria (if possible).

Shashi456 commented 5 years ago

What about research that has a higher chance of being used maliciously? Deepfakes, for example, were an application of generative models.

I think one use case of AI that can't be overlooked is the use of deep learning in situations where bias comes from datasets created by humans. For example, there was an article about an AI being used to inform court rulings, but the issue was that there has been a history of disproportionately high arrest rates in the African-American community. So how do you ensure that an AI is fair when the practices behind its data aren't? Does this make sense?
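A minimal sketch of that failure mode, on purely synthetic data (the groups, thresholds, and numbers below are illustrative assumptions, not real statistics): if historical labels are biased against one group, a model trained on them reproduces the bias even when the underlying behaviour is identical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B (hypothetical)
behaviour = rng.normal(size=n)     # identical distribution for both groups

# Biased labels: group B is "arrested" at a lower behaviour threshold,
# mimicking a history of higher arrest rates for the same conduct.
threshold = np.where(group == 1, 0.5, 1.5)
arrested = (behaviour > threshold).astype(int)

features = np.column_stack([group, behaviour])
model = LogisticRegression().fit(features, arrested)
pred = model.predict(features)

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: predicted 'high risk' rate = {pred[group == g].mean():.2%}")
# Group B is flagged far more often despite identical behaviour:
# the model faithfully reproduces the bias baked into its labels.
```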

nukeop commented 5 years ago

Since "awful" has many meanings, I'd use three categories: harmful, stupid, and subverted.

Harmful would list only those that were created purposefully as tools for harm and exploitation, in particular those that attack privacy. It should also be required that there are clear existing examples of a prospective list entry being used for evil, not just speculation about potential use, because we can speculate about pretty much anything being used that way. Any tool can be used for good or evil, but that doesn't mean it's inherently evil. A neural network deciding that race plays an important role in predicting recidivism or the lifetime chance of being arrested isn't racist or unfair; it reveals an uncomfortable tendency that's nonetheless objectively true. Another case is when you have a bad dataset, but it's not the classifier's fault that the results are also bad; that's just garbage in, garbage out.

Stupid would include examples of poor security, glaring errors, hilariously backfiring results, and other similar incidents.

Subverted would include examples of AI that was initially created for a useful purpose but has since been used for evil, such as fitness trackers whose data is used by health insurance companies, or autonomous driving technologies being used to decrease car ownership and destroy jobs.
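A hypothetical sketch of how these three categories could be attached to list entries as structured metadata; the field names and the sample entry are illustrative, not part of the actual awful-ai list format. The evidence_url field encodes the rule above: an entry needs a documented case, not mere speculation.

```python
from dataclasses import dataclass
from enum import Enum

class Awfulness(Enum):
    HARMFUL = "created purposefully as a tool for harm or exploitation"
    STUPID = "poor security, glaring errors, backfiring results"
    SUBVERTED = "built for a useful purpose, later used for evil"

@dataclass
class Entry:
    name: str
    category: Awfulness
    evidence_url: str  # a documented case of misuse, not speculation

entry = Entry(
    name="fitness tracker data sold to insurers",
    category=Awfulness.SUBVERTED,
    evidence_url="https://example.com/documented-case",
)
print(entry.category.name, "-", entry.category.value)
```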

Shamar commented 5 years ago

If we look for a formal system to describe the safety of an artificial intelligence, we should start from Asimov's three laws of robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Any system that violates these laws can easily be defined as awful.
Any system that does not provide any guarantee about these laws can be defined as awful as well.

While stated in terms of robotics, these laws clearly express some core principles.

As outlined in #14, we can map two aspects of an AI system onto these principles: the "why" and the "how".

The "why", the human intentions behind their creation, is too complex to know and evaluate, it cannot be stated or observed scientifically, and ultimately not relevant to their awfulness.

For example, any autonomous robot that kills a human is awful beyond doubt. This largely restricts the military applications of AI (if you care... many consider war inherently awful), but it also shows how awful all robotic systems are that do not try to guarantee these principles.

As for the "how", we have a few preconditions for evaluating the awfulness of an AI.

Any problem with these preconditions makes the awfulness of the system unknown, and that is awful by itself.

Once all these conditions are met, we can simply analyze the quality and effectiveness of the measures in place to ensure the system does not violate these principles.

For example:
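Purely as a hypothetical sketch of this decision procedure, assuming the preconditions and law checks are reduced to boolean inputs (placeholders for whatever concrete checks the community settles on):

```python
from enum import Enum

class Verdict(Enum):
    AWFUL = "violates the laws, or provides no guarantee about them"
    UNKNOWN = "preconditions unmet: awfulness unknown, which is awful in itself"
    NOT_AWFUL = "guarantees the laws, with effective measures in place"

def evaluate(preconditions_met: bool,
             violates_laws: bool,
             guarantees_laws: bool) -> Verdict:
    if not preconditions_met:   # e.g. the system cannot be inspected at all
        return Verdict.UNKNOWN
    if violates_laws or not guarantees_laws:
        return Verdict.AWFUL
    return Verdict.NOT_AWFUL

# An autonomous weapon: inspectable, but designed to harm humans.
print(evaluate(preconditions_met=True, violates_laws=True, guarantees_laws=False))
```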

daviddao commented 5 years ago

Cool idea to base rules on Asimov! What do you think about these Guidelines proposed by the Fairness in ML community? Would love to draft some principles for Awful AI that are not too far apart from existing literature.

Shamar commented 5 years ago

What do you think about these Guidelines proposed by the Fairness in ML community?

To be honest, they look pretty vague and fuzzy, to the point of being useless in practice.
They seem more concerned with limiting the impact of ethical considerations on business, reducing the risk of external regulation by pretending to be good at self-regulation.

Would love to draft some principles for Awful AI that are not too far apart from existing literature

This is not how a research field can progress.
The current literature is full of anthropomorphic misconceptions; it's prone to business interests and subject to the never-ending funding needs of researchers.

A small step forward compared to this document, and to the vague principles proposed by Google, was the recent proposal of Universal Guidelines to inform and improve the design and use of AI.

I signed that proposal myself, but I still think it doesn't address two fundamental issues well enough:

  1. the fundamental need for human responsibility/accountability for robots/AIs
  2. the impact of the paradox of automation on that fundamental need

We cannot let anybody get away with crimes that would be punished if committed by humans simply by delegating them to an autonomous proxy. Otherwise the rich will instantly be above the law.

On the other hand, a system that looks right 98% of the time might extort enormous trust from a human operator. Beyond the issue of preserving the ability to do the automated work manually, in case of problems or just to verify that the system is working properly, there is the issue of preserving the operator's critical freedom.

In a badly designed system, the operator will soon start to trust the automation; he will become a useless scapegoat for all issues that would otherwise be attributed to the manufacturer of the system.

In terms of the Universal Guidelines proposed in Brussels, the paradox of automation lets bad players build systems that violate these principles behind a boring UI. And this is very awful, if you think about it.
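A back-of-the-envelope sketch of that trust dynamic; the 98% figure comes from the paragraph above, while the operator vigilance rates are assumptions:

```python
decisions = 10_000
error_rate = 0.02                   # the system "looks right 98% of times"

for vigilance in (1.0, 0.5, 0.05):  # fraction of outputs the operator really checks
    uncaught = decisions * error_rate * (1 - vigilance)
    print(f"operator checks {vigilance:.0%} of outputs -> "
          f"~{uncaught:.0f} of {decisions} decisions are wrong and unreviewed")
# As trust grows, vigilance drops, and the operator becomes a scapegoat
# for errors that trace back to the system's manufacturer.
```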

Anyway, as you can see, the debate is still ongoing, and we cannot make a useful contribution by carefully crafting something that everybody would like. Too many aspects of the matter still need deeper reflection.