KairoiAI / An_Incomplete_History_of_Research_Ethics

The text for https://www.tiki-toki.com/timeline/entry/1753034/A-History-of-Research-Ethics/
Creative Commons Attribution 4.0 International

Conceptualisation 🖊️ Legacy story: Explaining decisions made by "artificial intelligence" systems #88

Open Ismael-KG opened 2 years ago

Ismael-KG commented 2 years ago

Legacy Stories are stories that were conceptualised in September 2021, before the timeline was on Tiki-Toki, let alone GitHub. The story in its current form lives here. And you are very welcome to share any thoughts you have on how this story can be improved by commenting below!

Title

Legacy story: Explaining decisions made by "artificial intelligence" systems

Date or Period 📅

Hm, bizarrely, quite unsure; see https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-ai/.

Elevator Pitch

Guidance to keep AI systems from being opaque.

Justification

Unformed Thoughts

Although the GDPR is a central referent in data protection regulation, it has been found lacking on various fronts. One of these is its talk of a "right to explanation." In a world where algorithms are designed by and employed in numerous fields, the GDPR's "right to explanation" presupposes that decisions made "by algorithms" can be explained in the first place. This is no straightforward task, as algorithms – decision-making processes carried out through computational calculations – often operate as "black box" models; they are complex and not readily understandable. The debate, then, has been between those who find the right to explanation to be a perfectly valid request and those who find it to conflict with the technological potential of computational algorithms.

Through a legalistic analysis of the debate and of the legislation itself, we learn that the GDPR does provide sufficient guidance on operationalising the contentious right to explanation (see Casey et al., 2018). Nevertheless, various official and legal developments have sought to bridge the gap between the right and the technical ability to explain algorithmic decisions. For example, the UK's Information Commissioner's Office has developed – in partnership with the Alan Turing Institute – guidance for explaining such decisions (ICO & Alan Turing Institute, n.d.).