
[PD] Using critical thinking to remain inclusive as a developer #81

Open Rachealgit opened 1 month ago

Rachealgit commented 1 month ago

@sairaheta1 cloned issue Migracode-Barcelona/Module-HTML-CSS#21 on 2024-07-17:

Coursework content

Apply Magic Sauce is an alternative, machine-generated interpretation of a person's personality, based on an analysis of how they write sentences in emails or the types of content they like on social media. This is great when it gets personality traits 99% accurate. But what if it goes horribly wrong and ruins lives? Choose one thing that Apply Magic Sauce can do that you don't like (or like the least). Describe what it is and explain why you don't like it in 250 words. Below are three scenarios for you to contemplate:

  1. What if the software wrongfully judges and jeopardises a person’s career without their knowledge?

  2. Are humans unable to think for themselves, and must they rely on machines to form a non-holistic opinion?

  3. Are civilians expected to be on guard at all times, careful of what they say and how they comment on a post or tweet, in a censored cyber world where everything is traceable and nothing is ever really erased?

Estimated time in hours

1.5

What is the purpose of this assignment?

This assignment encourages you to critically think about machine interpretation and its potential implications.

How to submit

Share the link to your Google Doc on your ticket on the coursework board.

Rachealgit commented 1 month ago

One concern I have with Apply Magic Sauce is the risk of wrongfully judging and jeopardizing a person’s career without their knowledge. This software analyzes someone's online behavior—such as the way they write an email or like content on social media—and uses that data to predict personality traits. While impressive in theory, it becomes deeply troubling if these predictions are inaccurate or biased. Imagine an individual being passed over for a promotion or even fired because their online activity was misinterpreted by the software as "irresponsible" or "unreliable." What’s worse is that the person may never even know that their personality traits were analyzed in this way, nor be given the chance to explain or correct it.

The problem lies in how opaque these systems can be. Unlike human evaluators, who might consider context, reasoning, and personal interviews, the software reduces complex human behaviors into data points, potentially leading to dehumanizing outcomes. A miscalculation in such an algorithm could lead to unfair career setbacks, especially if companies start relying on this type of technology to screen applicants or assess employees. It is particularly concerning because the person affected might have no recourse to challenge the result, creating an unjust system where the stakes are high but the accountability is low. Trusting machines to make such critical decisions with minimal transparency can easily become a tool for harm, rather than innovation.