kayali436 / Coursework-Planner-CYF

Your coursework planner

[TECH ED] Apply Magic Sauce #166

Open kayali436 opened 8 months ago

kayali436 commented 8 months ago

From Module-JS2 created by SallyMcGrath: CodeYourFuture/Module-JS2#52

Link to the coursework

https://applymagicsauce.com/demo

Why are we doing this?

Companies are very interested in the data provided by software like Apply Magic Sauce. Automated language analysis is already being used in hiring decisions. Applicant Tracking Systems (ATS) are used by 97% of Fortune 500 companies.

Apply Magic Sauce is a machine interpretation of personality based on how a person writes a sentence in an email or the kinds of content they like on social media. This is great when it gets the personality traits right 99% of the time. But what if it goes horribly wrong and ruins lives?

Choose one thing that Apply Magic Sauce can do that you don't like (or like the least). Describe what it is and say why you don't like it. (250 words)

Below are some starting points for you to think about:

  1. What happens when human beings rely on software to make consequential judgments about human beings?
  2. Are algorithms more or less biased than people?
  3. What are the consequences of surveilling people in this way? How does this affect how people talk and act online?

Maximum time in hours

1.5

How to get help

Undertake the demo at home and then discuss it in class. Developers should be literate citizens of the internet and understand the consequences of gathering and analysing personal data. You may also like to look into some short courses on GDPR on Udemy.

How to submit

Post your analysis in your class channel thread, and join the discussion. Post a link to the thread on this ticket.

kayali436 commented 8 months ago

Apply Magic Sauce, a predictive analytics tool developed by Cambridge University's Psychometrics Centre, raises concerns about relying on software for consequential judgments about human beings. One significant issue is the potential for algorithms to reinforce and amplify existing biases. While algorithms can process vast amounts of data quickly, they are not immune to biases inherent in the data they are trained on. If the historical data used for training contains biases, the algorithm can perpetuate and even exacerbate them, leading to unfair or discriminatory outcomes.

When human beings rely on software like Apply Magic Sauce for consequential judgments, there is a risk of abdicating critical decision-making to opaque systems. This lack of transparency makes it hard to understand how specific conclusions or predictions are reached, and therefore difficult to hold the system accountable for errors, biases, or ethical lapses.

Moreover, the widespread surveillance and data collection that such predictive tools depend on raise privacy concerns. Continuous monitoring of individuals' online behaviour can have a chilling effect on free expression: knowing their actions are being scrutinised, people may become more cautious about what they say and do online. This surveillance can foster a culture of self-censorship, hindering open discourse and the free exchange of ideas.

In essence, while predictive analytics tools can offer valuable insights, their use in consequential decision-making requires careful consideration of the implications for ethics, transparency, and privacy to ensure fairness and accountability.