responsible-ai-collaborative / aiid

The AI Incident Database seeks to identify, define, and catalog artificial intelligence incidents.
https://incidentdatabase.ai

Migration to Tag Reports Based on Political Bias #1638

Open · smcgregor opened this issue 1 year ago

smcgregor commented 1 year ago

Let's tag all the reports in the database according to their political bias ratings as determined by an outside authority, since we don't want to be responsible for determining the bias of different publications ourselves. This would then let us annotate the reports on an incident in the UI according to the bias of the reporting.

Example source data:

- https://adfontesmedia.com/
- https://www.allsides.com/media-bias/media-bias-chart

Steps

  1. [ ] Adopt an authority on political bias in reporting
  2. [ ] Write a migration to code the existing reports (see the sketch after this list)
  3. [ ] Introduce this dimension into the UI
  4. [ ] Automatically assign the tags to known publications on a continual basis
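
To make step 2 concrete, here is a rough sketch of what the migration could look like, assuming reports live in a MongoDB `reports` collection with a `source_domain` field and a `tags` array; the database/collection/field names and the hard-coded ratings table are placeholders, not the final design:

```typescript
// Sketch only: tag every report with the bias rating of its publication.
// Collection/field names and the ratings table below are assumptions.
import { MongoClient } from 'mongodb';

// In practice these would be loaded from whichever authority we adopt
// (e.g. an Ad Fontes or AllSides export), keyed by publication domain.
const BIAS_RATINGS: Record<string, string> = {
  'example-left.com': 'bias:left',
  'example-center.com': 'bias:center',
  'example-right.com': 'bias:right',
};

async function migrate(uri: string) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const reports = client.db('aiidprod').collection('reports');

    for (const [domain, tag] of Object.entries(BIAS_RATINGS)) {
      // $addToSet avoids duplicating the tag if the migration is re-run.
      const result = await reports.updateMany(
        { source_domain: domain },
        { $addToSet: { tags: tag } }
      );
      console.log(`${domain}: tagged ${result.modifiedCount} reports as ${tag}`);
    }
  } finally {
    await client.close();
  }
}

migrate(process.env.MONGODB_CONNECTION_STRING ?? '').catch(console.error);
```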
lmcnulty commented 1 year ago

These classifications inevitably have their own biases (see https://adfontesmedia.com/is-the-media-bias-chart-biased), so I think we should handle this similarly to the taxonomies. Instead of adopting "an" authority on political bias, we should support labels from multiple sources that classify publications by their political bias. This could also potentially help with coverage of non-English publications.
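
If we go the multi-source route, the underlying shape could mirror how taxonomy classifications work: each publication carries a list of labels, one per rating authority, and the UI shows them side by side rather than asserting a single "true" bias. A rough sketch (all names illustrative, not a schema proposal):

```typescript
// Illustrative shape for per-publication bias labels from multiple raters.
interface BiasLabel {
  source: string;       // rating authority, e.g. 'adfontes' or 'allsides'
  rating: string;       // that authority's label, e.g. 'center', 'lean-right'
  reliability?: number; // optional secondary score, if the source provides one
  retrievedAt: string;  // ISO date the rating was pulled
}

interface PublicationBias {
  domain: string;      // publication identifier, e.g. 'example-news.com'
  labels: BiasLabel[]; // one entry per rating authority
}

const example: PublicationBias = {
  domain: 'example-news.com',
  labels: [
    { source: 'adfontes', rating: 'middle', reliability: 42.1, retrievedAt: '2023-01-15' },
    { source: 'allsides', rating: 'center', retrievedAt: '2023-01-15' },
  ],
};
```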

Some other possible sources:

Janetbananet commented 1 year ago

@smcgregor I think this has broader implications for how we are generally structuring tags. I'll bring it up at an Editor Stand-up meeting.

smcgregor commented 1 year ago

@Janetbananet sounds good. I think this one will be entirely programmatic, but with the ability for people to intervene.
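
As a sketch of what "entirely programmatic, with the ability to intervene" could mean in practice: the scheduled job only ever writes the automatic tag, and an editor-supplied override, when present, takes precedence. Field and function names here are placeholders, not the actual data model:

```typescript
// Automatic assignment with a manual override that always wins.
interface PublicationTagState {
  domain: string;
  autoTag?: string;     // latest tag written by the scheduled job
  overrideTag?: string; // set by an editor; never touched by the job
}

function effectiveTag(state: PublicationTagState): string | undefined {
  // Editor decisions take precedence over the automated assignment.
  return state.overrideTag ?? state.autoTag;
}

// Example: the job updates autoTag on each run, but the override persists.
const outlet: PublicationTagState = {
  domain: 'example-outlet.com',
  autoTag: 'bias:lean-left',
  overrideTag: 'bias:center',
};
console.log(effectiveTag(outlet)); // 'bias:center'
```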