Call-for-Code / Embrace-Policy-Reform

Emb(race): Policy reform. Utilize technology to analyze, inform, and develop policy to reform the workplace, products, public safety and legislation.

Problem 3 - Hills 2&3 - Bias in policymaking #2

Open AnupSamantaIBM opened 4 years ago

AnupSamantaIBM commented 4 years ago

Theme: As we are all aware, bias is an inherent human quality that helps us process information quickly and make swift decisions. When it comes to policy decisions made at the highest levels of government that affect millions of lives, bias in elected officials should be mitigated as fully as possible. Policies should represent what constituents want and need, not reflect the biases of elected officials managing their own politics and priorities.

Idea: My hypothesis is that bias possesses a level of consistency that doesn't really change unless there is real external pressure from other sources. My idea is to collect all public commentary that elected officials have made about racism, racial justice, and equality, going back as far as we can, and correlate the sentiment of those comments with any tangible actions they've taken to combat racism, racial injustice, and systemic racism (e.g., their voting record on anti-discrimination acts). From there we can identify patterns and anomalies across successors and party lines over the years. Where exactly can we point to consistency (bias) in legacy decisions that are resistant to change? What can we learn about those who have championed and driven change?
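
A minimal sketch of that sentiment-versus-voting correlation, assuming the Watson Natural Language Understanding Python SDK (ibm-watson). The API key, service URL, and the statement/vote record formats are placeholders, not part of any existing project:

```python
# Sketch: score the sentiment of an official's public statements with Watson
# NLU, then line them up year by year with votes on anti-discrimination bills.
from collections import defaultdict

from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions

nlu = NaturalLanguageUnderstandingV1(
    version="2021-08-01",
    authenticator=IAMAuthenticator("YOUR_API_KEY"),  # placeholder credentials
)
nlu.set_service_url("YOUR_NLU_INSTANCE_URL")  # placeholder URL


def statement_sentiment(text: str) -> float:
    """Return a document-level sentiment score in [-1, 1] for one statement."""
    result = nlu.analyze(
        text=text,
        features=Features(sentiment=SentimentOptions()),
    ).get_result()
    return result["sentiment"]["document"]["score"]


def yearly_consistency(statements, votes):
    """
    statements: [{"year": 2016, "text": "..."}, ...]  (public commentary)
    votes:      [{"year": 2016, "supported": True}, ...]  (anti-discrimination bills)
    Returns {year: (avg_sentiment, support_rate)} so words and actions can be
    compared side by side for patterns and anomalies over time.
    """
    sentiments, support = defaultdict(list), defaultdict(list)
    for s in statements:
        sentiments[s["year"]].append(statement_sentiment(s["text"]))
    for v in votes:
        support[v["year"]].append(1.0 if v["supported"] else 0.0)
    years = sorted(set(sentiments) | set(support))
    return {
        y: (
            sum(sentiments[y]) / len(sentiments[y]) if sentiments[y] else None,
            sum(support[y]) / len(support[y]) if support[y] else None,
        )
        for y in years
    }
```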

Uniqueness: I don't think this is especially unique. Watson has the ability to pull in both unstructured and structured data sets to help identify where bias is influencing policy and where elected officials are listening to constituents to drive change. Additionally, it's important to understand the process behind policy decisions: we should map the end-to-end legislation lifecycle, from the time a bill is created to the time it passes or fails. There are many opportunities for bias to slip into that process and build influence toward one-sided outcomes that don't benefit the populace.

Impact if implemented: Once Watson is able to pull in both structured and unstructured data sets to identify cases where bias is influencing policy, the results can be rendered through a dashboard or app where users can dive deeper into the behavior of an elected official. Imagine a screen with one pull-down menu of all active elected officials and one pull-down menu of all anti-discrimination legislation; beneath them are an empty word cloud and a line graph. When a user selects the elected official they want to investigate, the legislation menu shows all anti-discrimination bills reviewed during that official's term. After the user selects a bill, the word cloud populates with the key words the official has used to talk about it, and the graph lays out the official's voting record on the legislation. Between the data in the word cloud and the trends in the line graph, the user can make a much more educated judgment about the role and prevalence of bias in policymaking.
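
A rough sketch of the data that screen would need behind it: given one official and one piece of legislation, produce the word frequencies for the word cloud and the vote timeline for the line graph. The record layouts and field names here are hypothetical; a real build would pull them from Watson Discovery or a legislative-data API:

```python
# Sketch of the dashboard's backing query: word-cloud counts plus vote timeline
# for one (official, bill) pair. Input records are illustrative placeholders.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on", "that", "this"}


def dashboard_view(official, bill, statements, votes):
    """
    statements: [{"official": ..., "bill": ..., "text": ...}, ...]
    votes:      [{"official": ..., "bill": ..., "date": "2019-04-02", "vote": "yes"}, ...]
    """
    words = Counter()
    for s in statements:
        if s["official"] == official and s["bill"] == bill:
            for w in re.findall(r"[a-z']+", s["text"].lower()):
                if w not in STOPWORDS:
                    words[w] += 1
    timeline = sorted(
        (v["date"], v["vote"])
        for v in votes
        if v["official"] == official and v["bill"] == bill
    )
    return {"word_cloud": words.most_common(50), "vote_timeline": timeline}
```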


daveshack-ibm commented 4 years ago

Hi Anup, I think you have a good idea here.

Referring to hill 3 ("Voters can identify lawmakers that have voted on policies that show bias against a group"): I think you're actually going a little beyond the hill, which is fine, into analyzing lawmakers' public commentary for bias whether or not it's related to their voting on policy. Furthermore, you're considering change over time by examining historical commentary. I think the change-over-time aspect of the idea is important.

The impact statement is a bit too general. That's fine; the hardest part is coming up with a concrete wow factor. The hill points toward voters as the target consumers of the desired solutions. We want the solution to analyze the available information about a lawmaker or candidate and give some kind of indication of bias, right? Along with, perhaps, whether they're trending better, trending worse, or staying the same. The wow might be links to evidence, such as places where the lawmaker's language reveals bias that doesn't show up in their votes (remember, bias in voting is the original hill). Just thinking out loud here; hopefully this inspires you and others to see how it can do even more. :)
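
The trending indicator could be as simple as comparing an official's recent average bias score against their earlier years. The scores themselves would come from whatever bias analyzer the teams build; the window and threshold below are arbitrary placeholders:

```python
# Sketch of a "trending better / worse / staying the same" label from yearly
# bias scores (higher = more biased language). Window/threshold are arbitrary.
def bias_trend(yearly_scores, window=3, threshold=0.1):
    """yearly_scores: {year: bias_score}."""
    years = sorted(yearly_scores)
    if len(years) < 2 * window:
        return "not enough history"
    earlier = [yearly_scores[y] for y in years[:window]]
    recent = [yearly_scores[y] for y in years[-window:]]
    delta = sum(recent) / window - sum(earlier) / window
    if delta <= -threshold:
        return "trending better"
    if delta >= threshold:
        return "trending worse"
    return "staying the same"
```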

In terms of reusable pieces of code, this idea points in the direction of a speech/text analyzer that can look for explicit and implicit bias on the part of the speaker/writer. I think that function will be common across all of the teams / problem statements / hills.
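
One possible shape for that shared analyzer, using Watson NLU's targeted sentiment: score the sentiment a speaker or writer expresses toward specific group terms. The target list and the assumption that strongly negative targeted sentiment is a bias signal are placeholders for discussion, not a finished method; implicit bias would need more than this:

```python
# Sketch of a reusable bias-signal function built on Watson NLU targeted
# sentiment. GROUP_TERMS is illustrative only; `nlu` is an authenticated
# NaturalLanguageUnderstandingV1 client like the one in the earlier sketch.
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions

GROUP_TERMS = ["immigrants", "minorities", "women", "refugees"]  # placeholder list


def bias_signals(nlu, text, targets=GROUP_TERMS):
    """Return {group_term: sentiment_score} for each group term found in the text."""
    present = [t for t in targets if t in text.lower()]
    if not present:
        return {}
    result = nlu.analyze(
        text=text,
        features=Features(sentiment=SentimentOptions(targets=present)),
    ).get_result()
    return {
        t["text"]: t["score"]
        for t in result.get("sentiment", {}).get("targets", [])
    }
```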