IreneKnapp opened this issue 2 years ago
Also see my worries about "automated" in https://github.com/w3ctag/privacy-principles/pull/136#discussion_r821241811.
I'm not sure how to phrase this, though. We usually think of "discrimination" as meaning the inappropriate kind, but it's also discrimination to show a hotel ad to the person who just searched for a plane flight but not to the person who just searched for sofas. And it's discrimination to show an Ebony ad to the person you inferred is Black. So what can we say to divide the harmful kind from the probably-beneficial kind?
Maybe it hinges on the fact that people aren't required to exercise their rights, so this becomes a right to know when discrimination of any sort is happening, so that a person can decide whether they wanted that particular instance? But that could be overwhelming...
Thanks for raising this, and to Jeffrey for providing some of the context.
There are perhaps two slightly different ways that discrimination against marginalized people is relevant to our privacy principles.
First, we might see the ability of marginalized people to be freer from discrimination as a justification for why we care about protecting privacy. We want users to have privacy because there are well-known downstream harms that come from data being collected, shared, and subsequently used in discriminatory ways. In this sense, yes, there are nuanced questions about which kinds of distinction are discriminatory, or which count as the harmful kind of discrimination, but those might not be privacy questions; the fact that we know this discrimination happens is itself a reason why we want privacy. I think we should add this to the early sections of the document (1 and 1.1) about autonomy and the importance of privacy; there is already abstract language about this, but it would be good to be more direct about it.
Second, we might see a specific privacy right over one's data: the ability to be free from some kinds of decision-making. Historically, privacy in the computer age included specific objections to automated decision-making, and that has more recently been translated into specific laws, including the GDPR. I think that's separate from a justification, because it's more of a procedural right to exempt oneself from some processing, alongside being able to access, correct, or withdraw consent regarding data about oneself. Maybe the "automated" part of automated decision-making really is only a historical quirk and we should drop it from the document. But I also think that we shouldn't try to enumerate here every kind of unjust or harmful decision-making. We all have human rights to be treated fairly, but not all of those are best enumerated as privacy rights.
Section 2.4 enumerates various rights people may have pertaining to their data. People from marginalized backgrounds often have a concern to the effect of: If I provide this entity with my personal data, are they going to use it to discover my marginalized status and discriminate against me? There are relevant laws in some jurisdictions, and it's a moral right people might assert even in the absence of a legal framework. It seems like that should be captured here.
I see room for discussion as to what types of discrimination should be covered. Regardless of what I might wish, there are types of discrimination that it probably isn't realistic to expect the W3C to fully condemn. For example, a credit bureau cannot do its job without discriminating on the basis of income.
This is distinct from freedom from automated decision-making (which the current draft already addresses) in that it also applies to decisions made by humans.
This idea came up during a Twitter conversation, and I'm writing this issue in an attempt to capture the thought. Please feel free to ask clarifying questions and I'll try to elaborate. I also take no strong position on exactly what form the proposed right should take; I'm just trying to flag a gap that I see in what's there now.