(this comment is my own words)
There is a problem: these principles recommend both (1) protecting users from cross-context tracking where the two contexts are on different domains, and (2) protecting users from cross-context tracking on the same domain. But (2) is technically much harder, so partially implementing these principles tends to favor domains with more contexts -- and therefore larger companies.
My .02:
Many of the values-based discussions at W3C have lofty, aspirational goals -- enhancing privacy, preventing abuse, and so forth. However, as you point out, the tool we use (technology) has limited capability to regulate some behaviours, or even to discriminate between them.
That means that these goals are implicitly preceded by the clause *where possible*. *Where possible*, we will improve privacy. *Where possible*, we will prevent (or mitigate) abuse. And so on.
In the case of cross-domain tracking, improving privacy is not only possible with technology; it's hard to ignore that the privacy issues are enabled by technology under our (i.e., the standards community + implementers') control. Much of the discussion that's happening now is about how to do so in a way that minimises undesirable side effects.
Same-domain tracking is much less amenable to technical limitation by standards, as you and many others note. That's OK; there are other regulating forces in the world beyond technical standards, and those (increasingly active) regulators often have advantages where we do not. It isn't on our shoulders to prevent all abuses of privacy; our focus should be on those we can address.
It certainly doesn't mean we can't make real improvements to privacy without solving every other privacy problem on the Web simultaneously (knowing some are nigh impossible).
There's also an effects-driven argument lurking in there: roughly, "because this change will have the undesirable effect X, we shouldn't make it." We should think very carefully before shifting from a principles-based approach to an effects-driven one. Such appeals often assume that the current state of the world is "natural" or "right", when in this case it is entirely constructed (by browser vendors almost 20 years ago).
Sorry, @mnot, somehow I failed to link the PR that we think should close this. Does #227 make sense to you?
Think so, thanks.
> That means that these goals are implicitly preceded by the clause *where possible*. *Where possible*, we will improve privacy. *Where possible*, we will prevent (or mitigate) abuse. And so on.
>
> In the case of cross-domain tracking, improving privacy is not only possible with technology; it's hard to ignore that the privacy issues are enabled by technology under our (i.e., the standards community + implementers') control. Much of the discussion that's happening now is about how to do so in a way that minimises undesirable side effects.
>
> Same-domain tracking is much less amenable to technical limitation by standards, as you and many others note. That's OK; there are other regulating forces in the world beyond technical standards, and those (increasingly active) regulators often have advantages where we do not. It isn't on our shoulders to prevent all abuses of privacy; our focus should be on those we can address.
Concretely, I agree that wishful-thinking-like phrasing should be limited: voluntary technology standards can only go so far, and actually mandating policy or legal requirements is beyond their territory. (I also have a lengthier article on a related topic, values & tech, if anyone finds it of interest; it is not entirely on topic, but may be informative for the values-vs-tech debate.)
BTW, regarding this:
> However, the document makes an unfounded claim that information used to “predict” behaviour, or provide desirable information to “influence” behaviour, somehow magically transforms into a force that enables software to remove free will and all decision-making ability by “control[ling] people’s behavior.”
It's not just the TAG that makes this point; the UK government's Centre for Data Ethics and Innovation says that
> Online targeting has helped to put a handful of global online platform businesses in positions of enormous power to predict and influence behaviour.[^1]
[^1]: *Review of online targeting: Final report and recommendations* (Centre for Data Ethics and Innovation, February 2020).
@mnot It's worth noting as well that Cory is cited here out of context to give the impression that he might not be in favour of improving privacy… The original piece is about AI targeting snake oil. The concern in the principles is evidently broader, notably based on all the work on data-driven hypernudging.
This is section 1 of the document "Commentary on the draft W3C Privacy Principles" by Movement for An Open Web, attached below.
MOW agrees that privacy should not be used as a weapon by large entities to favour their own businesses through discriminatory practices by alleging that end users must be protected from the influence of so-called third parties.[1]
MOW further agrees that larger entities tend to have more power relative to both people and markets, and as such, people and markets must be better protected from abuses those entities may perpetrate given this situation.
“One of the ways in which the Web serves people is by protecting them in the face of asymmetries of power, and this includes establishing and enforcing rules to govern the power of data....”
Unfortunately, the document goes on to state the following.
It is well known that Google and Apple collect and process vast troves of personal data from individuals’ interactions with rival online businesses via their operating systems, app stores, and browsers.[3]
However, the document makes an unfounded claim that information used to “predict” behaviour, or provide desirable information to “influence” behaviour, somehow magically transforms into a force that enables software to remove free will and all decision-making ability by “control[ling] people’s behavior.”
A position MOW supports is summarized by Cory Doctorow, a leading online digital expert:
The authors of these Privacy Principles fortunately understand the dangers to individuals of increased centralization that technical standards can foster, and the benefits provided by competition.
Thus, when designing standards, we must ensure we understand the risk to
For any policy to have a chance at improving the situation, it is critical to understand both the problem we are addressing and how well the proposed remedy is predicted to address it.
The Principles claim to provide a “trustworthy platform” that supports a “Web for all,” which would require it to work for individuals and organizations of all sizes with which people interact. Accordingly, if a proposed principle “only benefits powerful, large entities that control both an implementation and services,” this would be “harmful to the Web.”[7]
Accordingly, we must analyse each principle listed as to whether it benefits online organizations of all sizes or (even unintentionally) favours larger entities and discriminates against smaller ones.
2022.05.31-MOW-response-to-the-W3C-Privacy-Principles-EB.pdf