patcg / proposals

This repository is to discuss proposals before they've been spun off into their own repository.

Why would notice and consent not be adequate? (Notice and consent debate) #5

Open jwrosewell opened 2 years ago

darobin commented 2 years ago

A person's autonomy is their ability to make decisions of their own volition, without undue influence from other parties. People have limited intellectual resources and time with which to weigh decisions, and by necessity rely on shortcuts when making decisions. This makes their preferences, including privacy preferences, malleable and susceptible to manipulation. A person's autonomy is enhanced by a system or device when that system offers a shortcut that aligns more with what that person would have decided given arbitrary amounts of time and relatively unlimited intellectual ability; and autonomy is decreased when a similar shortcut goes against decisions made under such ideal conditions.

The failure of notice and consent regimes is widely documented (as has been pointed out to you several times over). Here is a very cursory list from the primary literature:

I have spent way too much time reading on this topic and I am not aware of a single study showing that notice and consent is ever effective except in very specific cases that do not include persistent capture.

jwrosewell commented 2 years ago

And yet GDPR allows for and provides guidance on consent.

All the businesses that are affiliated with people in this group rely on consent to conduct their digital business.

On what basis would this group decide that a recognised law concerning consent was not good enough? This is a fundamental question.

darobin commented 2 years ago

If extensive scholarly investigation spanning decades and leading to peer-reviewed scientific consensus published in top-tier journals does not convince you, I don't see what will. It's not the job of this group or any other group to deal with tactics that amount to climate change denialism.

Furchin commented 2 years ago

Today's meeting made a broad unsupported assertion on the topic that consent and notice is not adequate and I also wanted to learn more about the premise behind this assertion. Responding with a list of sources -- no fewer than four of which literally use notice and consent themselves in presenting the information -- and then comparing the asking of a reasonable question to climate change feels like a poor approach to starting a productive conversation.

I'd love to learn more about what would be considered sufficient, but I also don't want to be vilified for asking the question.

darobin commented 2 years ago

It's great that you want to learn more about the problems with notice and consent regimes. I didn't just provide a list of sources, I provided a short summary of the issue and a list of sources to back that up for people who have a sincere interest in digging into the problem themselves. I'm not exactly sure what more you expect beyond a concise answer and a list of further reading.

I have to point out that it's not four of the sites hosting those sources that rely on notice regimes; it's all of them. They don't do that because it provides better privacy; they do it because they are legally required to. In many legal regimes, such as the US and the EU, it's the primary way that companies protect themselves against privacy claims from their users.

No one is being vilified here. James has asked the question many times and has systematically ignored the research. The comparison with climate change denialism stems from a pattern of interaction.

If your question is sincere, which is my starting assumption, that's great. However, I'm not sure that I have a better plan to offer you than either to trust those of us who've done our homework in order to work on this — and found that it does not work — or to do your own reading and review of the literature, for which I have provided some pointers.

This isn't a novel problem, it's a topic that has animated the field for about fifty years now. It's the reason why newer proposed laws are moving away from it (often keeping some notice, typically a privacy policy, but replacing consent with more effective mechanisms like outright prohibitions). To take an example that I just happen to have in a nearby tab, there's Consumer Reports' model law that has a quick section on the ineffectiveness of consent.

jdelhommeau commented 2 years ago

@darobin you seem to assume that notice and consent regimes can't coexist with effective mechanisms. I don't believe this to be the case. I think that while Privacy Enhancing Technologies are definitely a good thing for users, that doesn't mean notice and consent are no longer required. The foundation of privacy under GDPR is transparency and control. If you build a complex system which eventually prevents bad actors from doing things they weren't allowed to do in the first place, but the user doesn't understand it, I don't think this will help in the long term with restoring users' confidence in the internet.

That said, I imagine we could argue about the above and not reach consensus, so it's not a great way to spend time and resources. However, even if we don't necessarily agree on the above, it remains that we must work within the scope of the law. As you said yesterday in the call, a standard must be more restrictive than laws, otherwise no one would, or at least should, use it.

In EMEA, device access requires consent and notice. Personal data processing is more flexible with regards to legal basis, but if we assume either consent or legitimate interest, both require at the very least transparency and control. As per @martinthomson's presentation yesterday, if we start with the assumption that those new mechanisms should be "on by default with opt-out", then we are designing solutions that will not work for EMEA. I don't think this is right. And we can't assume that regulation will eventually evolve (ePrivacy Regulation) to meet our needs. We must work within the existing constraints, and eventually adapt our solutions as the law evolves.

notImposterSyndromeIfImposter commented 2 years ago

I don't think the argument is that we shouldn't gather consent or notify users. The argument is that the current state of consent gathering is terrible and insufficient unto itself. In practice it is people clicking a bunch of stuff they don't understand to get to the content they want.

I think the discussion of "on by default" can be separately debated and it's a good call out that in some jurisdictions might be a non-starter.

anderagakura commented 2 years ago

@darobin The papers you've shared are interesting, really. For example, from the first one "a method of privacy regulation which promises transparency and agency but delivers neither" below is an interesting part :

Fundamentally, “notice and choice” is a misnomer when few privacy notices offer sufficiently meaningful information capable of influencing the user’s ultimate decision, and when a choice of whether to accept all the terms offered or simply seek a different product is often no choice at all. Notice and choice has been roundly criticized by policymakers, academics, social scientists, advocates, and others for quite some time, and with good reason. The idea that a generic description of a company’s practices could possibly provide a sufficient disclaimer as to what data a company collects and how the data is used begs credulity; considering that the description is generally written in ten-point font and inscrutable legalese, is buried on the company’s website, and is one of an unmanageable number that individuals encounter in a day, the proposition is laughable. People encounter so many privacy policies in their daily lives that it would be irrational to read each of them—one study calculated that it would take the average person 200 hours per year. There are also all kinds of cognitive phenomena that prevent individuals from obtaining meaningful information from privacy policies in the way that a notice and choice regime assumes they do, such as hyperbolic discounting and optimism bias.
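(As a back-of-envelope check on that "200 hours per year" figure, here is the arithmetic with illustrative inputs; the numbers below are assumptions for the sake of the calculation, not the study's exact figures.)

```ts
// Illustrative back-of-envelope for the "~200 hours per year" claim.
// All inputs are assumptions, not the study's exact figures.
const sitesVisitedPerYear = 1500; // unique sites with distinct policies
const wordsPerPolicy = 2500;      // typical privacy-policy length
const readingSpeedWpm = 250;      // average adult reading speed

const minutesPerPolicy = wordsPerPolicy / readingSpeedWpm;          // 10 min
const hoursPerYear = (sitesVisitedPerYear * minutesPerPolicy) / 60; // 250 h

console.log(`~${hoursPerYear} hours/year just reading privacy policies`);
```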

The research shared shows that many people find issues with "notice and choice". True indeed. During the meeting, it was recalled that the purpose of this group is to focus on technical ideas and apply the laws (not to write legislation or policy). True as well. But we have the opportunity to shape (re-shape) products, all within the scope of the law, to protect users' privacy. People do not understand cookies: there are too many pages to read and understand, sometimes really technical, etc.

As it's our role to help the advertising ecosystem evolve, I think it's also our role to make sure the user gets all the information (not simple, for sure, but we have to) via a clear, transparent, readable, accessible and controllable mechanism. Otherwise, some areas could face issues using these technologies (e.g. EMEA with GDPR) and we would reproduce some issues from the past.

lknik commented 2 years ago

Simple question.

Assuming that we have a true privacy-preserving scheme (no personal data processed, then), under what circumstances would consent for 'data protection' be needed? What would be its role, and to whom should it be granted?

P.S. Obviously it's a separate issue from user autonomy.

jdelhommeau commented 2 years ago

There will always be personal data processing, just by fewer parties. Taking Topics as an example, though the same logic applies to all proposals, I believe: before, hundreds of vendors would track users across domains to infer their interests. With the Topics API, the browser (so Google Chrome, for example) is doing this personal data processing. The fact that the processing happens in the browser instead of on a third-party server doesn't change the fact that personal data processing is happening, and as such, it falls under GDPR. In that case, it means that Chrome will require some legal basis (likely consent) in order to do it. Maybe a more recent example: the Belgian DPA ruling against IAB Europe last week. As per this ruling, the ability to evaluate a user's choices about their data processing is itself personal data processing and requires a valid legal basis. So there is no escaping GDPR.

Finally, GDPR is only one of the applicable laws. The other, ePrivacy, regulates device access, whether it is for personal data or not. Since most solutions out there (IPA, Topics, FLEDGE, PARAKEET, etc.) require device storage (so device access), they are all subject to consent under current law.
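(For concreteness, the browser-side processing under discussion is exposed to sites roughly like this. This is a hedged sketch of the Topics API surface as proposed at the time; the shape of the returned objects is an assumption and may differ.)

```ts
export {};

// Hedged sketch: the Topics API as proposed exposes browser-derived
// topics to the calling site. `browsingTopics` is not in standard DOM
// typings, hence the declaration below; the result shape is assumed.
declare global {
  interface Document {
    browsingTopics(): Promise<Array<{ topic: number; version: string }>>;
  }
}

async function readTopics(): Promise<void> {
  // The browser, not a third-party server, computed these from history.
  const topics = await document.browsingTopics();
  console.log(topics); // e.g. [{ topic: 123, version: "chrome.1:1:2" }]
}
```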

lknik commented 2 years ago

By topic do you mean the Topics API? In that case there's no question that personal data may be processed, and consent is needed. What I meant was actual privacy-preserving tech where no personal data is processed.

I agree with the ePrivacy take, though. But it would be tough to decide who should ask for the consent, and where (while browsing a website? why, if conversions/fetches happen later?) in the context of Turtledove. So I'd like to highlight that this will be a big headache (and I know what I'm talking about here), because in terms of "consent" it is necessary to establish WHO should get the consent.

My initial question was more generic, not aimed at any specific proposal.

michael-oneill commented 2 years ago

The company/entity that manages the domain where storage is being used needs to obtain consent. Incessant requests for that should be outlawed, and/or protocols developed so browsers can detect exempt storage and block the rest unless the user agrees.

jdelhommeau commented 2 years ago

Topics was really just an example, but you can do a similar analysis for all the solutions out there:

alextcone commented 2 years ago

I have a few observations and bucket them into general, standards, legal concepts, and strategy.

General

Standards

Legal Concepts

Strategy

jdelhommeau commented 2 years ago

I am not sure if some of your notes above are addressed to my previous comment, @alextcone, but I wanted to clarify my position so it is not misinterpreted. I am not saying that the status quo is good. I believe there can definitely be an improvement to users' privacy on the web using some of the privacy-enhancing techniques that were discussed during the call yesterday. However, my only point is that we shouldn't assume that those new techniques will allow us to bypass existing laws. So we need to make sure that whatever we come up with can work within the current framework of the law. While I wouldn't risk making broad assumptions about GDPR / ePrivacy interpretation, I believe it is fine to assume that device access requires consent under ePrivacy and that personal data processing requires a legal basis (likely consent or legitimate interest in our position). So unless we have reason to believe 100% that none of the solutions we come up with will require device access, or that legitimate interest will perfectly satisfy any PD processing, starting from the assumption of "on by default with the possibility to opt out" likely means those solutions will not be available for the EMEA market. At least in the current state of the law.

alextcone commented 2 years ago

My observations were primarily focused on the title of this Issue, which asks "Why would notice and consent not be adequate?" Yes, I was also reacting to the binary nature of your legal opinion on ePD and the GDPR, @jdelhommeau, not because it is my opinion that you're wrong, but because I think it has the potential to be counterproductive. There's a strawperson being created here which seems to suggest that the prevailing opinion is that privacy/data-protective systems, technologies, and standards don't fall under the law. I do not think that is the prevailing opinion. I realize @lknik asked the question, but that question is not representative of what I believe to be the likely conclusions many legal professionals are arriving at right now.

Bigger picture, I want us to avoid over-indexing to any one law or legal concept. There are several reasons for that. But the main one is I think what PATCG is trying to do and what other standards initiatives are trying to do can co-exist and, importantly, not break laws.

darobin commented 2 years ago

To add to @alextcone's excellent notes above, I wanted to clarify a few points.

The important point here is that notice and choice does not provide adequate protection and does not enable people to make decisions that correspond to autonomous choice. What this means is that, as a standards group, we cannot create a potentially dangerous piece of technology and then shirk our responsibilities by saying "it's okay, we'll just prompt the user for their consent." Not only is that an established bad practice in privacy, it is also an established bad practice in browser UI. The Web community has been struggling for years, nay, decades with permission systems for powerful capabilities and it remains an unsolved problem. Notice and choice is to privacy what ActiveX prompts were to security. We might just be better off if we don't reproduce the same mistakes we made in the 90s.


That we will design systems that are safe without notice and choice doesn't mean that we can magic notice and choice away when it is legally required. Without getting into details, there are definitely potential conflicts between the ePrivacy Directive and some privacy-enhancing techniques. The fact that ineffective laws exist does not lessen the value of producing technology that delivers utility while being private by design. If and when ePD is an issue, we can explore options. One would be to ask legislators to carve out exemptions for specific PET designs. Another, if we can demonstrate credible enforcement in the system, could be to bring an Article 40 Code of Conduct to a DPA or the EDPB that would waive (as is already done in some cases) local storage requirements. It's too early for that now, though; we can cross that bridge when we get there. So @jdelhommeau, I think we are all agreed on this?

@anderagakura I'm glad that you enjoyed Barrett's article, I find her to be a very effective (and funny) writer. I agree with you that we will still need transparency. The point here is not that we should eliminate transparency (or even choice, when it is useful and effective), but rather that we shouldn't rely on transparency or choice to make a system safe.

It's like the Linux Problem: being able to tinker with something is liberating, being required to tinker with a thing before it can be useful is alienating.

I believe that we have consensus on the following statements:

Unless there is an unresolved objection in substance, I propose that we close this issue. This point is discussed in the TAG's privacy document (forthcoming) and therefore I don't think that we need to capture it in the PAT-specific principles document.

michael-oneill commented 2 years ago

The consent requirement (as in ePrivacy for storage access, or in the GDPR as a legal basis for processing personal data) does not require a user prompt; it just means the user has to agree before the access happens. That could be as simple as a setting in the browser, such as DNT or GPC (actually its reciprocal; see the sketch after this comment). Also, saying "..if and when ePD is an issue" is not a good way to encourage policy makers to create an exemption. The ePD has been around in one way or another since 2002, and the opt-in requirement (for storage access, which is necessary for tracking) has been in force since 2011.

Better to assume an opt-in for now, and try and get an exemption later when this has proved itself in practice.
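(To illustrate the kind of setting being suggested: a minimal sketch of a server honouring a GPC-style signal. Only the `Sec-GPC: 1` request header comes from the actual Global Privacy Control proposal; the handler shape is illustrative, and the consent setting proposed above would be the reciprocal, default-off signal.)

```ts
// Minimal sketch of a server honouring a GPC-style browser signal.
// Only the `Sec-GPC: 1` request header is taken from the Global
// Privacy Control proposal; everything else here is illustrative.
function userOptedOut(req: Request): boolean {
  return req.headers.get("Sec-GPC") === "1";
}

function handle(req: Request): Response {
  if (userOptedOut(req)) {
    // Skip any processing that would need a consent basis.
    return new Response("content without personalised processing");
  }
  return new Response("content, with processing per its legal basis");
}
```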

jwrosewell commented 2 years ago

I will close this particular comment thread after the meeting. It has formed a useful "scratch pad" for debate between meetings. The chairs may wish to include a reference to it for the record.

I do not consider the subject closed as @darobin suggests for at least the following reasons.

There are many stakeholders that have not had an opportunity to contribute to the debate, particularly those in Europe, who are underrepresented in the meetings.

Regarding the broader subject of privacy policy.

The TAG document is not yet finalised and there is still an opportunity to balance it so that it becomes more broadly acceptable. Issue number 106 is an example of such an issue. I'm concerned about privacy absolutists dictating the direction, and about the competitive impact of a position that gives internet gatekeepers licence to create information asymmetries.

Ultimately I believe we need to agree on a range on the following scale from the Model State Privacy Act, embed that range in the charter, and then ensure we stick to it.

[image: scale of positions from the Model State Privacy Act]

Thank you @darobin for sharing the documents. I have only skim read this particular document so I’m not in a position to comment on the broader content.

darobin commented 2 years ago

James, this is not a "debate", no matter how many tricks you may use, such as renaming the issue after the fact to try to create deceptive optics.

The matter of whether notice and choice provides adequate protection is clear. This has been shown to you repeatedly. At no point have you provided the slightest shred of evidence to indicate otherwise. No one in this issue has challenged this fact, people have simply been discussing how that fact interacts with other aspects of the world and helped clarify the position.

As always, if someone has new information relating to the effectiveness of notice and choice, it should be looked at.

I would add that, as you have been told several times previously and as documented above, notice and choice regimes have noted anticompetitive effects. If you had a sincere interest in addressing competition issues, you would consider notice and choice as problematic.

Privacy is the set of rules that govern flows of information. "Privacy absolutists" is a meaningless term that just serves to stir up FUD, which for the record I note is another trolling tactic.

jwrosewell commented 2 years ago

FWIW

"Privacy absolutism" - 106

I used "privacy absolutist" to mean someone who advocates for "privacy absolutisim" as explained by @jbradleychen.

darobin commented 2 years ago

Referencing someone else's use of a term does not make it meaningful or any less of a trolling tactic. But you knew that.

jbradleychen commented 2 years ago

For the record, I am still worried about the risks of privacy decontextualized from the ethical web. I apologize that I have not been able to keep up with progress on this draft. I would like to be confident that we are working together towards excellent privacy and excellent safety. Unfortunately such an outcome is not inevitable, and the tone of these last few comments does not give me confidence that we are converging on such an outcome.

I will try to find some time to read through it again in the next few days.

darobin commented 2 years ago

@jbradleychen I am confident that we remain aligned on these goals. It will take time to solve these issues but the conceptual framework we have is conducive to it since it treats information unsafety as a privacy issue. We can also make more constructive progress in TAG repos because @jwrosewell is banned there for exactly the kind of behaviour exhibited here. It's easier to be productive without concern trolling, sealioning, evidence denialism, someone trying to stir up dissent that isn't there, invoking disagreement from parties that aren't there, renaming the issue after the fact to try to present it as controversial, etc.

Regarding the topic at hand, I don't believe that there are known safety benefits from notice and choice?

jdelhommeau commented 2 years ago

@darobin , I agree with you on most points:

anderagakura commented 2 years ago

@darobin Thanks for your reply. Just for my own knowledge, I will keep reading.

+1 with @jdelhommeau: being opted out by default is essential for EMEA unless the default setup fulfils the requirements (e.g. no device access, geo...)

@michael-oneill : If the idea is to be product-focused first, I understand. But the risk is that, by doing that, we will reach a point where we need to deploy it in order to test it. Last year, FLoC was deployed to be tested. It was tested in various areas, but not in EMEA, because it was not compliant with GDPR restrictions. Thus, we could reproduce the same scenario.

Better to assume an opt-in for now, and try and get an exemption later when this has proved itself in practice.

michael-oneill commented 2 years ago

If the idea is to be product-focused first, I understand. But the risk is that, by doing that, we will reach a point where we need to deploy it in order to test it. Last year, FLoC was deployed to be tested. It was tested in various areas, but not in EMEA, because it was not compliant with GDPR restrictions. Thus, we could reproduce the same scenario.

Yes, testing in EMEA will need an opt-in, but the browser that does it could also provide a consent setting (defaulted to off, of course, i.e. GPC:1 as Brave does).

michael-oneill commented 2 years ago

The consent setting could be a new one that does not even have to be communicated in a header, because the browser itself could just block sending the MPC info unless the user had explicitly agreed.

darobin commented 2 years ago

@jdelhommeau I think we're entirely agreed. The way I look at it, we design tech that is safe by default, and that can also be turned on & off. In some cases we might need to support opt-in, in others opt-out (eg. objection to processing), for legal reasons (whether those legal reasons help with privacy or not). My assumption in GDPR worlds is that even if we can make the case for a CoC that allows this processing under LI, we'll still have to support objections to processing anyway.

@anderagakura Let me know if you find interesting things! Always happy to discuss.

@michael-oneill If consent remains the basis, the browser could obtain consent for its own processing but the processing of others could prove a challenge in some cases (see TCF). This is completely a side-track, but note that if the browser obtains consent for its own processing and sharing, that would imply that the browser is a controller. Following that path, you quickly reach the point at which the browser would need a legal basis for instance to provide non-essential cookies. I've long said that this is a plausible legal theory under the GDPR, but I think that a legal requirement to put an ATT-style dialog in browsers would make some waves :)

michael-oneill commented 2 years ago

@michael-oneill If consent remains the basis, the browser could obtain consent for its own processing but the processing of others could prove a challenge in some cases (see TCF). This is completely a side-track, but note that if the browser obtains consent for its own processing and sharing, that would imply that the browser is a controller. Following that path, you quickly reach the point at which the browser would need a legal basis for instance to provide non-essential cookies. I've long said that this is a plausible legal theory under the GDPR, but I think that a legal requirement to put an ATT-style dialog in browsers would make some waves :)

The browser does not need to ask for consent for that; it just should not send the reports unless the user has changed the default setting to "on" for the particular MPC domains.

Sites might ask users to change it, but that's why a non-communicated setting is a good idea: sites will never know that a particular user has not opted in. Sending prompts to everybody all the time will just ensure their sites are avoided.

The setting needs to be domain-specific, i.e. the user agrees that a particular set of MPC servers can get reports, so the browser cannot be accused of sending the data to all controllers (see the sketch below). It can remind users via suitable UI, maybe a well-understood indicator, when they have given consent: what MPC helpers get the reports, identity information about the managing entities, etc.

There is already a facility in the ePD for allowing storage use when strictly necessary to fulfil a purpose requested by the user, and the browser can claim a GDPR A6.1 (consent) legal basis for processing PD for this purpose.
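(A minimal sketch of the browser-side gate described above, assuming a local, never-transmitted, per-domain store that defaults to off; all names and types here are hypothetical, not from any actual proposal.)

```ts
// Hypothetical browser-internal consent store: per MPC origin,
// default off, changed only through browser UI, never sent to sites.
const mpcConsent = new Map<string, boolean>();

function setMpcConsent(origin: string, allowed: boolean): void {
  mpcConsent.set(origin, allowed);
}

async function maybeSendReport(origin: string, report: Uint8Array) {
  if (mpcConsent.get(origin) !== true) {
    // Silently drop: sites can never observe that a user has not
    // opted in, which removes the incentive to nag with prompts.
    return;
  }
  await fetch(`${origin}/report`, { method: "POST", body: report });
}
```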

joshuakoran commented 2 years ago

When the browser is processing people's personal data for B2B processing purposes (e.g., advertising use cases), shouldn't the browser obtain the appropriate notice and consent from consumers to do so, unless operating under a separate legal basis (at least in EEA regions)?

I agree with the above that the output of such B2B processing purposes might NOT need further consent for a recipient of to receive the outout of this process (e.g., aggregate data in an attribution report).

michael-oneill commented 2 years ago

That is what the law in Europe requires, and it is very unlikely to change. At least in the early stage there will have to be a browser setting that disables the sending of events by default. There may be a case for a protocol to allow sites to request a prompt, but this would have to be carefully designed to stop misuse, i.e. a hard browser limit on how often it can occur, etc. As the sending of events is up to the browser, there fortunately does not have to be a header signal communicating to servers whether the user has agreed or not.
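(A sketch of that "hard browser limit" idea, assuming a simple per-origin quota; the 30-day interval and the names here are invented for illustration.)

```ts
// Hypothetical per-origin rate limit on site-requested consent prompts.
const lastPromptAt = new Map<string, number>();
const MIN_INTERVAL_MS = 30 * 24 * 60 * 60 * 1000; // assume once per 30 days

function mayShowConsentPrompt(origin: string, now = Date.now()): boolean {
  const last = lastPromptAt.get(origin);
  if (last !== undefined && now - last < MIN_INTERVAL_MS) {
    return false; // over quota: the browser refuses to re-prompt
  }
  lastPromptAt.set(origin, now);
  return true;
}
```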

nlongcn commented 2 years ago

I wonder if the entire debate around consent might be a red herring. As this debate suggests, the current consent model is broken. Somewhere between the 1st and the 100th pop-up, most consumers seem to slip into a routine of clicking to close and moving on. And I am not sure any design, language, or interpretation makes much difference. The regulators are coming after the entire consent model, as you well know.

It seems that for the existing ad tech stack to work, you might need the broadest possible consent. How do you get that from consumers when their experience is so poor?

What if, rather than seeking to improve the consent process, you focus more on control? Consumers probably don't mind sharing information, especially with trusted brands, but it is the lack of control that niggles more. I appreciate the relationship between consent and control but surely the latter matters more? And if you get control right should the consent hurdle fall? If you build the right architecture around control then will it not be easy for consumers to reverse any and all consent? I suspect what worries most consumers is not the act of consent but the implications of the unfettered spread, use and abuse of their personal data. Build the right protocols around how their data is used and shared, provide them with transparency and control, and you might find them far more willing to consent.

michael-oneill commented 2 years ago

Repetitive prompts, and an ineffective response to their refusal, are the problem, not the need for consent (which is a human right).

The alternatives are to regulate the pop-ups, to ban the behavioural advertising business model, or to locate the consent aspect in the user agent. The first two require careful drafting and much better enforcement than we have seen in the past, though that is improving.

Fortunately the need for the algorithms to be executed in the browser means that it is the natural place to implement consent.

A simple user-defined browser setting, defaulting to off, would do it, perhaps augmented by browser-mediated and rare prompts on a per-site basis, with browsers competing to provide non-annoying UI for them.

darobin commented 2 years ago

We should make sure that we stay on topic (which the deceptive retitling of the issue might not be helping with). The question at hand and that warrants documentation in the forthcoming principles document is: does notice and choice provide appropriate data protection and privacy in such a way that it supports adequate levels of personal autonomy and aligns with the Web's ethical principles?

The answer to this question is a clear and resounding "no." The topic has been studied in great detail and notice and choice fails on all counts.

What does this mean? It means that technologies produced by this group cannot rely on notice and choice in order to claim to be appropriate in terms of privacy. What we deliver needs to intrinsically support privacy and not shirk that responsibility the way that consent-based systems do.

There are a few things that this does not mean:

jwrosewell commented 2 years ago

At least Google, the UK CMA, and 51Degrees agree that solutions must adhere to GDPR. GDPR allows for and requires consent. Browser vendors performing processing and using personal data under GDPR will need to gain consent to do this. Browser vendors that are required to avoid self-preferencing will also need to ensure others are able to obtain the same consent on an equal basis.

Consent can be problematic due to users' understanding of what they are consenting to, as @darobin points out. Examples I use include: gaining permission for something alongside a gazillion other things at the point of setting up a shiny new and expensive phone, or presenting so many options that people just give up.

But despite all these issues, consent is used by every digital service provider to operate their service (including GitHub for the posting of these comments).

Therefore I’m concerned by @darobin’s following statement.

The answer to this question is a clear and resounding "no."

If the scope of this group is to address the long-standing issues associated with consent, then perhaps it should be renamed the "Fixing Consent Community Group", and it would likely need to involve a wider group of participants.

If not, then we cannot limit the innovation of the group by placing an unjustified burden on proposers concerning the role of consent. Instead, we need to establish success criteria associated with the presentation, capture, and monitoring of consent.

I think we agree on the following.

I fear we do not agree that the widest number of entities should be able to participate and innovate, that entities should be free to decide what is and is not necessary, the suitability of the Ethical Web Principles (and other W3C doctrine) in general, alignment to laws, that we are all acting in good faith, and that all entities should have choice concerning the other entities they work with.

@AramZS as chair; how do we constructively identify and address these differences?

darobin commented 2 years ago

James, the best way for you to contribute constructively is to engage with the existing body of work that shows that consent does not, apart from rare cases, provide a sound basis for data protection or privacy.

The purpose of this group is to develop private advertising technology. If consent worked for the kind of data processing required in advertising, we wouldn't actually need this group. The technology to produce prompts inside of browsers actually exists. Since consent doesn't work, we need better solutions. This isn't "concerning," it's just the conclusion that is supported by the evidence — as indicated to you over and over again. Repeating the same discredited argument without novel material is not constructive.

That consent cannot provide a defensible approach for the kind of processing this group is working on does not mean that this becomes the "Fixing Consent Community Group." You know what else doesn't provide a defensible approach for the kind of processing this group is working on? Bourbonnais donkeys. That doesn't mean we need to rename this group to the "Fixing Bourbonnais Donkeys Community Group." We just won't rely on Bourbonnais donkeys, in the same way that we won't rely on consent. Claiming that we need to fix your preferred pet solution because it doesn't work isn't constructive. Making strawman arguments isn't constructive.

Having to deal with reality isn't an "unjustified burden." But hey, if you have something actually new and have somehow solved consent, then there's literally a couple of decades' worth of backlog in APIs that we can't expose because relying on consent is unsafe. It would be a revolution in Web technology.

BasileLeparmentier commented 2 years ago

Hi,

I won't comment on the subject, but I want to make a comment on the tone of this thread, which I have now seen repeated on many occasions. It looks to me very close to outright bullying of James, and it is personally making me really uncomfortable, and I am quite thick-skinned.

I would really appreciate it if we could go back to a more civil, but still frank, way of expressing our disagreements, with respect. With the tone here, I really don't feel it is possible to have constructive discussions.

This is a shame. Best, Basile

alextcone commented 2 years ago

@jwrosewell can you clarify if you are raising this Issue as an attempt to build a legal argument that someone involved in standards design is saying that new private ad technology standards do not need controls or transparency? I have not seen a single comment here that indicates any of us think that jurisdictional legal requirements can be magicked away.

@BasileLeparmentier - I appreciate you making the call for civility. I believe it is a two-way street. @jwrosewell's initial framing (and subsequent reframing) of the Issue makes me quite uncomfortable given the number of legal accusations he has openly associated himself with. That atmosphere makes productive contributions and discussions incredibly difficult and psychologically taxing.

jwrosewell commented 2 years ago

@darobin – Could you focus on confirming the four points that I thought we agreed on? I’m well aware that we do not agree on the role of consent. My position is simply that consent is used as the basis for the provision of digital services from publishing, web browsers, GitHub and everything else and that this group needs to include solutions that utilize consent in the most responsible way. This is a reasonable position.

@alextcone – We agree that jurisdictional legal requirements cannot be magicked away. This is perhaps a fifth point of agreement.

@alextcone – As soon as the confusion associated with retitling the issue was raised with me, I turned it back to the original question. I apologize for the disruption caused.

Overall, laws play a role in the solutions we are debating. We need to be open about that and ensure that they are considered. We will not find optimal solutions by considering the role of only a single profession.

darobin commented 2 years ago

@BasileLeparmentier I appreciate your message, but how long should civility be extended to someone who will then use it for nothing other than repeating debunked claims, casting insulting and unfounded aspersions ("I fear we do not agree that the widest number of entities should be able to participate and innovate"), sealioning, renaming the issue to reframe it after the fact, etc.?

If this were an entirely new situation, I would wholeheartedly agree with you Basile. But James's behaviour here fits a pattern that we've seen over and over again. Going back in time, when James loudly demanded that the TAG's security questionnaire be changed, many of us tried to engage on open-minded terms. But over time it surfaced not only that James continuously ignored every argument that was inconvenient to him but in fact had not even made the most cursory attempt at understanding the Web's security model and threats. He eventually managed to anger the chairs, both of whom are some of the kindest and most patient people in the community, and was banned from participation. Frankly, that takes a lot.

I have a huge amount of sympathy for the fact that the Web is a complex beast and that approaching standards is hard. Like many others, over the years I've worked to ensure that the community is more inclusive, more open, and that participating is easier. Anyone who needs support navigating this work can come to me (and to many others — I'm not special in that) and people do. But this requires at least two things: 1) the willingness to listen (even if it's to disagree) and 2) a commitment to engaging with prior art and doing one's homework. James has shown repeatedly that he is interested in neither.

People make mistakes, people learn, people grow. If James does sincerely want to adjust his behaviour, then the door is of course open. One basic adjustment would be accepting the fact that if consent had been shown to work, we'd use it. SMC is fun, but there is no self-respecting technologist who would consider using it in production if they could just use a prompt instead. That's just basic respect for the fact that others aren't dumb and are acting in good faith. In order to reject that elementary starting point, James has to come up with insulting conspiracy theories like "I fear we do not agree that the widest number of entities should be able to participate and innovate".

I wasn't joking when I said that solving consent would be a revolution in Web tech. The link I pointed to in my previous post here is to a group I chartered and chaired that had permissions management — of which consent is a part — as a key component of its scope. Many things were tried in that group, before that group, after that group; all have failed. It's solved for a few specific cases (eg. button-based pickers) but that's it. Maybe it would be respectful to engage with prior art at least somewhat before claiming one has the solution figured out?

At any rate, as Alex says respect and civility are two-way streets. I have much better things to do with my time than to have to push back against disrespectful behaviour and I would be delighted not to have to do this.

darobin commented 2 years ago

@jwrosewell A lot of things are used and yet don't work, consent is one of them. See copious prior work.

I can't imagine that this group will prevent anyone from using consent in the most reasonable ways, when those exist. There is however no known way to share a significant portion of cross-context reading history with a party, at scale, based on consent, and so the reasonable application of consent for this group is to not rely on it. I'm glad that you agree with this.

(Is it annoying when people claim that you agree with them when they know you don't? Maybe you should consider not doing that to others.)

I'm not sure what to make of the handful of potential properties of consent that you list. Is it your expectation that no one has thought of this before? Is there a specific problem in the space, based on your reading of prior art, that you believe you are bringing a novel solution to?

martinthomson commented 2 years ago

This thread really has gone past the point where I think it is adding value.

James asked why notice and consent (or choice; to allow for the ambiguity, I'll just abbreviate to N&C) is not sufficient and we've veered off into discussions of whether it might be necessary sometimes. Those little diversions seem to have been fruitful in terms of reiterating a few things that were worth repeating: namely, whatever we might standardize here won't give anyone a license to break applicable laws, so relying on N&C might be necessary in some cases.

As far as the point of deeper disagreement, I don't see evidence of agreement regarding the sufficiency aspect. That is, I don't see any evidence that N&C might be sufficient basis for privacy protection. Wading through the rhetoric here, I'm only seeing one person who might be asserting that N&C is sufficient.

Thankfully, we're chartered in a way that rules this debate moot:

Features that support advertising but provide privacy by means that are primarily non-technical should be proposed elsewhere.

That is, the charter assumes that this question has been decided. It would take a fairly creative reading of that text to lead someone to conclude that a system that relied on N&C - exclusively or extensively - for its privacy properties was in scope for this group.

BasileLeparmentier commented 2 years ago

Hi @darobin,

You seem to feel justified in bullying James, and I don't intend to delve into whether you are justified or not. I am no judge.

I will just note that the tone of the discussion, which is not new (and this is why I decided to speak up), is shocking not only to me, but also to other people with whom I have discussed it, who are not speaking up.

The consequence is that I am not able to speak my mind in threads where you answer like that, which stifles potential disagreement.

I won't comment further on this. Basile

alextcone commented 2 years ago

I see a host of dynamics just under the surface here. I’m doubtful we’ll resolve those dynamics via Issue comments. I’m hopeful 2022 will see an in-person meeting of PATCG so that we can better get to know one another. This group’s charter is just too important to get distracted with anything else.

darobin commented 2 years ago

Dear @BasileLeparmentier,

I am sincerely sorry that you feel this way. I have never wanted to bully anyone, but I believe it is also important to stand up to toxic individuals who constantly harass others and I am saddened to report that I know many who refuse to participate because of how James has harassed them in the past. This issue is not the place to discuss this topic further, but if you wish to reach out offline I would certainly be happy to speak.

For the topic at hand, as @martinthomson notes, this issue has not progressed since I proposed closing it 20 days ago, and no novel argument has been made that would establish consent as sufficient for data protection contrary to the intended charter. We should close and document.

jwrosewell commented 2 years ago

I assume the charter can be rewritten to make the position regarding alignment to GDPR clear. I believe there is benefit in that at least.

Regarding @martinthomson statement.

As far as the point of deeper disagreement, I don't see evidence of agreement regarding the sufficiency aspect. That is, I don't see any evidence that N&C might be sufficient basis for privacy protection. Wading through the rhetoric here, I'm only seeing one person who might be asserting that N&C is sufficient.

@darobin and @BasileLeparmentier have raised concerns about contributions. We can assume they act in good faith, and there are people and organisations that have not contributed to this debate. To draw the conclusion Martin does is premature, as it does not consider the input of those that do not contribute to public debate. It was for these reasons I raised this now closed issue concerning secret ballot and @jeffjaffe (W3C CEO) raised this issue concerning anonymous Formal Objection. @jeffjaffe has also agreed that where new members join and do not agree with prior consensus, there is no longer consensus. Therefore, how do we assess the views of the group so that these fears do not result in a vocal minority steering the group down a path that others do not agree with? This is particularly important concerning a matter that seeks to steer the group down a route that embraces a singular type of solution and as such risks limiting innovation that might otherwise result in a better solution for society and people.

jeffjaffe commented 2 years ago

It was for these reasons I raised this now closed issue concerning secret ballot and @jeffjaffe (W3C CEO) raised this issue concerning anonymous Formal Objection.

My concerns in #497 are quite different from this discussion.

jwrosewell commented 2 years ago

@jeffjaffe I should have clarified that there are times when anonymity is desirable and we should be able to handle that. Your specific concerns around the circumstances for anonymity in #497 are different from the concerns in this discussion.

My issue #469 is also relevant, attracted some debate at the time, and is being passed to the AB.

The key point still stands. We cannot conclude as @martinthomson suggests.

darobin commented 2 years ago

Kiran asked a good question on the public list that for some reason was not captured here as well. I am answering here to make sure we keep it in a single place. He asked if the issue with consent "is a limitation of browsers which cannot share significant portions of cross-context reading history at scale?"

The short answer is that this isn't a limitation of browsers but a limitation of what people can consent to through the kind of large-scale interaction that exists on the Web and through browsers. But if you don't have the background on this topic, I think that this answer won't be satisfactory. So I thought it would be helpful to provide a short backgrounder on consent so that not everyone has to read all the things just to reach the same conclusion. In the interest of brevity I will stick to the salient points regarding consent that have brought us to the present day; experts on the topic should of course chime in if they feel I've missed an important part.

Informed consent as used in computer systems today (and specifically for data processing) is an idea borrowed from (pre-digital) research on human subjects. One particularly important foundation of informed consent is the Belmont Principles, most notably the first principle, Respect for Persons. The idea of respect for persons is that people should be treated in such a way that they will make decisions based on their own set of values, preferences, and beliefs, without undue influence or interference that would distort or skew their ability to make decisions. The important thing to note here is that respect for persons is meant to protect people's autonomy in contexts in which their ability to make good decisions can be impaired.

The way this is operationalised in the context of research on human subjects is through informed consent. At some point, someone looked at this and realised that things like profiling, analytics, A/B testing, etc. look a lot like research on human subjects (which is true). And so they decided to just copy and paste informed consent over to computers, with the expectation that it would address problems of autonomy with data.

As often happens when people copy the superficial implementation onto computers but without the underlying structure that makes it work, this fell apart. First, one key component of research on human subjects is the Institutional Review Board (IRB), an independent group that reviews the research for ethical concerns. IRBs aren't perfect, but using an IRB means that in the vast majority of cases unethical treatment is prevented before any subject even gets to consent to it. Some companies do have IRBs (The Times does, as does Facebook for instance) but they can never be as open, independent, and systematic as they are in research. Second, the informed consent step is slow, deliberate, with a vivid depiction of risks. Subjects are often already volunteers. You might get a grad student sitting down with you to explain the pros and cons of participation, or a video equivalent.

What's really important to understand here is that informed consent is not about not using dark patterns and making some description of processing readable; it's about relying on an independent institution of multidisciplinary experts to make sure that the processing is ethical and on top of this independent assessment of the ethics of the intervention taking proactive steps to ensure that subjects understand what they are walking into. There are Web equivalents of informed consent — studies based on Mozilla Rally are a good example of this — but they work by reproducing the full apparatus of informed consent and not just the superficial bits that make the lawyers happy. Rally involves volunteering (installing an extension), gatekeeping to ensure that studies are ethical (eg. the Princeton IRB validated the studies I'm in), volunteering again to join specific studies and being walked through a description before consenting, and then strong technical measures to protect the data (like, it is only decrypted and analysed on devices disconnected from the Internet).

None of this scales to the kind of Web-wide data processing that is required to make our advertising infrastructure work (or to enable many other potentially harmful functions). People have tried, but as shown repeatedly by the research I linked to previously (and more generally all the work on bounded rationality), it doesn't work. What "doesn't work" means is that relying on consent for this kind of data processing means that you end up with a lot of people consenting when in fact they don't want what they are consenting to; they are only doing it because the system is directing them in ways that don't effectively align with the requirements of informed consent. (To give just one example, Hoofnagle et al. have found that 62% of people believe that if a site has a privacy policy, that means the site can't share their data with other parties. Informed consent means eliminating that kind of misunderstanding and then providing a detailed explanation of the risks. It's a steep hill and few people have the time for it.)

One possible reaction upon learning this is to not care. Some people will say "well, it's not my fault that people don't understand how privacy law and data work — if they don't like it, we gave them a 'choice'." But giving people a choice that you already know they will get wrong more often than not isn't ethical and doesn't align with respect for people.

As members of the Web community, however, we don't want to build unethical things. The Web is built atop the same ethical tradition that produced informed consent in research on human subjects: respect for persons. (We formulate it as putting people first, but it's the same idea.) Since we try our best to make decisions based on reality rather than on what is convenient, we can't in good conscience see that consent doesn't work and then decide to use it anyway. There is also a fair bit of evidence that relying on consent favours larger, more established companies which makes consent problematic from a competition standpoint as well. Because of this, it is incumbent upon us to build something better. (In a sense, we have to be the IRB that the Web can't have for every site.)

Is it technically possible to overturn this consensus? Of course. But we have to consider what the burden of proof looks like given the state of knowledge accumulated over the past fifty years that people have been working on this. Finding a lack of consensus requires more than just someone saying "I disagree"; it would require establishing that respect for persons is secondary (and reinventing informed consent on non-Belmont principles), or that bounded rationality isn't real, or high-powered empirical studies showing that people aren't tricked out of their autonomy, or some other very significant scientific upheaval. It might be possible, but we're essentially talking about providing a proof of the Riemann hypothesis using basic arithmetic: I don't believe that it's been shown that you couldn't do that, and there are very regularly people who claim to have done it, but it would be unreasonable to put anything on hold for that in the absence of novel, solid evidence.

I hope this is helpful for people who haven't been wrestling with this topic. What the charter delineates is helpful because it protects this group from walking down blind alleys that have been explored extensively with no solution in sight. If people find this kind of informal background helpful, I would be happy to document it more prominently.