Open akakou opened 1 year ago
I proposed this PR here, thanks to the advice of privacy community group members.
@dvorak42
I found a similar proposal, #14. Do you think we should discuss it on #14?
Yeah, I think there are enough similarities that merging the threads/discussion might be good. Though if you're interested in presenting this particular proposal, we can wait to see if the CG thinks there are enough differences between the proposals to keep them separate.
OK! (I am interested in presenting it, and) I want to wait for the CG's decision.
Can you open an issue on https://github.com/antifraudcg/meetings/issues/new with how long you think you'll need for a presentation (to present and time for questions). This week's meeting is full but we can try getting you scheduled for one of the next couple meetings.
@dvorak42
OK! I'll make the issue.
Additionally, I have a question: can I present it a little later? (Actually, I expect next April.)
I need to learn how CG meetings work, and to find out whether my English is good enough to participate.
Yeah, presenting later is fine. We'll reach out to folks with open agenda items before every couple of meetings to see if they're able/interested in presenting at the coming meetings, so you can present whenever you feel prepared for it.
I'm sorry that I have not proceeded with this presentation. I recently realized that my English is not yet good enough to present this proposal, so I need more time to study English before I can explain it.
Fortunately, my English is improving rapidly. I should be able to present it this September... could you wait for me?
Doing a presentation in September would work.
@dvorak42 Thank you for waiting.
Could I present this topic in August? I expect I will be busy this September.
My English is still not sufficient, but I will try hard to arrange a translator, or use an AI translator in the worst case.
By the way, the negative reactions to this issue stand out.
Hello, @arm64-v9a, @scanlime, @TomasHubelbauer, @Conan-Kudo, and @BurnerWah. Thank you for sharing your honest reactions with me.
Would you tell me why you think it is not good? Your input will help me to improve it for future presentations, if you don't mind.
RFC8890 is a good place to start
@scanlime
Thank you for the link. I read it right away, and it is very interesting!
However, I don't fully understand your intention, due to my lack of background, so I have a question.
Did you mean that this proposal prioritizes the service provider (i.e., the website) over users, so it does not satisfy RFC 8890 and is therefore not good?
Yes, at a direct level this is taking control away from users and giving more to device manufacturers and web hosts.
It's also part of a larger project to organize the web around the motivations of large industry players instead of decentralized groups of individuals. The stated anti-fraud goals are not a requirement for making the web a better place, just a requirement for small numbers of entities to run vast empires with a minimum of human connections or oversight.
@akakou More concretely, as a Linux user, I can see this API being abused to essentially lock out my platform.
As a general piece of commentary, the various proposals made by this group are designed with the model that someone is going to be a valued, impartial, and reliable arbiter of access. In reality, this almost never happens. As a user and developer of Free Software platforms, it becomes harder and harder every day because features like EME and WEI are created by well-intentioned people who do not realize what they are creating.
It is often stated that the path to hell is paved with good intentions. Proposals like these fall very much in that camp.
I would urge you to evaluate the problem you're trying to solve with a simple question: "what happens if I have no voice and a need for something?" Providing ways to ensure people can't use something means they're ultimately foreclosed from it. There are very few legitimate reasons to enable that on the web.
@scanlime @Conan-Kudo
Thank you for answering my question.
I think I got it, except for some points. In short, you are afraid of centralization of the Internet, and that the GM may unfairly revoke users, right?
afraid
This is not a matter of emotion, you're taking concrete steps toward a more centralized internet.
Thank you. Now I understand why the negative reactions increased.
However, we cannot simply miss the fraud, even if preventing it conflicts with decentralization, because it relates to users' right to use a service without broken systems (e.g., game cheating). This is not an issue for the servicer only.
(Besides, many services already require SMS authentication from users for cybersecurity reasons such as game cheating, regardless of our proposal. We expect this proposal to reduce the use of such privacy-unfriendly identification schemes, so it may also contribute to user privacy.)
Therefore, a conflict exists between the risk of unfair revocation and users' right to a sound service (and to privacy). I think we need to minimize these risks and discuss the balance between these user rights.
For example, how about an altered version of the EPID scheme in which the verifier can only revoke users within the scope of its own service (i.e., the GM cannot revoke users at its own discretion)? It still carries a risk of unfair revocation by the servicer, but at least the centralization risk is lower than with pure EPID.
Once a feature is standardized, there is much less friction to change it incrementally to allow a GM to block users for whatever reason. The GM would not be allowed to revoke users at its discretion now, but that won't prevent it from doing so in the future when the spec is updated. That's how this has been done in the past: the feature is introduced neutered, and then quietly expanded, since the core implementation is already in place.
However, we cannot simply miss the fraud
You'll always have a conflict between what users need and what big corps want in order to keep users "safe".
I hinted at this above, but to elaborate: these are not technologies for keeping users safe in some abstract way, they operate in a world where "safety" is being defined by small numbers of companies with profit motives, instead of the people themselves.
A less centralized internet would also have a much higher ratio of platform admins/mods to general users, and more human connection can solve abuse problems much more thoroughly than any top-down bureaucracy however automated it is.
You can already see this playing out in social media. The largest sites all have huge amounts of automation that requires spying on everyone for "safety", and the smaller decentralized sites have mostly human driven moderation. Large sites have elaborate filters, and the smaller sites have networks of social trust.
This revocation API is fundamentally more political than technical. The question is about the shape of the internet's trust network, and the answer this proposal provides is a step towards a more authoritarian structure that prioritizes only "scale" and money.
A less centralized internet would also have a much higher ratio of platform admins/mods to general users, and more human connection can solve abuse problems much more thoroughly than any top-down bureaucracy however automated it is.
Honestly, I don't understand that. Might it depend on the type of service? (Besides, it seems to assume that operating in a decentralized way and not trusting third parties are the same thing, but I don't know why.)
How does it work in the game-cheating case? Would you explain why a game that does not trust a third party (and so cannot revoke) solves abuse problems (i.e., game cheating) better than one that can revoke?
Trust on the Internet could be a social concept instead of a technical one, just like it is offline. Many of the problems that you could solve by invading privacy or restricting people's freedom could also be solved by individuals forming trust networks with each other. If your friends are cheating at games, you could find better friends.
(Sorry, I forgot to reply; I will respond when I am free.)
Here are the details of the Q&A.
Borbala: Apple Device Check allows a developer 2 bits per device. A developer might set a bit that sticks. How does EPID relate to this type of system? Can you tell us a little bit about how that works?
Sorry, I do not understand the meaning of "sticks." For now, I will answer assuming you are asking about the difference.
Apple Device Check is a similar method, as you know, and its use case (revocation) is the same. However, EPID provides stronger privacy than Device Check, such as:
EPID can coexist with Device Check; for example, Device Check can serve as one of the unique resources.
Borbala: In systems like Device Check, if a device is marked bad, it is marked for the specific developer only. If EPID marks a device bad, is it bad for all relying parties or just a specific one?
It depends on our decision; we can implement it in either style. In the generic EPID style, the GM controls the revocation list, and revocations apply to all services, as far as I know. On the other hand, purely on the technical side, each service can use its own revocation list, different from the others' (though this is not well known, as far as I know; the reason is shown in Q&A 4). Note that the servicer cannot decide revocations arbitrarily without the user's agreement, because the signing algorithm needs the same list.
Most importantly, it is a trade-off between security and centralization, so we need to discuss it.
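As a toy illustration of the point that the signer and verifier need the same revocation list: the sketch below models only the shared-list requirement, not real EPID cryptography, and all names in it are made up.

```python
# Toy model (NOT real EPID): the signature commits to a digest of the
# revocation list shown to the signer, so a service cannot verify
# against a hidden, different list without verification failing.
import hashlib

def list_digest(revocation_list):
    """Canonical digest of a revocation list."""
    data = "\n".join(sorted(revocation_list)).encode()
    return hashlib.sha256(data).hexdigest()

def sign(message, member_secret, revocation_list):
    # Stand-in for the signing algorithm; the "tag" is not checked in
    # this toy, it only represents the opaque signature material.
    digest = list_digest(revocation_list)
    tag = hashlib.sha256(f"{member_secret}|{message}|{digest}".encode()).hexdigest()
    return {"tag": tag, "list_digest": digest}

def verify(message, signature, revocation_list):
    # The verifier must present the same list the signer used; this is
    # what makes the list auditable in principle.
    return signature["list_digest"] == list_digest(revocation_list)

public_list = ["sig-of-banned-user-1"]
sig = sign("hello", "member-key", public_list)
print(verify("hello", sig, public_list))          # True
print(verify("hello", sig, public_list + ["x"]))  # False
```

The design point is only that both sides share one published list; any attempt to swap in a secret per-user list changes the digest and breaks verification.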
Steven: In the system and threat model slide, a single ID is assumed; on the web, a user might have multiple IDs, can just create a new e-mail, etc. Should EPID still be required to be tied to some limited resource?
That is true. To use EPID, we need to discuss which limited resource we will use.
Michael: When each service manages its own revocation list, how do we prevent multiple colluding services from randomly revoking keys they have not seen, thereby creating identifiers for users?
It is an interesting question, and I read the blog post Michael shared. I think an attack similar to HSTS tracking is also possible in the EPID context.
Moreover, there are some attacks in the EPID context regarding the revocation scheme, for example, revoking just one user.
To prevent these attacks, I think the following privacy measures can be used:
Forcing the servicer to publish all revocation lists in distributed storage for audit, similar to Certificate Transparency. Note that services cannot use a non-auditable, hidden revocation list for verification, because the signer needs the same list to sign in EPID, as mentioned above.
A browser can show a "revoked" status to users to prevent such tracking. The user can then notice that the service is trying to violate their privacy. Alternatively, the browser can block the service when the user is revoked, i.e., stop access to the service, similar to how a broken TLS certificate is handled. Fortunately, this benefits both users and services.
Additionally, I have not read it yet, but this paper seems to address these issues: https://eprint.iacr.org/2020/1498
However, these measures cannot block all attacks, as with timing-correlation attacks in Privacy Pass. Hence, we need to discuss the balance of pros and cons.
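The browser-side measure described above can be sketched as a simple policy decision. This is only an illustration of the user-visible behavior; the function and policy names are hypothetical, not any real browser API.

```python
# Sketch: when a service reports the user as revoked, the browser
# surfaces this (or blocks the site outright), so per-service
# revocation cannot be abused as a silent tracking bit.
from enum import Enum

class Action(Enum):
    PROCEED = "proceed normally"
    WARN = "warn user: service reports you as revoked"
    BLOCK = "block site, like a broken TLS certificate"

def on_verification(revoked: bool, policy: str = "warn") -> Action:
    """Decide what the browser does when a service claims revocation."""
    if not revoked:
        return Action.PROCEED
    return Action.BLOCK if policy == "block" else Action.WARN

print(on_verification(False))           # Action.PROCEED
print(on_verification(True, "block"))   # Action.BLOCK
```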
There are no pros and cons here; what you are proposing is a nightmare. This idea is fundamentally evil, and you are extinguishing all the freedom and joy there still is on the web.
You should know better: anyone talking about TPM-attestation-like mechanisms is up to no good, and any key not accessible to the user is an abuse of the user. You are concentrating control in entities that already have a bad track record on respecting privacy rights. This entire Community Group should be disbanded, every member shamed, and anyone in the W3C responsible for this group removed.
No amount of tweaks and changes to this type of scheme will make it acceptable; any client that implements the web specification must be a first-class citizen without the blessing of any gatekeeper. This is what allowed the internet to grow and become what it is now. An open internet made Linux a near-first-class desktop operating system: I can work, do banking, and use almost any service via a web browser.
Then the first villains struck: the W3C approved DRM into the web specification (EME), and now a fully functional open-source web browser is impossible; all our "fears" were proven correct within a few years. If you want more details about what I am talking about, read this; it does a good job of explaining: https://boingboing.net/2020/01/08/rip-open-web-platform.html
Now fast-forward to today, where you want to make the web like Android and iOS, where the moment users actually gain real freedom and start owning their own devices, they are no longer able to participate in the ecosystem. Mechanisms like SafetyNet and Play Integrity make running user- and privacy-respecting ROMs impossible if you want to be a first-class citizen. Not everyone has the option to run only FOSS apps; people have jobs and lives to live.
Now you're going to do the same to the web: anyone who dares to compile their own Linux kernel, use a fully open-source web browser, or run an operating system that is not backed by a large corporation will be screwed.
Explain to me how my librebooted x200t thinkpad running gentoo with zero binaries including drivers will be approved and attested? (It won’t be)
Have you not watched the reaction when news of what you monsters are working on (like the Web Environment Integrity API) spread outside of this bubble? Are you blind and deaf, or just evil?
First the tech giants embrace the internet and grow; then, once they get big, they want to make it impossible for competitors to do the same, so they start extending the web with garbage like this. Finally, the open internet is extinguished and controlled by a handful of tech giants and hardware manufacturers. If Google started today, its indexing spiders would have been called scraping and abuse.
While it may not be obvious, this is a manifestation of "embrace, extend, and extinguish." You are the enemy of freedom and open-source software. It does not matter that your poison is an open specification.
The road to hell is paved with good intentions
While discourse and debate over the proposals being discussed is valued, please note that folks participating in the community group are expected to abide by the W3C Code of Conduct and treat folks with respect and professionalism.
Sure, I'll keep it more civilized then; sorry for writing such a heated comment. I'm just really filled with disgust and disappointment at this idea.
I would still like to talk about, and gain an idea of, what the plan is for dealing with fully open computer systems. (I think a librebooted PC without any TPM functionality is a good example.)
Yeah, that's one of the questions that came up during the presentation to the CG: what the limited resource/source of scarcity would be.
I'm hopeful that the EPID technique can be used with other sources of scarcity.
I'm curious whether there's a general way Linux developer communities approach this sort of revocation/rate-limiting problem with bad actors (spammy accounts posting on community threads/making commits/bug trackers). If it's primarily through email/account scarcity, maybe there's somewhere to tie in an EPID revocation scheme as another source?
general way Linux developer communities approach this
Everyone has to deal with trolls, and there's always some combination of approaches depending on what's available to you. In the physical world or the net you'd ask someone to leave, then make it harder for them to get in- not by tracking all humans with a government number but by moving your event somewhere less accessible or interfering with the troll's motivations. Maybe you even just ignore them, and build tools to make it easy for new people to ignore trolls by default.
This is really about Trust, a social concept that humans maintain without interposing technologies.
The problem is that there are no non-artificial "sources of scarcity," and scarcity is by definition exclusionary. The web is (was) an open system without any scarcity, so anything that adds scarcity is antithetical to it.
The best method I can think of is some sort of proof-of-work scheme for rate limiting, where the limiting factor is the amount of compute available. But that still creates problems for older devices with weaker CPUs and for devices with limited sources of power. If you're going to tie this to emails, then this just starts looking like an absurdly problematic and silly way of implementing email-based accounts.

This leads me to another comment: I think the solution proposed is an order of magnitude worse than the "problem" it's solving. Any platform wishing to have a ban mechanism like this should just implement accounts and then limit account creation with its preferred method. If users find it creepy, then they should boycott the website. In practice, the web has functioned well without this type of limitation, and the current spam-prevention mechanisms are plenty good.
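The proof-of-work idea mentioned above can be sketched hashcash-style: the server hands out a challenge, and the client must find a nonce whose hash has a given number of leading zero bits before its request is accepted. This is a generic illustration, with the challenge string and difficulty chosen arbitrarily; it also shows the asymmetry noted above, since the client's cost grows with difficulty while the server verifies with one hash.

```python
# Hashcash-style proof-of-work rate limiting: difficulty d forces
# ~2**d hashes of client work per request; weaker CPUs pay more time.
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge: str, difficulty: int) -> int:
    """Client side: brute-force a nonce meeting the difficulty."""
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

def check(challenge: str, nonce: int, difficulty: int) -> bool:
    """Server side: a single hash verifies the work."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= difficulty

nonce = solve("post-request-42", difficulty=12)
print(check("post-request-42", nonce, difficulty=12))  # True
```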
I also have to ask: what is your solution to the "analog loophole" and click/phone farms? Input devices like touchscreens and USB HID devices can be simulated; what is stopping a bad actor from amassing cheap "trusted" hardware and simply making sure each device never exceeds the rate limit?
The way we deal with it is simple: we don't believe in schemes like this; we think the user should have absolute and total control over the hardware they use. I'll explain how an open-source forum that I use daily deals with the problem, using https://forum.dlang.org as the example.
It uses simple techniques that already exist, like temporary IP bans and keyword filters. If the system suspects a post is spam, it gets placed into a moderation queue. There is zero need (or want) to ban hardware. You do not even need an account to post; accounts, instead of being abused to collect more information on users, are purely there for the user's convenience! https://forum.dlang.org/help#accounts An email is also not required.
In general, the solution is to create high-quality moderation interfaces that allow moderators to do their job effectively and quickly.
(I'm sorry for not replying yet. I'll do so when I'm free.)
By the way, this proposal looks too large to me; it sometimes confuses us because there are so many discussion points. Therefore, it seems good to me to divide the proposal, such as:
In actuality, I am not particular about unique resources, so it seems reasonable to discuss the items in order of, e.g., simplicity and realizability.
This proposal achieves privacy-friendly web hardware revocation (i.e., a hardware ban). In particular, it makes a web servicer (i.e., web server) capable of blocking users who have previously abused it, without violating users' privacy.
Background
As is well known, malicious actions on the internet are increasing, and this is a big problem. One factor that makes prevention difficult is user anonymity: a servicer cannot block users who have abused it in the past, because the servicer cannot track the user.
The easiest way to solve this problem is to track the user, which means servicers require strong user-identification schemes like SMS or credit-card authentication (i.e., 3-D Secure). However, this causes privacy concerns.
Thus, we need a method that blocks users who have abused in the past, without tracking. In the mobile context, the DeviceCheck API on iOS satisfies these requirements; it provides a hardware revocation scheme that is conscious of users' privacy. However, I cannot find comparable Web APIs. In addition, the DeviceCheck API assumes a common trusted execution environment on devices, so many devices cannot support it.
Idea
This idea is for Web APIs to provide a hardware revocation method without violating user privacy.
Mainly, this idea consists of a cryptographic protocol and a hardware registration protocol. The cryptographic protocol achieves revocation without tracking risk, but it assumes that the user does not have multiple secret keys. Therefore, the hardware registration protocol limits the number of secret keys distributed to each user, to support this assumption.
The cryptographic protocol used in this idea is called an anonymous blocklisting protocol. The most popular anonymous blocklisting protocol is EPID (Enhanced Privacy ID). EPID is a signature scheme that ensures user anonymity while still allowing revocation. First, EPID realizes strong user privacy: there is one public key and multiple private keys, so the verifier cannot track users, because the same public key is used to verify all signatures. Second, EPID has strong revocability: the servicer (i.e., verifier) can revoke a user (i.e., signer) using the signatures the user produced for malicious actions. Note that the verifier does not need to track or identify users.
The hardware registration protocol limits the number of secret keys distributed to users. It assumes a GM (i.e., a third party for registration); the user attests their device ID to the GM and obtains an EPID secret key. Concretely, attestation schemes such as TPM EK attestation, Android ID Attestation, or iOS DeviceCheck are available.
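A minimal sketch of the registration idea above, assuming the GM can check a device attestation (TPM EK attestation, Android ID Attestation, etc.) before issuing a key. The `GroupManager` class and its method names are illustrative, not from any real API; the one-key-per-device rule is what backs the cryptographic protocol's single-key assumption.

```python
# Sketch: a GM that issues at most one member key per attested device.
import secrets

class GroupManager:
    def __init__(self):
        self._issued = {}  # attested device ID -> member secret key

    def register(self, device_id: str, attestation_ok: bool) -> str:
        """Issue (or re-issue) a member key after device attestation."""
        if not attestation_ok:
            raise PermissionError("device attestation failed")
        if device_id in self._issued:
            # Re-registration returns the same key rather than a fresh
            # one, so one physical device never holds multiple keys.
            return self._issued[device_id]
        key = secrets.token_hex(32)  # stand-in for an EPID member key
        self._issued[device_id] = key
        return key

gm = GroupManager()
k1 = gm.register("device-A", attestation_ok=True)
k2 = gm.register("device-A", attestation_ok=True)
print(k1 == k2)  # True: at most one key per device
```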
References
EPID:
TPM Attestation:
Android ID Attestation:
DeviceCheck API: