(Copying response from the TAG issuer)
The crypto in this scheme is resilient against a bad actor on either side: it prevents the client from forging tokens and prevents the issuer from de-anonymising users. The issuer would only be able to subdivide its users based on the presence or absence of a token (and, in the private metadata case, the value of that single bit of information).
There are some issues that can arise if a large number of issuers are acting maliciously, each using the one bit of information it holds to partition its user base along different, non-trust-related lines. An allow/block list supported by the UA would help mitigate this.
Thanks for the quick reply, @dvorak42!
How would the user detect that the issuer is not suitably anonymising the tokens they are issuing? Can you explain?
The issuer only ever sees the blinded token and the signed blinded token. The unblinded token and unblinded signature (which the client then computes) can't be derived from that information, so when the client later presents the signed unblinded token to the issuer, the issuer can't correlate that redemption with the original issuance.
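To make the unlinkability concrete, here is a toy sketch of the blind-evaluation flow. It uses a tiny multiplicative group and a stand-in hash-to-group map, purely as illustrative assumptions rather than the VOPRF draft's actual elliptic-curve ciphersuites.

```python
# Toy sketch (NOT the real ciphersuite): the issuer only sees the blinded
# element M = H(t)^r, which is uniformly random in the group, so it cannot
# link it to the unblinded pair (t, Z = H(t)^k) presented at redemption.
import hashlib
import secrets

p = 2039   # hypothetical tiny safe prime: p = 2q + 1
q = 1019   # prime order of the subgroup we work in
g = 4      # 4 = 2^2 generates the order-q subgroup

def hash_to_group(token: bytes) -> int:
    # Toy stand-in for a proper hash-to-group map.
    e = int.from_bytes(hashlib.sha256(token).digest(), "big") % (q - 1) + 1
    return pow(g, e, p)

# Issuer key
k = secrets.randbelow(q - 1) + 1

# --- Issuance ---
t = secrets.token_bytes(16)           # client's secret token value
T = hash_to_group(t)
r = secrets.randbelow(q - 1) + 1      # blinding factor, known only to the client
M = pow(T, r, p)                      # blinded element sent to the issuer
Z_blinded = pow(M, k, p)              # issuer signs without learning T or t
Z = pow(Z_blinded, pow(r, -1, q), p)  # client unblinds: Z = T^k

# --- Redemption ---
# Client reveals (t, Z); issuer recomputes H(t)^k and checks it matches.
assert Z == pow(hash_to_group(t), k, p)
```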
Without that correlation, there are two remaining issues that could still undermine anonymisation:
1) The issuer could sign the blinded token with a different, user-specific key and later use that to de-anonymize the token. This is solved by a zero-knowledge proof showing that the signature sent by the issuer was produced by a particular key; the client verifies that this key is the one the issuer has publicly committed to. The VOPRF draft underlying Privacy Pass goes into further detail on how this proof is generated and on its security properties (a sketch follows after this list).
2) Potential client fingerprinting via the issuance and redemption requests. To partially mitigate this, redemption-time requests are made without credentials or other state. There is still the potential for network-level fingerprinting; while that is outside the threat model of this API, we can hopefully mitigate it via other projects.
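On point 1: the proof is essentially a discrete-log-equality (Chaum-Pedersen-style) proof that the same exponent links the issuer's published key commitment and the signature on the blinded element. A toy sketch, again over a tiny illustrative group rather than the draft's actual ciphersuites and encodings:

```python
# Toy DLEQ proof sketch: the issuer shows log_g(K) == log_M(Z) without
# revealing k, so the client knows the committed key K (not a per-user key)
# was used to sign its blinded element M. Group and hash are illustrative.
import hashlib
import secrets

p, q, g = 2039, 1019, 4  # hypothetical tiny group: p = 2q + 1, g of order q

def challenge(*elements: int) -> int:
    h = hashlib.sha256(b"|".join(str(e).encode() for e in elements)).digest()
    return int.from_bytes(h, "big") % q

# Issuer key and public commitment
k = secrets.randbelow(q - 1) + 1
K = pow(g, k, p)  # published key commitment the client already knows

# Issuance input/output as seen by the client
M = pow(g, secrets.randbelow(q - 1) + 1, p)  # the blinded token element
Z = pow(M, k, p)                             # issuer's signature on it

# Prover (issuer): show log_g(K) == log_M(Z) without revealing k
w = secrets.randbelow(q - 1) + 1
A, B = pow(g, w, p), pow(M, w, p)
c = challenge(g, K, M, Z, A, B)
s = (w - c * k) % q

# Verifier (client): accept only if both relations hold for the committed K
assert A == (pow(g, s, p) * pow(K, c, p)) % p
assert B == (pow(M, s, p) * pow(Z, c, p)) % p
```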
Hello! @hober and I discussed this at our face to face in Cupertino. We are also tracking this in our issue.
Key question: What happens if the issuer is a bad actor?
This design only works in the way you've intended if the issuer is properly anonymising and randomising the tokens. What happens if the issuer isn't a trustworthy organisation?
And since the user has no role in selecting the issuer, the user then gets no say in who that might be. If we end up with an ecosystem of dodgy issuers, can the user protect themselves?
It seems like this could be mitigated by an approach like the one in Web Payments, where the browser keeps a set of payment methods the user is happy with and the shopping site has a list of payment methods it supports. At purchase time the site supplies its options and the browser picks from among them. This is a nice quality: the user agent has a role in choosing, which is precisely the role of the user agent.
We recognise that users currently can't express preferences about advertisers at all. Could a similar approach work here?
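A minimal sketch of that Web Payments-style idea, assuming a hypothetical shape for the data (none of these names are part of any proposed API): the UA keeps the user's approved issuers, the site declares the issuers it supports, and the UA only ever contacts issuers in both sets.

```python
# Hypothetical sketch of UA-mediated issuer selection, in the spirit of the
# Web Payments matching step described above.
def eligible_issuers(site_supported: list[str], user_approved: set[str]) -> list[str]:
    """Return the issuers the UA is willing to use, in the site's preference order."""
    return [issuer for issuer in site_supported if issuer in user_approved]

# Example: the user has never approved issuer-c.example, so it is never contacted.
site = ["https://issuer-a.example", "https://issuer-c.example"]
user = {"https://issuer-a.example", "https://issuer-b.example"}
assert eligible_issuers(site, user) == ["https://issuer-a.example"]
```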