Closed: balfanz closed this issue 6 years ago
We should stop calling this a privacy CA. It is an attestation proxy, not a privacy CA. Yes, it uses a certificate to sign, but it returns a basic attestation signed by the proxy.
We also need a way to turn off this functionality so that the actual attestation can be delivered for some use cases. I understand that for the general social web this is probably fine, but not for all enterprise use cases.
@ve7jtb said:
We should stop calling this a privacy CA. It is a attestation proxy...
Actually, the TCG now terms it an "attestation CA" (see issue #610), though one can argue that the latter is a TCG-specific name. IIUC, what @balfanz et al are imagining would not actually implement the TCG-specified attestation CA protocol, though it would be similar. Perhaps "attestation proxy" is a good name to avoid confusion with the legacy (TCG-specific) privacy CA name.
I like the term "attestation proxy".
Correct me if I'm wrong here, but I question the premise - isn't the whole point of the spec that the RP trusts the authenticator but not the client? If the RP doesn't care about the attestation signature, there's no guarantee that an authenticator is even involved in the first place. Web Crypto already provides APIs for generating asymmetric keys, so why would an RP use WebAuthn if they don't care whether the credential is actually backed by a trusted authenticator?
@emlun Because WebAuthn, even without attestation, eases the implementation and deployment burden on RPs as compared to concocting their own crypto authentication protocol using Web Crypto.
If the platform generates the attestation instead of the authenticator, does this mean that the platform has its own authenticator, or does it just generate the attestation signature on behalf of the authenticator?
@Kieun Strictly speaking there's no difference between the two. It would mean the platform is the authenticator, and that the RP and user therefore trust the platform to not impersonate and/or track the user.
I think Google is proposing a cloud-based service that the platform would send the authenticator-generated attestation to, getting back a new attestation signed by the cloud service that would then be given to the RP. Effectively, this makes multiple authenticators look like a single logical one from an attestation point of view.
The downside of this is that the service must be highly reliable and contactable by the platform in order to create credentials.
The flag Google is proposing would let RPs opt out of getting an attestation (or get a self-signed one from the platform) so that they would have a more reliable user experience.
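As a rough illustration of the proxy flow described above (a sketch only; every name here is hypothetical and nothing below comes from the spec):

```javascript
// Hypothetical sketch of the cloud attestation-proxy flow. The authenticator
// produces a batch-keyed attestation over the new credential; the proxy
// re-signs it, collapsing many authenticator models into one broad class.

function authenticatorAttest(credentialPublicKey) {
  return {
    format: 'packed',
    attestedCredential: credentialPublicKey,
    signedBy: 'vendor-batch-key', // identifies make/model (batch) to anyone who sees it
  };
}

function proxyBlind(vendorAttestation) {
  // The proxy verifies the vendor attestation before blinding it.
  if (vendorAttestation.signedBy !== 'vendor-batch-key') {
    throw new Error('unknown attestation key');
  }
  return {
    format: 'packed',
    attestedCredential: vendorAttestation.attestedCredential,
    signedBy: 'proxy-key',    // the RP now only sees the proxy's signature
    class: 'hardware-backed', // a broad class instead of the exact model
  };
}

const rpView = proxyBlind(authenticatorAttest('pubkey-bytes'));
// The RP learns a broad class and the proxy's key, not the vendor batch key.
```

The point of the sketch is the trust shift John and @emlun discuss: the vendor batch key never reaches the RP, so the RP must trust the proxy's classification.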
If we can't talk them out of an attestation proxy, then a flag for no-attestation/blinded-attestation/authenticator-attestation is probably required, as well as the appropriate user dialogs, so that the user could opt into sending the authenticator attestation to their bank or another party if required. Otherwise we will lock people out of some sites entirely with specific browsers.
It is a big change at this stage of the process.
I should also note that the privacy uplift with platform authenticators is probably drowned out by signals from browser fingerprinting. An authenticator vendor would have to do a really bad job to have a material impact. I think the original concern was the attestation in combination with the counter, but I think we can deal with the counter going forward to make it non-correlatable.
The primary reason for the attestation proxy is to keep social web sites from rejecting some types of authenticators based on bad judgement. The proxy would group authenticators into broad classes and blind the RP to the specifics.
It seems to me that this is a policy / adoption / marketing issue that has become a technical one. I don't know if this is the group that should be making that particular decision?
John B.
Yes, in that case the trust relationship shifts to the RP and user trusting the cloud service instead of the client. I see no issues with that, other than it being a centralized point of failure.
To be clear: I think the concern about unjust discrimination against particular authenticator vendors and/or models is valid, but I'm also concerned about whether the RP and user should have to trust the client (by trusting attestation done by the client, or ignoring attestation altogether).
Just a couple of comments:
Keeping in mind that these are not black-and-white issues, the RP does have to trust the client. If the client is malicious, it can exfiltrate the user's data, operate on the user's behalf, etc.
Using a Privacy CA (aka "attestation proxy") is not a change to webauthn. The spec already calls that out as a possible model. I think that what's happening here is that nobody has seriously looked into actually deploying one. Now that we are, we realize that we need to smooth out some rough edges around that attestation type.
To be fair, what has previously been called a privacy CA strictly deals with non-correlation, not attribute blinding.
I consider what is being discussed as more of a blinding proxy with one of the things being blinded being the public key of the original attestation signer.
As you know, I have previously made similar arguments around not providing RPs too much information in OpenID Connect assertions, because if the RP starts hard-coding information about authenticators, things break when a new authenticator is introduced. The IdP is in the best position to make the decision and report an LOA (AAL) to the RP based on a trust framework.
This is similar, but different. In this case the browser vendor is taking away the visibility from the IdP itself and saying "we know better". I agree that for many social sites that is probably true; however, for some RPs, knowing that the key was generated in a specific HW device may be required.
The platform trying to provide the right level of abstraction for all use-cases seems unlikely to work.
I have the feeling that the blinding issue is bigger than the privacy one, so there is probably no on-device fix that would work for you even if it did solve the potential correlation issue. (I will take this more seriously after Chrome and other browsers deal with browser fingerprinting.)
My concern is that if Chrome/Android start using a blinding proxy for attestation, it will impact our ability to deal with use cases outside of the social web. In that case it would probably be better to have an RP flag to suppress attestations entirely, or to say that the original attestation is required and have the browser incorporate the appropriate dialog for permission.
If Web Authentication is perceived to work differently across browsers as far as attestations go, we are going to be in a bad place for adoption outside of the social web.
John B.
While it would be more work, did Google consider using a real Privacy CA model?
The authenticator would need to produce an attestation signed by EK, plus a CSR for EK signed by its fixed key, for the platform. The platform would then send the CSR portion to the Privacy CA and get back a certificate for EK (hence the name CA), and the platform would pass along the authenticator-generated attestation and the Privacy CA-generated cert. That is sort of defined now, but just for TPM; it could be generalized as a packed Privacy CA attestation that the authenticator would generate for the platform.
It is a lot of work but might be a better model if Privacy were the only concern.
The first question would be how the authenticator would know which one to produce, if we were to consider something like that. The nice thing is that you could safely have much smaller batch sizes on K, and the Privacy CA learns nothing useful, but could block compromised attestation keys.
Just asking the question.
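To contrast the two models, here is a minimal sketch of the "real Privacy CA" flow John outlines above; all names (privacyCA, fixed-batch-key, etc.) are assumptions for illustration, not anything defined in the spec:

```javascript
// Illustrative sketch of the Privacy CA model. The key difference from the
// blinding proxy: the authenticator's own attestation signature still
// reaches the RP; the Privacy CA only issues a certificate for the
// attestation key EK and never sees the credential itself.

function authenticatorOutput() {
  return {
    attestation: { payload: 'new-credential', signedBy: 'EK' }, // signed with EK
    csr: { subjectKey: 'EK', signedBy: 'fixed-batch-key' },     // CSR for EK
  };
}

// The Privacy CA sees only the CSR: it can block compromised batch keys,
// but learns nothing about the credential or the RP.
function privacyCA(csr) {
  if (csr.signedBy !== 'fixed-batch-key') throw new Error('untrusted batch key');
  return { certifiedKey: csr.subjectKey, issuer: 'privacy-ca' };
}

// The platform forwards the authenticator attestation plus the CA's cert.
const { attestation, csr } = authenticatorOutput();
const rpMessage = { attestation, cert: privacyCA(csr) };
```

Under this model the RP still verifies a signature made on the authenticator, which is why it only helps with correlation, not with blinding the authenticator's identity class.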
While it would be more work, did Google consider using a real Privacy CA model?
While I'm not sure that I followed your definition of privacy CA, one of the motivations here is to have something that works with U2F devices. We believe that the "attestation proxy" model does so, but your definition appears to require additional behaviour from the token.
I'm also not sure what advantages a real Privacy CA has in the case that the token can be crafted however necessary.
Ok, I agree that there are valid use cases for RPs to not care about attestation.
@balfanz Yes, you're right. I hadn't considered that the client, even without direct access to the private key, can make the authenticator generate user-verified assertions for something completely different than the user intended (assuming the authenticator has no display of its own). I concede that my argument is invalid. :)
[ Note, fyi: we are concurrently using several names for essentially the same thing: Privacy CA, Attestation CA (see issue #610), Attestation Proxy, Blinding Proxy. I'm going to use Attestation Proxy here. ]
@balfanz originally wrote in https://github.com/w3c/webauthn/issues/628#issue-264709581:
When/if a client platform uses the Privacy CA model described in the spec [...] Some platforms/clients may opt into using Privacy CAs to address potential issues with poorly-implemented on-Authenticator attestation (e.g., small batch sizes [1]).
And further added in https://github.com/w3c/webauthn/issues/628#issuecomment-336191537:
Using a Privacy CA (aka "attestation proxy") is not a change to webauthn. The spec already calls that out as a possible model.
With regard to the above claims/observations:
Thus it seems that the claim above that imposing an attestation proxy "is not a change to webauthn" is quite debatable: i.e., our original design and intention was that authenticator attestation is a matter between the authenticator and the Webauthn Relying Party (RP), and if an authenticator chose to utilize the Attestation proxy approach, it would be up to RPs whether to honor that authenticator or not.
@balfanz continues in the original post (OP) by listing some "disadvantages" to RPs and users of the Attestation CA approach, and then stating:
The RP might not wish to be exposed to these drawbacks, and not want to use a Privacy CA. In that case, a platform that normally uses a Privacy CA could instead use "self attestation" (i.e., replace the Authenticator's attestation data with client-generated self-attested attestation data).
Note that the above-described platform-imposed "self-attestation" would "replace the Authenticator's attestation data with client-generated self-attested attestation data."
So, in summary, both options in this proposal "blind" RPs to authenticator-generated attestation. As @ve7jtb (John Bradley) notes above, this "effectively mak[es] multiple authenticators look like a single logical one from a[n] attestation point of view."
We have concerns with such an approach:
Over in PR #636, @agl argues that "if many RPs all come up with their own policies for which tokens to accept and which to reject, we risk fragmenting the user experience."
It sounds to me like there's risk of fragmenting the ecosystem and user experience with any and all of the options that are being presently proposed.
I think that if we make any changes to the status quo at this (late) time, they need to allow RPs to opt to receive so-called direct attestation (as coined by @agl over in PR https://github.com/w3c/webauthn/pull/636#issuecomment-337111038), and thus I oppose this proposal as written.
=JeffH
[1] it is alleged that testing of various available authenticators has revealed evidence of "small batch sizes", aka "small anonymity set sizes".
TL;DR: When/if a client platform uses the Privacy CA model described in the spec, it would be beneficial for the RP to specify if they want an attestation from the Privacy CA. We should introduce an (optional) parameter in credentials.create() for this.
Details:
Some platforms/clients may opt into using Privacy CAs to address potential issues with poorly-implemented on-Authenticator attestation (e.g., small batch sizes). This does not change the syntax of the attestation, and is transparent to the RP.
What it does mean, however, is the following:
The platform may prompt the user about information being sent to the Privacy CA (since that information potentially reveals aspects about the Authenticator that the user is using).
Obtaining information from the Privacy CA introduces latency.
The Privacy CA learns certain aspects about the login to the RP, such as timing, and Authenticator used.
The RP might not wish to be exposed to these drawbacks, and not want to use a Privacy CA. In that case, a platform that normally uses a Privacy CA could instead use "self attestation" (i.e., replace the Authenticator's attestation data with client-generated self-attested attestation data). It's faster, and has no privacy concerns.
We should give the RP an option to select between these two modes when calling credentials.create(). In one mode, the RP requests a "full" attestation, but acknowledges extra latency, potential user drop-off due to users not wanting to reveal information about their Authenticators to the Privacy CA and/or the RP, and potential leakage of some information to the Privacy CA. In the other mode, the RP signals to the client platform that they're not interested in attestation, and that the client platform should feel free to replace Authenticator attestations with a "self-attested" attestation data. (TODO: or should we just drop the attestation data in this case?)
Proposal:
Let's add an optional parameter, attestation, to credentials.create(), with possible values "optional" and "required" (default being "optional"). When the platform sees the "optional" value, it doesn't have to contact the Privacy CA, and can do whatever it sees fit to protect the user's privacy. When the value is set to "required", the client should ensure that the Authenticator's attestation is used to create a meaningful attestation for the RP, even if that means extra latency to contact the Privacy CA, obtaining extra consent from the user, etc.
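A minimal sketch of the platform-side decision this parameter would drive; the parameter name and values come from the proposal above, while the helper functions and attestation shapes are invented for illustration and are not spec text:

```javascript
// Hypothetical platform logic for the proposed "attestation" parameter.

function contactPrivacyCA(att) {
  // Stand-in for the round trip to the Privacy CA (latency, consent prompt).
  return { format: 'privacy-ca', attestedBy: 'privacy-ca-key', original: att.format };
}

function makeAttestation(opts, authenticatorAttestation) {
  const mode = opts.attestation || 'optional'; // proposed default
  if (mode === 'optional') {
    // Platform may skip the Privacy CA and substitute self-attestation:
    // faster, and reveals nothing about the authenticator.
    return { format: 'self', attestedBy: 'credential-key' };
  }
  // 'required': go through the Privacy CA so the RP gets a meaningful
  // attestation, at the cost of latency and possible user drop-off.
  return contactPrivacyCA(authenticatorAttestation);
}

// An RP opting in might call something like:
//   navigator.credentials.create({ publicKey: { ...params, attestation: 'required' } });
const selfAtt = makeAttestation({ attestation: 'optional' }, { format: 'packed' });
const fullAtt = makeAttestation({ attestation: 'required' }, { format: 'packed' });
```

This also makes the open TODO concrete: dropping the attestation entirely in the "optional" branch would simply mean returning nothing there instead of a self-attestation object.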