
Resource Timing
https://w3c.github.io/resource-timing/

Add fields for identifying the security protocol, cipher, signature and so on #74

Open phistuck opened 7 years ago

phistuck commented 7 years ago

Add those fields to PerformanceResourceTiming:

Protocol - SSL 1.0, SSL 2.0, SSL 3.0, TLS 1.1, TLS 1.2, TLS 1.3...
Cipher - AES_128_GCM...
Key exchange - ECDHE_RSA...
Signature algorithm - SHA256RSA...
Signature hash - SHA256...
Thumbprint algorithm - SHA1...
Public key - RSA...
Public key bits - 2048...
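[Editor's note: none of these fields exist in the Resource Timing spec; a hypothetical sketch of what the proposal might look like to a consuming page, with illustrative names and types only, could be:]

```typescript
// Hypothetical extension of PerformanceResourceTiming. None of these
// fields are in any spec; names, types, and values are illustrative only.
interface SecurityDetails {
  protocol: string;           // e.g. "TLS 1.2"
  cipher: string;             // e.g. "AES_128_GCM"
  keyExchange: string;        // e.g. "ECDHE_RSA"
  signatureAlgorithm: string; // e.g. "SHA256RSA"
  publicKeyAlgorithm: string; // e.g. "RSA"
  publicKeyBits: number;      // e.g. 2048
}

// How an analytics script might summarize such data if it existed.
function describeConnection(d: SecurityDetails): string {
  return `${d.protocol} / ${d.keyExchange} / ${d.cipher} ` +
    `(${d.publicKeyBits}-bit ${d.publicKeyAlgorithm})`;
}
```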

igrigorik commented 7 years ago

Hmm... I'd love to hear from the security folks on whether this is something we'd be willing to expose, and the merits of doing so.

/cc @annevk @mikewest @ericlaw1979 @sleevi

phistuck commented 7 years ago

At the least, it is good for statistics and for potentially phasing out security protocols (and the rest of the details) for a given website. Analytics services can provide those details to make an informed decision faster (server changes at this level are usually more cumbersome to make).

mikewest commented 7 years ago

I know @sleevi has opinions about exposing certificates to extension APIs, as certificates often include personal information (consider workplace MitM devices, or locally-installed antivirus software: the certificates produced by each of these will often contain things like email addresses or license data).

phistuck commented 7 years ago

Well, those fields only expose technical details about the connection and encryption, so nothing identifying.

sleevi commented 7 years ago

I think our experience with internal consumers in Chrome makes this an even less desirable thing to expose to the platform, but David Benjamin can speak more to that. Without wanting to appear overly negative, it would be useful if @phistuck could flesh out more of what they'd want to see, as I can see several issues with it as presented, but I don't want to be poking at strawmen.

But on its face, I don't think it's something we're particularly keen to expose, and it would be great to have an explainer attached to the proposal as well.

bmaurer commented 7 years ago

As noted in #75, using Flash today it is possible to read the certificate of any domain for which you can create a socket policy. There's a lot of value in being able to debug this (e.g. to find broken MITM proxies).

igrigorik commented 7 years ago

@bmaurer fwiw, I think we're all aggressively converging on deprecating Flash.

sleevi commented 7 years ago

@bmaurer And on that path to deprecating Flash, there is a goal to block that API before then. It just hasn't had resources attached to do it yet. But yes, the fact that Flash represents a significant privacy/security leak doesn't mean we should add more privacy/security leaks :)

bmaurer commented 7 years ago

Absolutely understand the risk here. I would point out that

1) The longevity of this "hole" suggests that, as of yet, there have not been negative privacy implications.
2) There is a track record of using this mechanism to detect users who are subject to privacy-invading and bug-inducing MITM proxies.

sleevi commented 7 years ago

@bmaurer It's a logical fallacy to suggest that it's not had negative privacy implications because it's not been fixed. It has significant negative privacy implications, but also significant compatibility issues. However, as @igrigorik points out, that compatibility issue is being fixed - by deprecating support.

Regarding the second point, while yes, it's true you can do things that sites would like to believe are "good," you could also argue that allowing sites to run arbitrary executables would help them improve users' security by installing antivirus for users. It, too, is a flawed argument, because there are significant and real downsides to users' security and privacy in exposing these details.

sleevi commented 7 years ago

@bmaurer It's also worth noting that knowing certificates is different from what @phistuck proposed - which is about knowing ciphersuite data. The server already has access to the latter and can provide it back to the page, so I think an explainer is really the best way to proceed with that exploration. With respect to certificate data, however, that's really a non-starter on privacy grounds.

bmaurer commented 7 years ago

Is there something we could expose to at least give a definitive read on whether there is a MITM proxy (or at least one that isn't actively malicious)? The ciphersuite wouldn't do this because it could be the same as the origin's.

Basically, this would be something unique to the SSL session that the server could echo back to the client, allowing the client to verify that it had the same session identifier. E.g., a very naive identifier could be something like the client's local TCP port.

This would at least allow sites to debug whether a problem is being induced by a MITM proxy, and generally to monitor the level of MITM interception, without the risk of exposing the identity of the user.
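[Editor's note: the check proposed above reduces to comparing a session-unique value seen by each end. A minimal sketch, assuming hypothetical inputs - no browser API actually exposes such identifiers:]

```typescript
// Sketch of the proposed MITM heuristic: the server echoes back a value
// unique to the TLS session it terminated (e.g. the client's source port,
// in the naive version above), and the client compares it with what it
// observed locally. A mismatch suggests the session the server terminated
// is not the one the client opened, i.e. an interposed proxy.
// Both identifiers are hypothetical inputs for illustration.
function sessionLooksIntercepted(
  clientSideId: string,
  serverEchoedId: string
): boolean {
  return clientSideId !== serverEchoedId;
}
```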

sleevi commented 7 years ago

@bmaurer Anything based on TLS EKM can do this (e.g. the IETF's Token Binding draft) - TLS-unique channel bindings as well.

phistuck commented 7 years ago

@sleevi (in reply to https://github.com/w3c/resource-timing/issues/74#issuecomment-260548565) - I meant "ciphersuite data" of the used certificate, if that were not clear. I think the server knows about that, too.

phistuck commented 7 years ago

I am not sure what the privacy concerns are here, especially if the server knows all of those already. Nothing I mentioned includes any identifying name or identifying detail - those are just technical numbers and ciphers.

sleevi commented 7 years ago

@phistuck Right, you specifically noted negotiated connection info, and I was suggesting you provide an explicit explainer about why the browser should provide that data, given that it is already available to the server (and thus able to be echoed back in any reply from the server). Then @bmaurer raised the certificate issue, which @igrigorik already tried to head off, because that is very much privacy sensitive to the end user and grants capabilities the server should NOT already have.

For you specifically, the question is: why? What new use cases are enabled only if the browser provides this API? What demonstration of widespread need for this API is there? What cowpaths have already been forged? Because it would certainly increase compatibility risks for TLS changes, which are fairly low right now, and impede the ability to maintain and experiment, so I think it would be useful if you could really flesh out the motivation, not just that it could be there or could be useful.

phistuck commented 7 years ago

@sleevi - sorry for the short answer, but... For statistics and analytics, just like nextHopProtocol.
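[Editor's note: nextHopProtocol is a real PerformanceResourceTiming attribute, so the kind of analytics being referenced already exists for protocols. A sketch of such aggregation - in a browser the input would come from performance.getEntriesByType("resource"); here the helper takes plain objects so it runs anywhere:]

```typescript
// Minimal shape of the one field this helper reads from a resource entry.
interface ResourceEntryLike {
  nextHopProtocol: string; // e.g. "h2", "http/1.1"
}

// Tally negotiated application protocols across resource entries, as an
// analytics script might, to track protocol adoption for a site.
function tallyProtocols(entries: ResourceEntryLike[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of entries) {
    counts.set(e.nextHopProtocol, (counts.get(e.nextHopProtocol) ?? 0) + 1);
  }
  return counts;
}
```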

sleevi commented 7 years ago

nextHopProtocol is already a privacy issue. It seems like this would amplify that.

phistuck commented 7 years ago

@sleevi - then why is it being added to Chrome?

sleevi commented 7 years ago

@phistuck Because @igrigorik? :) Again, I'm not sure what your use case is, as there are only two parties relevant to the discussion - the client and the server. The UA dictates client policy, so there is no need for this, and the server can gather statistics and analytics on the server side. Plenty of CDNs do this already, without need for this API, so it does not seem necessary. However, if you can put together an explainer, that would be super helpful in understanding if there is a use case missing. But I don't think we should do anything here, just as I believe nextHopProtocol is an unnecessary feature with privacy issues that I argued against during the intent to implement.
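[Editor's note: the point that the server already knows the negotiated parameters can be illustrated with Node.js, where `tls.TLSSocket.getCipher()` returns an object with `name`, `standardName`, and `version` fields the server could echo to the page. The header name below is made up; the shape is reproduced inline so the helper needs no socket:]

```typescript
// Shape returned by Node's tls.TLSSocket.getCipher(), reproduced here so
// the helper is self-contained and testable without opening a connection.
interface CipherInfo {
  name: string;         // OpenSSL name, e.g. "ECDHE-RSA-AES128-GCM-SHA256"
  standardName: string; // IETF name, e.g. "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
  version: string;      // e.g. "TLSv1.2"
}

// Build a value a server could send in a hypothetical response header
// (say, "X-Negotiated-Cipher") for the page's analytics to read - no
// browser-side API needed.
function cipherHeaderValue(c: CipherInfo): string {
  return `${c.version}; ${c.standardName}`;
}
```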

phistuck commented 7 years ago

@sleevi - was it a private argument? The thread does not seem to have any post by you. https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/Hs5-wPpCPZc/NOkvTgqQAQAJ

sleevi commented 7 years ago

@phistuck Looks like it. I'll see about digging up more of the details, but in the meantime, https://github.com/WICG/netinfo/issues/26 is probably a relevant discussion.

In short, I'm very uncomfortable with the idea of exposing information about on-path intermediaries, on the basis that the site should not and does not need to know those details. For example, a site should not be able to determine if a user is behind a proxy - as we know some sites, notably media streaming sites, abuse that knowledge to differentiate content for different users on different networks.