Closed hadleybeeman closed 7 years ago
I appreciate the TAG's consideration of this issue, but this opinion seems to be based on some significant misconceptions as to the EME architecture, both in principle and in practice. I've addressed some of these below and would welcome feedback as to how exactly they arose, so that we can make the specification clearer on these points.
Firstly, regarding the nature of the CDM. Whilst it may be "opaque" to users, it is not opaque to browser implementors. It is not expected that browsers support arbitrary CDMs, as was the case with plugins. Browsers may select specific CDM implementations with which they integrate and are expected to have detailed knowledge of those implementations (for example, "User agent implementers must obtain sufficient information from Key System implementers to enable them to properly assess the security implications of integrating with the Key System."). The EME specification places many constraints on the CDM, in particular with respect to privacy and identifiers, and requires consent for anything more privacy-invasive than cookies. The intention is certainly that it is not possible for compliant EME/CDM implementations to "be gathering more information about the user than is strictly required to fulfill the intended use case" any more than the rest of the browser could. Do you have any suggestions as to how we could make this clearer?
There is nothing in the specification that would require that "users are required to execute untrusted third-party code which may not have gone through a proper security/privacy audit process." Browsers are required to seek consent under much less severe circumstances than this, and it is fully expected that browsers could provide the ability to disable CDMs. None of the existing implementations involve "untrusted third party code": of the four major browsers, three have created their own CDM code - therefore not "third party" - and in the fourth (Firefox) there is both sandboxing and code signing to ensure the CDM is the single supported implementation they expect - and, per the requirement quoted in the first paragraph above, they are expected to know a lot about what that code is and its security/privacy properties. This is a far cry from the plugin situation.
Regarding your question as to the implications of changing the SHOULD into a MUST in the following: "If a user agent chooses to support a Key System implementation that cannot be sufficiently sandboxed or otherwise secured, the user agent SHOULD ensure that users are fully informed and/or give explicit consent before loading or invoking it". These implications are discussed in Issue #312, and the change would be a no-op: even with a MUST, the requirement involves a judgement, and SHOULD anyway means "MUST unless you have good reason to judge otherwise".
Regarding "deliberately unethical or malicious implementations of EME", this concern betrays a misunderstanding of the architectural change described above. Browsers are expected to be as responsible for the EME and CDM implementations as they are for any other browser code. (Or, similarly, OS vendors if the CDM is part of the OS.) If you are concerned about "deliberately unethical or malicious implementations of EME" then you are concerned about "deliberately unethical or malicious browsers or OSs" or, at a stretch, about one deliberately unethical party supplying a malicious CDM to a browser vendor that it dupes. But the only example of browser and CDM vendors being distinct is Mozilla using Google's CDM, and Google verifiably use the same thing with their own browser, so we are back to a concern about an unethical browser vendor. It is unclear on what basis CDM code is distinguished in this respect from the rest of the browser/OS, other than a misunderstanding of the change in the security profile of this solution compared to the previous plugin-based one.
That said, it is clearly necessary to make explicit that the common browser practice of celebrating and rewarding independent published security research applies to the CDM component as well as to the rest of the browser. There is no evidence to suggest this is not the case, but clearly it is being questioned. It would greatly facilitate that process if the basis for such suspicions - with respect to browsers, specifically - was better explained.
@mwatson2, thank you — again — for your thorough response. I just want to clarify that the TAG does not have consensus for/against EME. We don't mean to be attacking your work or this spec; we appreciate that you and the other editors and contributors have put significant time and expertise into them. We have been consistently focused on the security and privacy implications of the architecture of the web — not just in the context of EME — and it was through the lens of those concerns and our Assuring a Strong and Secure Web Platform resolution that we were looking at this.
We understand that EME is now up for decision, following the end of the PR period. Whatever the choice of the consortium/director, we want to reiterate that we look forward to working together in the future and to helping by weighing in on any specific technical issues.
@hadleybeeman Thanks. I'm sorry if my response was overly defensive.
I do think, however, that there were misconceptions in the TAG's feedback, and that is potentially a problem with the specification: if even the TAG is concerned about some of the things you mention - which we had thought we had addressed - then perhaps we need to do more to explain those things?
Thanks again for your messages, Mark. I just want to reiterate that the next steps for EME are in the hands of the Director, now that the PR deadline has passed. It's not that we don't want to engage with you; it's just that it genuinely doesn't seem useful/constructive to talk about these details right now.
We are aware this is a charged topic in the W3C and the broader community. If this conversation needs to continue, we'd prefer to pick it up again after the Director's decision. We hope this minimises the chance of causing a conflict now.
> It's not that we don't want to engage with you; it's just that it genuinely doesn't seem useful/constructive to talk about these details right now.
It is useful because at present the TAG comments appear not to understand or recognize the single most important advantage of EME and of developing EME at W3C. This aspect is important to public understanding of the proposal and so should be cleared up before publication.
Hey Mark,
Thanks again for thoughtfully engaging us here.
As you know, the TAG has been considering issues related to EME for several years. We discussed your reply at today’s F2F meeting in TOK and believe we have a firm grasp of the various parties in play in Browser/CDM distribution and use, both practical and theoretical.
It isn’t the TAG’s role to comment on the W3C as a venue for this work; we review architectural decisions and flag risks and conflicts, working collaboratively with spec designers and editors to understand them more deeply. We appreciate the EME group working with us on several of these issues in the recent past.
It doesn’t appear that there are further technical or architectural questions raised in this thread and so we beg your forgiveness for bowing out at this point. We’d like to continue to offer our services for technical and architectural review of issues that arise. Per usual, feel free to open an issue in our GitHub repo if we can assist.
Regards,
Peter, Co-chair W3C TAG
We the TAG have been discussing this document and its broader context in EME — especially since it seems to be in response to our 2015 call for the protection of security and privacy research on the web. Though we haven’t had time to get formal TAG consensus, we did want to share some of our thoughts with you.
As the work on Encrypted Media Extensions in the W3C progresses, the TAG remain concerned about some of the user privacy implications of the architecture. Specifically, we remain concerned with the imposition of an opaque piece of software, the CDM, as a required piece of the EME architecture: this piece of software may be gathering more information about the user than is strictly required to fulfill the intended use case, and the user is not able to limit or audit its activities. Two mitigations that have been proposed against this threat are effective sandboxing of the CDM, and ensuring that security and privacy researchers have a free hand (pursuant to industry best practices) to disclose privacy or security vulnerabilities in the CDM without fear of prosecution under legislation such as the DMCA.
Regarding sandboxing: as the CDM is generally a closed-source proprietary component, without an opt-out mechanism users are required to execute untrusted third-party code which may not have gone through a proper security/privacy audit process. To prevent users being exploited by CDM code that, intentionally or unintentionally, does anything beyond its intended functionality (in this case, content decryption), implementations are expected to sandbox the CDM's execution environment to ensure users' privacy and security.
This would also mitigate potential security exploits, such as arbitrary code execution (e.g. via buffer overflows), that target CDMs. We would be interested in the implications of turning this SHOULD in section 10 of the EME spec ("If a user agent chooses to support a Key System implementation that cannot be sufficiently sandboxed or otherwise secured, the user agent SHOULD ensure that users are fully informed and/or give explicit consent before loading or invoking it") into a MUST.
We also appreciate the initial work on the W3C Security Disclosure Best Practices and find that they do contribute to fostering a web ecosystem that benefits from continual testing. We are pleased to see progress against the situation outlined in our 2015 resolution, Assuring a Strong and Secure Web Platform.
However, we find that this document covers just the use case of inadvertent security vulnerabilities in a web technology. We note that some of the concerns raised during the development of EME centre on the possibility of deliberately unethical or malicious implementations of EME, for example an implementation that might use EME APIs to exfiltrate sensitive data from a user's operating system. These Best Practices are unlikely to help a security researcher in such a situation. They appear to cover vulnerabilities that both researchers and the specific implementor would agree need attention; our additional concern is for potential exploits that do not meet this description.
We want to make clear that, while this effort is a useful contribution to the problem we outlined in our resolution, it is not sufficient to adequately protect security researchers who are helping us build a stronger web. We encourage continued development of these best practices, and want to further encourage W3C policy to continue to find new ways to assure that broad testing and security audit can grow at a scale in line with the development of the web.
Sincerely, The W3C TAG