Open kdenhartog opened 2 months ago
Great - finally some important questions that need answers. Warning: you won't like the answer, which is that it all depends on user choice. The user MUST be in control, and because user choices are cached, the location of the trust boundaries is dynamic and needs to be part of the chooser, which must be at least partially device based.

Here are the players (I will edit this with a solution on GitHub later):

- The verifier sets the boundaries and gets the access requirements from the resource owner.
- The chooser gets a request from the verifier with the purpose and requirements.
- The chooser determines whether the requirements can be met and whether the user must be involved in the choice.
- The device and/or chooser may need to perform some proof-of-presence determination to meet the requirements.
- The browser may be required for UX and for access to VITAL Password Managers.
- The Password Managers will supply information that affects what choice needs to be presented to the user.
- If a credential is chosen that is accessed by a wallet, the chooser gets the response from the wallet(s).
- The wallet may need to access a TEE, which might have UX of its own that cannot be anticipated by the chooser.
- The chooser accumulates responses and sends them to the verifier.
- The verifier sees the credentials presented and the issuer(s) and determines whether they meet the requirements.

Hopefully in the 99% case the user is not asked to acquire more credentials, but it will happen in the 1% case (initially more often). It is only the acquisition of more credentials that will require an internet connection; otherwise the action is local to the device.
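To make that flow concrete, here is a rough TypeScript sketch of the chooser's role as described above. All names (`VerifierRequest`, `WalletResponse`, `runChooser`, etc.) are illustrative assumptions, not part of any specified API.

```ts
// Hypothetical sketch of the verifier -> chooser -> wallet flow described above.
// Names are illustrative assumptions, not part of any specified API.

interface VerifierRequest {
  purpose: string;        // why the verifier wants a credential
  requirements: string[]; // e.g. credential types or trust-framework identifiers
}

interface WalletResponse {
  presentation: unknown;  // opaque credential presentation returned by a wallet
  issuer: string;
}

// A wallet is modeled as something the chooser can query; it may show its own
// UX (including TEE-backed UX) before answering.
type Wallet = (req: VerifierRequest) => Promise<WalletResponse | null>;

// The chooser: receives the verifier's request, queries wallets, involves the
// user where a choice is needed, and accumulates responses for the verifier.
async function runChooser(
  request: VerifierRequest,
  wallets: Wallet[],
  askUser: (req: VerifierRequest, options: WalletResponse[]) => Promise<WalletResponse[]>,
): Promise<WalletResponse[]> {
  const candidates: WalletResponse[] = [];
  for (const wallet of wallets) {
    const response = await wallet(request);
    if (response !== null) candidates.push(response);
  }
  // The user stays in control: cached choices could short-circuit this step,
  // but by default the user decides which responses go back to the verifier.
  return askUser(request, candidates);
}
```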
I think this is the only practical model:
User trusts wallet and browser equally to act in its interests
And, therefore, it's a violation of trust for a wallet or a browser to not act in the user's interests. Users should stop using wallets or browsers that violate their trust.
I would add one extension to @dlongley's model - if the user chooses to use a TEE for hiding keys, then that chosen hardware extension indicates that there is a limit to how far the user trusts the browser and the wallet.
Another extension is for the user of the device to provide biometric proofing - this level of trust can be requested of the device by the verifier. Now the device (and/or the wallet) is acting on behalf of the verifier and provides proof to the verifier.
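For the biometric-proofing case, an existing pattern that matches "the device acting on behalf of the verifier" is WebAuthn's user-verification requirement. A minimal sketch, assuming the verifier supplies the challenge and validates the returned assertion (this is an analogue, not the Digital Credentials API itself):

```ts
// Minimal WebAuthn-based sketch: the verifier (relying party) asks the device
// to perform user verification (e.g. a biometric check) and receives a signed
// assertion as proof. Shown only as an analogue to the idea above.
async function requestBiometricProof(challenge: Uint8Array): Promise<Credential | null> {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge,                    // supplied by the verifier
      userVerification: "required", // forces a PIN/biometric check on the authenticator
      timeout: 60_000,
    },
  });
  // The assertion is sent back to the verifier, which validates the signature
  // and checks that the user-verification (UV) flag is set.
  return assertion;
}
```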
> I think this is the only practical model:
>
> User trusts wallet and browser equally to act in its interests
>
> And, therefore, it's a violation of trust for a wallet or a browser to not act in the user's interests. Users should stop using wallets or browsers that violate their trust.
Ideally yes; however, defense in depth suggests that we should try to maintain some properties in spite of the misbehavior of a browser or wallet.
> Great - finally some important questions that need answers. Warning: you won't like the answer, which is that it all depends on user choice. The user MUST be in control, and because user choices are cached, the location of the trust boundaries is dynamic and needs to be part of the chooser, which must be at least partially device based.
This is an interesting direction to look in. As the browser, we do put trust in the device (or OS) to work properly. We don't expose every feature of the device to websites, though, and each one is considered. But this isn't an alien model for browsers.
I think something that's leading to a common point of disagreement here is how the user is represented by these two pieces of software. Under the traditional definition of "user agent", a browser acts on behalf of the user in a way that couples trust. We've seen in recent times, though, that this may not always be the case.
For example, I've seen malicious builds of browsers that attempt to steal user data (such as credit cards and cryptocurrency seed phrases) under the guise of being a well-known browser when they're actually malicious copies. There are other instances where the browser may be gathering data that the user doesn't expect, which is a less direct example but still seems pertinent to the privacy model here. We will likely face similar issues with wallets, even though they too are meant to represent the user's interests as a "user agent".
So this brings into question how we should establish the internal trust boundaries between the different components that make up the role of the "holder". Here are a few different ways I could see it being represented:
1. User trusts wallet and browser equally to act in its interests
2. User trusts browser, but not wallet, to act in its interests
3. User trusts wallet, but not browser, to act in its interests
4. User does not trust browser or wallet to act in its interests
There's also a fifth option that presents some weird edge cases:

5. User trusts wallet and browser equally to act in its interests in isolation, but wallet and browser don't trust each other to work together
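One way to keep these five configurations distinct while we argue about where the boundary sits is simply to name them, so later proposals can state which one they assume. A throwaway TypeScript sketch (the names are mine, not proposed spec text):

```ts
// Illustrative only: naming the five trust configurations so that later
// arguments can say which one they assume. Not proposed API surface.
type TrustConfiguration =
  | "user-trusts-both"            // 1: wallet and browser equally trusted
  | "user-trusts-browser-only"    // 2
  | "user-trusts-wallet-only"     // 3
  | "user-trusts-neither"         // 4
  | "both-trusted-but-mutually-distrusting"; // 5: each trusted in isolation,
                                             //    but they don't trust each other

// Example: a protocol design can then state its assumption up front.
const assumedModel: TrustConfiguration = "user-trusts-both";
```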
Given that each of these scenarios can lead to different trust boundaries, I think it would be useful to figure out whether we have consensus on this, or whether there are unstated assumptions here that we need to work through first, before resolving some of the other issues like https://github.com/WICG/digital-credentials/issues/161.