msporny opened 2 years ago
This issue was spawned from a discussion in #48.
In #48, @tplooker said:
> I still don't seem to see how an e2e flow would work in the case of, say, issuing a personal identity credential (e.g., DL or PRC). To me this flow involves three entities: the wallet, issuer, and End-User. Typically, unless you are assuming the wallet is handed some token of authorization that identifies who the End-User is and proves the wallet has the authority to request the credential from the issuer, then this has to be done within the protocol flow. I understand that designing authorization protocols is out of scope for this spec and that its relationship is optional, but how do they relate when they need to?
>
> Say you are using plain OAuth2 or OIDC to do user auth and issue an access_token: does the wallet include this somehow in a request to the interact API? Does that make the VC API a resource server of the IDP? What scopes need to be granted to get this access_token, or does it not matter?
>
> Maybe I'm missing the point; I'm just struggling to see how the protocol flow works e2e for that case?
> I still don't seem to see how an e2e flow would work in the case of say issuing a personal identity credential (e.g., DL or PRC).
Finally, a real opportunity to use Github's new mermaid-js feature!!! :P
I'm going to use a "Student ID" as the example to avoid any politically charged reactions from others that might engage in this thread. Here's a rough sketch of how you'd issue a Student ID to a Wallet:
```mermaid
sequenceDiagram
    participant W as Wallet
    participant I as Issuer
    autonumber
    W->>I: Request Student ID via POST .../exchanges/new-student-id/z123456 (VC-API)
    I->>W: DID Auth Request (VC-API + VPR)
    W->>I: DID Auth Response (VC-API + VP)
    I->>W: Present new Student ID (VC-API + VP)
```
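The four steps above can be simulated as a tiny issuer-side state machine. This is a hypothetical, heavily simplified sketch (plain Python, invented names, no real proof verification): the same exchange URL is POSTed twice, and the server's reply depends on how far the exchange has progressed.

```python
# Hypothetical, simplified issuer-side handler for the four-step exchange:
# POST 1 returns a DID Auth request (as a VPR); POST 2, carrying the
# DID Auth response (a VP), returns the credential presentation.
def handle_exchange_post(exchange, body):
    if exchange["state"] == "created":
        exchange["state"] = "awaiting-did-auth"
        return {"verifiablePresentationRequest": {"query": [{"type": "DIDAuthentication"}]}}
    if exchange["state"] == "awaiting-did-auth":
        vp = body.get("verifiablePresentation")
        if vp is None or not verify_did_auth(vp):
            return {"error": "DID Auth failed"}
        exchange["state"] = "complete"
        # Issue the Student ID bound to the DID proven by the DID Auth response.
        return {"verifiablePresentation": {"holder": vp["holder"], "type": "VerifiablePresentation"}}
    return {"error": "exchange already complete"}

def verify_did_auth(vp):
    # Stand-in for real cryptographic verification of the DID Auth proof.
    return "holder" in vp and "proof" in vp
```

The binding of the credential to the wallet's DID happens entirely inside steps 2-3; nothing before step 1 identifies the caller, which is exactly the property discussed next.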
Now, a careful reader would notice that anyone could intercept .../exchanges/new-student-id/z123456 (this is effectively a Swiss-number capability, that is, authz via knowledge), and use it to retrieve a specific Student ID and bind it to a DID that you have control over. So, a naive view would assert that at least two things matter for the security model here:
Attacks against item 1 are software proxies running on the student's wallet. If the student can run software that intercepts and proxies requests (and there is a healthy market for fraudulent IDs in most countries), all forms of software pre-registration and secure communication are defeated.
That leaves item 2 as the only real mitigation, and even that needs to be done on a regular basis (or at least, before a critical use of the ID). Even if a proxy is not being used, the device can be sold and/or re-purposed (new biometric authn registered/changed).
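For illustration, minting such a Swiss-number capability URL might look like the following sketch (hypothetical names; the only protection is the unguessability of the random identifier, which is why the delivery channel matters so much):

```python
import secrets

# Hypothetical in-memory store mapping exchange IDs to pending issuance state.
pending_exchanges = {}

def mint_exchange_url(base_url, workflow, subject_account):
    """Mint a capability URL whose only protection is unguessability."""
    # 16 random bytes -> ~22 URL-safe characters, i.e. 128 bits of entropy,
    # comparable to a bearer token.
    exchange_id = "z" + secrets.token_urlsafe(16)
    pending_exchanges[exchange_id] = {"subject": subject_account, "workflow": workflow}
    return f"{base_url}/exchanges/{workflow}/{exchange_id}"

example_url = mint_exchange_url("https://issuer.example", "new-student-id", "alice@example.edu")
```

Anyone who learns the URL before it is redeemed holds the capability, which is the interception concern raised above.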
Does the above resonate? Is there a different security model under consideration?
> Does the above resonate? Is there a different security model under consideration?
Yes it certainly helps to get to the next level of detail.
> The Issuer is able to get that capability-based URL to the Wallet over a secure channel OR they're able to pre-register a software client.
I'm not sure this is quite the right framing of the options. Apologies for just dropping a list of questions; I'll attempt to explain my perspective below:
In my opinion, what happens before this flow is that the wallet has to obtain this URL (capability) somehow. To me that is an authorisation flow: one where the wallet makes a request to get a particular credential, for a particular user (e.g., this URL). Obtaining this authorisation usually involves authentication and consent from the user (often the subject of the credential in personal identity use cases). Which entity the wallet goes to in order to obtain this URL (capability) also affects these things. I'm assuming, logically, that the entity you obtain this URL from is the same entity the wallet is interacting with in the above sequence diagram? Because the fact that the capability is in the form of a URL like this implies that there is some state being managed behind the scenes here, relating the ID featured in the URL to some ongoing transaction or session?
IMO the last paragraph above touches on some of the limitations involved in using URL-based capabilities rather than, say, cryptographically secured tokens: the former imposes more session-based state management on the issuer, and likely a coupling of the entity responsible for generating the capability to the entity where the capability is exercised (e.g., the "issuer" in the above diagram).
The questions I have:
I'll try short/quick answers to these and then follow them up with a more detailed diagram that attempts to summarize the answers.
> - What does the end-to-end flow look like, including how this URL (capability) was generated?
Summarized in the diagram below.
> - Who generated the URL? Is there a security domain boundary between them and the "Issuer", or is it the same entity?
The simplest case is that the Issuer generated the URL. A more complex case is that an entity that the Issuer can communicate with, in a SECURE and out-of-band fashion, generated the URL. Yet another, more complex case is that a completely different entity generated the URL and placed encrypted data in the URL (e.g., a CWT) that only a particular recipient can decrypt. For the purposes of simplicity, let's just focus on the first two, because we haven't found any use cases that need the last one.
> - Did the generation of this URL involve user auth? If not, how does the interaction ID in the URL become related to a specific user?
If by "user" you mean "the subject of the VC", then yes, it did. When the subject of the VC has been through the auth process, they're given the URL over a secure transport (e.g., same-device CHAPI). Yes, insecure transports (cross-device QR Code) /could/ be an issue -- and we'll get to that later.
> - How long is the URL valid for? Can you run the flow multiple times, OR is there some replay attack prevention mechanism?
The presumption is the URL is valid until they pick up their VC OR a certain amount of time has elapsed (15 minutes). So, yes, replay attack prevention is expected to be implemented at the Issuer for "high value" credentials.
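That policy -- single use, or expiry after 15 minutes, whichever comes first -- can be sketched as follows (the class and field names are invented for illustration):

```python
import time

EXCHANGE_TTL_SECONDS = 15 * 60  # "a certain amount of time has elapsed (15 minutes)"

class ExchangeStore:
    """Hypothetical single-use, time-limited exchange records."""

    def __init__(self):
        self._records = {}

    def create(self, exchange_id, now=None):
        now = time.time() if now is None else now
        self._records[exchange_id] = {"created": now, "used": False}

    def redeem(self, exchange_id, now=None):
        """Succeed at most once, within the TTL; reject replays and expired URLs."""
        now = time.time() if now is None else now
        record = self._records.get(exchange_id)
        if record is None or record["used"]:
            return False  # unknown ID, or replay attempt
        if now - record["created"] > EXCHANGE_TTL_SECONDS:
            return False  # expired before pickup
        record["used"] = True
        return True
```

A real deployment would persist these records and garbage-collect expired ones; the sketch only shows the redeem-once-within-TTL rule.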
> - What's the relationship between the system that generated the URL and the system hosting this instance of the VC API that the wallet is now interacting with?
The presumption is that they're in the same trust boundary in the simplest case. There are cases where one can "hop" trust boundaries via the "interact" field in VPR. So you can start at one trust boundary, do part of the flow, and then hop to another trust boundary (using VPR + any other protocol). When you do this you MUST ensure that there is some sort of transferable trust from one boundary to the other -- like a JWT in a URL or a VC in POST data -- both of which can be expressed via VPR "interact".
> - Is the user aware, at the point that this URL is generated, that the wallet can get this credential? Put another way, when was consent obtained for the end user, or does that happen later?
This is an open question. I think your assertion is that consent MUST be acquired before they initiate the VC exchange. In that case, there are multiple ways it can happen, but the most straightforward is: The Issuer can ask the subject "Do you want to store this Student ID in your digital wallet?" -- and then a mediator is invoked or QR Code is displayed.
> - When does capability negotiation between the wallet and the issuer occur, to ensure the result is actually going to be usable by the wallet? Ideally the earlier the better here.
Yes, this would be the first exchange, where the Issuer would state the features it needs from the wallet (query 1 in the VPR), and the DID Auth request (query 2 in the VPR), and the wallet would receive those and, if it can't fulfill either query, tell the user why. While failing isn't the ideal outcome, if the wallet can't fulfill the request, it can't fulfill the request.
There is an option here where the wallet can register features it has in the mediator (CHAPI)... but remember, different profiles in the wallet might have different capabilities. So, even if a wallet supports CryptoSuiteX, that doesn't mean that any of the active profiles (DIDs) in the wallet does. This is why we do not believe that having a static wallet configuration can solve this fundamentally dynamic problem of configuration. It has to be done dynamically, in the protocol.
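To make the "two queries in the first exchange" idea concrete, here is an illustrative sketch of the wallet-side check. The query shapes (`RequiredFeatures`, the field names, the profile structure) are simplified stand-ins, not the normative VPR vocabulary:

```python
# Illustrative only: simplified stand-ins for VPR query types and a wallet profile.
issuer_vpr = {
    "query": [
        {"type": "RequiredFeatures", "cryptosuites": ["CryptoSuiteX"]},  # query 1
        {"type": "DIDAuthentication"},                                   # query 2
    ]
}

wallet_profile = {
    "did": "did:example:123",
    "cryptosuites": ["CryptoSuiteX", "CryptoSuiteY"],
    "supportsDidAuth": True,
}

def unmet_queries(vpr, profile):
    """Return the queries the ACTIVE wallet profile cannot fulfill,
    so the wallet can tell the user why the exchange would fail."""
    unmet = []
    for q in vpr["query"]:
        if q["type"] == "RequiredFeatures":
            if not set(q["cryptosuites"]) & set(profile["cryptosuites"]):
                unmet.append(q)
        elif q["type"] == "DIDAuthentication":
            if not profile["supportsDidAuth"]:
                unmet.append(q)
    return unmet
```

Because the check runs against the active profile (DID), not a static wallet-wide registration, it naturally handles the per-profile capability differences described above.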
I'm out of time for tonight, but hope to draw up the full flow diagram, as you requested, detailing the above, 12-ish hours from now.
> The simplest case is that the Issuer generated the URL. A more complex case is that an entity that the Issuer can communicate with, in a SECURE and out-of-band fashion, generated the URL. Yet another, more complex case is that a completely different entity generated the URL and placed encrypted data in the URL (e.g., a CWT) that only a particular recipient can decrypt. For the purposes of simplicity, let's just focus on the first two, because we haven't found any use cases that need the last one.
Ok, understood. So the most basic case is when the identifier encoded into the URL is just a random identifier that is associated with some session state on the issuer?
> If by "user" you mean "the subject of the VC", then yes, it did. When the subject of the VC has been through the auth process, they're given the URL over a secure transport (e.g., same-device CHAPI). Yes, insecure transports (cross-device QR Code) /could/ be an issue -- and we'll get to that later.
Yes, I do; that clarifies it.
> The presumption is the URL is valid until they pick up their VC OR a certain amount of time has elapsed (15 minutes). So, yes, replay attack prevention is expected to be implemented at the Issuer for "high value" credentials.
Ok.
> The presumption is that they're in the same trust boundary in the simplest case. There are cases where one can "hop" trust boundaries via the "interact" field in VPR. So you can start at one trust boundary, do part of the flow, and then hop to another trust boundary (using VPR + any other protocol). When you do this you MUST ensure that there is some sort of transferable trust from one boundary to the other -- like a JWT in a URL or a VC in POST data -- both of which can be expressed via VPR "interact".
Ok details are still fuzzy to me, but I think I understand the intent.
> This is an open question. I think your assertion is that consent MUST be acquired before they initiate the VC exchange. In that case, there are multiple ways it can happen, but the most straightforward is: The Issuer can ask the subject "Do you want to store this Student ID in your digital wallet?" -- and then a mediator is invoked or QR Code is displayed.
Doesn't have to be in all cases but I think a protocol that doesn't allow this as a possibility will run into issues with certain credential types.
> Yes, this would be the first exchange, where the Issuer would state the features it needs from the wallet (query 1 in the VPR), and the DID Auth request (query 2 in the VPR), and the wallet would receive those and, if it can't fulfill either query, tell the user why. While failing isn't the ideal outcome, if the wallet can't fulfill the request, it can't fulfill the request.
To me this is too late. By then the end user has likely been through an auth journey, probably clicked yes to a bunch of T's & C's, and maybe filled in some data; to find out after all that that their wallet cannot support the credential is liable to just annoy users immensely. I think the negotiation of capabilities has to happen much earlier.
> This is why we do not believe that having a static wallet configuration can solve this fundamentally dynamic problem of configuration. It has to be done dynamically, in the protocol.
I agree the information is liable to change over time, so the model needs to account for that, but assuming that it is so dynamic that the most practical method is to send all information on every issuance flow is extreme.
@tplooker,
> To me this is too late. By then the end user has likely been through an auth journey, probably clicked yes to a bunch of T's & C's, and maybe filled in some data; to find out after all that that their wallet cannot support the credential is liable to just annoy users immensely. I think the negotiation of capabilities has to happen much earlier.
There are a few things to consider here.
If the user doesn't want the VC more than they want to have to use another wallet, they will abandon the exchange. In this case, we have a situation where the issuer will need to upgrade their services to support well-liked wallets to increase user acceptance. If the user is on the fence -- then they will be more likely to store the VC in a new, issuer-recommended wallet at the end of the exchange vs. if they have made no commitments during a flow. Which of these is actually better for the user depends on the actual value of the VC (vs. its perceived initial value) and what the changes to the wallets and issuers would be if the user rejected it outright. So there are interesting market dynamics questions here. It's important to remember that the best outcomes for users on the whole are not necessarily the result of the most convenient UX in failure modes.
Wallet selection can still be done first with these technologies, where the URL has an offer that isn't bound to any particular user. This results in an exchange request that, e.g., asks for zcaps to write the credential to the wallet and includes a user-mediated URL for the wallet to direct the user to. The user then follows the issuer flow (user auth/whatever) on the issuer website. At the end of that flow, the issuer uses the zcaps to send the resulting credential to the user's wallet.
If the user does not have a wallet that can work with the issuer but the user really wants the credential anyway, the user can select a new wallet from the recommended list provided by the issuer. The user can be informed that their existing wallet cannot interoperate with the issuer at this time. This situation should only occur when there is an interop failure, where the user's wallet doesn't support some choices made by an issuer of a VC that they really care about. This failing should be surfaced in some way that results in the user using their influence to create a better wallet marketplace (with increased interop) through their choices. As noted above, it's not clear which flow (wallet selection at the beginning or at the end) would result in maximizing user influence.
> It's important to remember that the best outcomes for users on the whole are not necessarily the result of the most convenient UX in failure modes.
I follow this point, but I'm struggling to see the conclusion you are drawing here: when is it ever advantageous to intentionally delay failure in a system, knowing full well that the engaging party (the End-User) is having to invest in the process? I don't see how this doesn't always end in End-User frustration, or in them making a decision from a position of compromise (e.g., "fine, I'll store it in the Issuer's recommended wallet because I've been through all of this work to get this far").
> Wallet selection can still be done first with these technologies, where the URL has an offer that isn't bound to any particular user. This results in an exchange request that, e.g., asks for zcaps to write the credential to the wallet and includes a user-mediated URL for the wallet to direct the user to. The user then follows the issuer flow (user auth/whatever) on the issuer website. At the end of that flow, the issuer uses the zcaps to send the resulting credential to the user's wallet.
I understand the theory here, but again, without seeing how it shakes out end to end it's hard to fully evaluate.
> This failing should be surfaced in some way that results in the user using their influence to create a better wallet marketplace (with increased interop) through their choices. As noted above, it's not clear which flow (wallet selection at the beginning or at the end) would result in maximizing user influence.
Again I think I understand your perspective, but unsure how you are reaching your conclusion.
I'm providing ONE example of an end-to-end flow that is end-to-end secure -- there are other end-to-end flows that we can get into later. This does not focus on client feature detection; that's issue #280. This is a same-device flow using a web-based wallet:
```mermaid
sequenceDiagram
    participant H as Holder
    participant WS as Wallet Service
    participant WA as Wallet App (Website)
    participant CH as CHAPI
    participant IW as Issuer App (Website)
    participant IS as Issuer Service
    autonumber
    H->>IW: Authenticate to pick up Student ID (authn via MFA - (OIDC | email + password) + pin)
    IW->>IS: Generate VPR (bind to MFA outcome)
    IS->>IW: VPR (containing .../exchanges/new-student-id/z123456)
    IW->>CH: Invoke CHAPI with VPR
    CH->>WA: Invoke registered Credential Handler
    WA->>WS: Instruct Wallet Service to perform VPR flow
    note right of IW: Calls below this note are proxied through the Issuer Website to the Issuer Service
    WS->>IS: Request Student ID via POST .../exchanges/new-student-id/z123456 (VC-API)
    IS->>WS: DID Auth Request (VC-API + VPR)
    WS->>IS: DID Auth Response (VC-API + VP)
    IS->>WS: Present new Student ID (VC-API + VP)
```
Here is the same flow that is just as secure using Native CHAPI (Web Share):
```mermaid
sequenceDiagram
    participant H as Holder
    participant WS as Wallet Service (in Native App)
    participant WA as Wallet App (Native)
    participant CH as CHAPI
    participant IW as Issuer App (Website)
    participant IS as Issuer Service
    autonumber
    H->>IW: Authenticate to pick up Student ID (authn via MFA - (OIDC | email + password) + pin)
    IW->>IS: Generate VPR (bind to MFA outcome)
    IS->>IW: VPR (containing .../exchanges/new-student-id/z123456)
    IW->>CH: Invoke CHAPI with VPR
    CH->>WA: Invoke registered Credential Handler
    WA->>WS: Instruct Wallet Service to perform VPR flow
    note right of IW: Calls below this note are proxied through the Issuer Website to the Issuer Service
    WS->>IS: Request Student ID via POST .../exchanges/new-student-id/z123456 (VC-API)
    IS->>WS: DID Auth Request (VC-API + VPR)
    WS->>IS: DID Auth Response (VC-API + VP)
    IS->>WS: Present new Student ID (VC-API + VP)
```
Do you see anything here that is not end-to-end secure, @tplooker?
@tplooker,
> Again I think I understand your perspective, but unsure how you are reaching your conclusion.
What I'm trying to highlight as important is not the difference between non-optimal failure mode UX and a perfect solution (no drawbacks) that addresses it. Obviously the latter is preferable. Rather, the trouble is with the difference between non-optimal failure mode UX and a solution to it that harms user choice more in ways that may be indirect and, therefore, can be overlooked when viewing the failure mode UX in a vacuum.
This is, to me, a significant concern with a solution that requires user wallet or wallet vendor registration with the RP. If it's more indirectly harmful to users, I am suggesting that accepting non-optimal failure mode UX is the better choice (absent any other options).
In other words, we should optimize for user choice across the ecosystem generally, not just in failure mode UX.
I have always felt authorization was out of scope for this API, given that it is meant to be completely generic.
Consider the following scenarios:
1. Does not require "authorization". It's a digital Big Ben.
2. Requires an authorization token bound to a holder activity at the issuer site, and bound to that account holder session.
3. Requires holder software authorization (but only because I am presuming this is how the issuer will help guarantee success of the process), like an API Key or OAuth CC grant; but the inline VPR messages will allow a VPR message to provide a trust factor/account verification link that the issuer can leverage, and then the exchange is at an elevated trust level.
4. Is my acknowledging that there are use cases that are about business processes and less about direct "personal" interaction (I butchered a supply chain use case... sorry).
Since the API is generic, and we don't know what we are protecting, authorization should be out of scope.
Removing the requirement for a specific authorization protocol opens up the scenarios in how this API can be used - BUT it also may lead to interop challenges (how does a client know how and when to call an API endpoint?) and also may be a "foot gun" if we cannot provide proper guidance/best practices on how to use this new tech in a secure manner.
By comparison, the FAPI API was able to carefully define and specify the authorization requirements because it also defined the contents of the API (various levels of financial data, from ABM locator to account transaction history).
Without a concrete API context here, we will continue to spin on authorization requirements, I feel.
> I'm providing ONE example of an end-to-end flow (without focusing on wallet configuration) that is end-to-end secure -- there are other end-to-end flows that we can get into later. This is a same-device flow using a web-based wallet:
Thanks this helps
> Do you see anything here that is not end-to-end secure, @tplooker?
I think being absolute in feedback for this is quite difficult, because it depends on your assumptions. Do I think there are certain use cases that work with this model? Yes. However, I do think there are some significant limitations which would make it difficult for this approach to address other use cases.
1. The issuer of the capability (URL) has no idea about anything to do with the software client it is intended for; in fact, it's completely blind to whether that software is on the same device or is going to be piped through an out-of-band channel to somewhere entirely different. Meaning MITM/hijack style attacks are essentially impossible to mitigate at all. (Note: other protocol options may not be perfect in this regard, but I believe them to be significantly more secure against this style of attack.)
2. The possibility of phishing attacks: say the End User selects the wrong wallet on the CHAPI screen, or for instance they installed a bad app on their phone; the consequence is that app ends up with the credential without any further action from the User.
3. There is no way to enable issuer-based consent where the issuer asks the user "do you want to issue a credential to wallet x?". Now, there are likely plenty of cases where this isn't required, but eliminating it as a possible option is very limiting.
4. The negotiation on the wallet's compatibility with the issuer happens too late; in these flows the user has been auth'd, accepted T's & C's, and clicked issue credential, only to potentially find the wallet they want to use is incompatible.
I agree with much of Mike's comments above; however, I think the problem I'm seeing is that this API appears to be re-inventing certain aspects of delegated authorization protocols, namely the expression of the authorization capability (URL) and how it is transmitted to the authorized party (CHAPI), rather than looking to use a standard protocol (OAuth2) to do this instead. In doing this, the work required for this API's definition gets exponentially more complex.
The benefit of using standard authorization protocols is that you get a consistent approach that has a wealth of possible mechanisms you can layer in to satisfy different security models. For instance, if bearer-based access tokens aren't secure enough, layer in DPOP. Want to mitigate CSRF? Add in PKCE. I don't see the same depth in the possible choices here, because of how the flow is modelled, and that is likely where the limitations will arise.
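For reference, the PKCE layering mentioned above is a small amount of mechanism: the client sends a hash commitment up front and reveals the preimage only at the token endpoint. This sketch shows the S256 method from RFC 7636 (the function names are illustrative, the computation is the standard one):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """RFC 7636 S256: the client keeps the verifier secret and sends only the
    challenge with the authorization request; the later token request must
    reveal a verifier that hashes to the same challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def challenge_matches(verifier, challenge):
    """Server-side check at the token endpoint."""
    digest = hashlib.sha256(verifier.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge
```

An intercepted authorization code (or, by analogy, an intercepted capability URL) is useless without the verifier, which never travels over the front channel.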
> What I'm trying to highlight as important is not the difference between non-optimal failure mode UX and a perfect solution (no drawbacks) that addresses it. Obviously the latter is preferable. Rather, the trouble is with the difference between non-optimal failure mode UX and a solution to it that harms user choice more in ways that may be indirect and, therefore, can be overlooked when viewing the failure mode UX in a vacuum.
Right, understand and agree with the position, I just think we may disagree on whether or not there are alternative protocol designs that resolve this UX issue but still preserve user choice.
> This is, to me, a significant concern with a solution that requires user wallet or wallet vendor registration with the RP. If it's more indirectly harmful to users, I am suggesting that accepting non-optimal failure mode UX is the better choice (absent any other options).
Again, I think the disconnect here is that I see no difference between a protocol that does "wallet registration" and one where the "wallet is negotiating its capabilities with the issuer via a DID Auth response"; they are one and the same in their purpose. Mechanically they may do it slightly differently, but any risk that manifests around user choice is the same.
@mavarley you didn't answer the question :) -- Do you see anything here, in that ONE specific flow, that is not end-to-end secure? I get all the other points you're raising, and we'll get to them, but we need to start somewhere. Let me try going at it from the other direction:
The same-device end-to-end flow described above is secure for both web-based and native wallets. It does not need OIDC, DPOP, or PKCE to be secure. Full stop.
I believe @tplooker has confirmed that it is end-to-end secure in his response, and would like to discuss other use cases. We can talk about cross-device flows and security models for those after we get confirmation from everyone that the end-to-end flow described above is secure. :)
@tplooker,
I'd like to see if we can work through some of the concerns you raised to whittle down the list:
> The issuer of the capability (URL) has no idea about anything to do with the software client it is intended for; in fact, it's completely blind to whether that software is on the same device or is going to be piped through an out-of-band channel to somewhere entirely different. Meaning MITM/hijack style attacks are essentially impossible to mitigate at all. (Note: other protocol options may not be perfect in this regard, but I believe them to be significantly more secure against this style of attack.)
If CHAPI is trusted to prevent MiTM attacks (in the same way TLS is with OIDC), then this attack should not be possible. That's the limit of the security model and it functions the same way with OIDC. Can we agree to this or is there some other concern?
> There is no way to enable issuer-based consent where the issuer asks the user "do you want to issue a credential to wallet x?". Now, there are likely plenty of cases where this isn't required, but eliminating it as a possible option is very limiting.
I think the issuer asking "Do you want to view this website using browser X?" before showing their web page would be very limiting in a negative way. I consider the question above to be of the same sort with similarly negative consequences. I consider this to be an information minimization use case -- where the issuer doesn't need to know which wallet I prefer to use and that knowing that information could be used against me, harming my free choices.
I think the roles are being confused here. The issuer is not my agent; the browser is, and so is my wallet of choice. The browser (or a polyfill for it, like CHAPI) should help me make sure I choose the right wallet to use. And my wallet of choice, as my agent, should help me make decisions related to credentials (like whether or not I store them from certain issuers and whether or not I share them with certain verifiers). This is the right model, IMO.
> @mavarley you didn't answer the question :) -- Do you see anything here, in that ONE specific flow, that is not end-to-end secure? I get all the other points you're raising, and we'll get to them, but we need to start somewhere. Let me try going at it from the other direction...
Actually, I answered the original question - the title of this issue and the original one in the first message:
> This approach has been challenged to be insecure, so this issue is to discuss the attack models and security characteristics of this approach.
Without understanding what is being protected, or the risks, no evaluation can be made as to the "Security".
But I did dodge the latter, more specific question on the proposed flow:
> The [same-device end-to-end flow described above](https://github.com/w3c-ccg/vc-api/issues/279#issuecomment-1085175331) is secure for both web-based and native wallets. It does not need OIDC, DPOP, or PKCE to be secure. Full stop.
So here goes. If we apply the following assumptions:

- CHAPI is sufficiently secure (against session hijacking/MITM/etc., which I think it is but have not studied closely), and
- the URL/exchange-id endpoint is protected from replay (i.e., after the initial DIDAuth, the endpoint is bound to that DID only for the rest of the endpoint's lifetime),
I believe the above is secure enough to transfer a credential relating to an account holder to a wallet of the account holder's choosing - the exchange ID is acting as an auth token, effectively, until DIDAuth can be completed.
The risk in binding the credential to the same subject in the web session is, as @tplooker pointed out: the 'wrong wallet' may get selected on a shared machine, the wallet or DID method may not match the required criteria, there is no confirmation on the web channel as to which holder software the credential was actually delivered to, etc. But depending on the credential's usefulness/value and associated protections, maybe these are not concerns.
And as I noted above, if there is an additional binding that can be performed (like the first message having a PKCE-like value bound to the original session, or an SMS one-time code that can be entered from the holder software), then these may bolster the security posture -- but these types of nuances depend on the resource being protected.
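The SMS one-time-code idea can be sketched in a few lines (the helper names are hypothetical; a real implementation would also rate-limit attempts and expire codes):

```python
import hmac
import secrets

# Hypothetical sketch: the issuer binds a short code to the exchange, sends it
# to the holder out of band (e.g., SMS), and the wallet must present it with
# its first request to the exchange endpoint.

def bind_one_time_code(exchange):
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit code, sent out of band
    exchange["otc"] = code
    return code

def check_one_time_code(exchange, presented):
    expected = exchange.pop("otc", None)  # single use: remove on first check
    return expected is not None and hmac.compare_digest(expected, presented)
```

This adds a second factor that an attacker who merely learned the capability URL does not possess, which is exactly the "additional binding" being discussed.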
No one can answer "how much is secure enough" without the full picture - but again, this spec/API is being designed generically - and should leave room for a wide range of use cases and security contexts.
I will note I would have serious reservations about CHAPI being tightly coupled to this API, for the exact same reason.
To be clearer: no, I don't think this model is sufficiently end-to-end secure for several important use cases. I also don't see how you can layer on additional mechanisms to secure it further, because of how the flow is structured. I think the assumption that you generate the capability before knowing anything about the party you are giving it to is wrong.
> The same-device end-to-end flow described above is secure for both web-based and native wallets. It does not need OIDC, DPOP, or PKCE to be secure. Full stop.
As said above, I don't believe this is the case for several important use cases.
> If CHAPI is trusted to prevent MiTM attacks (in the same way TLS is with OIDC), then this attack should not be possible. That's the limit of the security model and it functions the same way with OIDC. Can we agree to this or is there some other concern?
It's not just CHAPI that needs to be secure against this here, to a level comparable with TLS; it's everything from the server that generated this URL through to where it ends up. It's also not the only layer of security OAuth2 or OIDC has when interacting with the token endpoint, either: you could use client authentication with public/private keys, or PKCE, to further mitigate several attack vectors. You can't do this with the above flow, because there is no interaction with the wallet prior to just handing them the capability (URL). I also don't think you can equate the security provided by a transport layer to a JavaScript polyfill providing mediation that requires end-user interaction; they have completely different characteristics.
> I think the issuer asking "Do you want to view this website using browser X?" before showing their web page would be very limiting in a negative way. I consider the question above to be of the same sort with similarly negative consequences. I consider this to be an information minimization use case -- where the issuer doesn't need to know which wallet I prefer to use and that knowing that information could be used against me, harming my free choices.
Sure, but that's because your mental model is that a wallet is akin to a web browser, when mine is not.
I think the roles are being confused here. The issuer is not my agent. The browser is and my wallet of choice is. The browser (or a polyfill for it like CHAPI) should help me make sure I choose the right wallet to use. And my wallet of choice, as my agent, should help me make decisions related to credentials (like whether or not I store them from certain issuers and whether or not I share them with certain verifiers). This is the right model, IMO.
I get that the issuer is not my agent, but do I not interact with the issuer via an agent, the browser? And then why is it not a good idea to support a flow that allows, via this agent, for the issuer to convey to me the implications of issuing a credential to my wallet and obtain my consent? Piping this through the wallet is just vastly more complex and has significant implications for the trust model.
@mavarley wrote:
Actually, I answered the original question - the title of this issue and the original one in the first message
Fair enough. :)
Without understanding what is being protected, or the risks, no evaluation can be made as to the "Security".
Would saying that it's a high value credential, such as a driver's license, help?
If we apply the following assumptions...
- CHAPI is sufficiently secure (against session hijacking/mitm/etc... which I think it is but have not studied closely) and
CHAPI is sufficiently secure as long as 1) the browser isn't compromised, 2) the issuer site isn't compromised, and 3) authn.io (the CHAPI site) isn't compromised. I'll note that these are the basic assumptions that any 3rd-party login flow makes.
- The URL / exchange-id endpoint is protected from replay (i.e., after the initial DIDAuth, the endpoint is bound to that DID only, and for the rest of the endpoint's lifetime). The exchange-id is acting as an auth token, effectively, and there are Good Reasons not to have auth tokens in URLs (they end up in logs, browser history, etc.), so there needs to be a trust step-up and an expiration time...
I'll note that placing the "one-time transaction ID" in the URL is not necessary; VPRs can encode it as POST data in an `interact` entry. That does not mean it changes the "random" requirement, but it does address your concern about "it's bad to place random security tokens in URLs because they can be logged to disk".
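As an illustration of the shape this might take (the field names below are hypothetical, not taken from the VPR or VC-API specs), the one-time transaction ID could ride in the POST body while the exchange URL itself stays free of secrets:

```python
# Hypothetical VPR "interact" entry: the exchange URL is stable, while the
# one-time transaction ID travels as POST body data, so it never lands in
# server logs or browser history the way URL components do.
# Service type and field names are illustrative only.
vpr_interact = {
    "interact": {
        "service": [{
            "type": "ExchangeService",
            "serviceEndpoint": "https://issuer.example/exchanges/new-student-id",
        }]
    },
    "transactionData": {"transactionId": "z123456"},
}

endpoint = vpr_interact["interact"]["service"][0]["serviceEndpoint"]
body = vpr_interact["transactionData"]

# The secret is absent from the URL the wallet will POST to.
assert "z123456" not in endpoint
assert body["transactionId"] == "z123456"
```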
I believe the above is secure enough to transfer a credential relating to an account holder to a wallet of the account holder's choosing - the exchange ID is acting as an auth token, effectively, until DIDAuth can be completed.
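A minimal sketch of the replay rule just described: the exchange-id acts as a bearer token only until the first successful DIDAuth, after which the endpoint is bound to that DID. Class and method names are hypothetical, and real expiry would be time-based rather than the use-counter stand-in here:

```python
class Exchange:
    """Toy model of an exchange endpoint that binds to the first DID that
    completes DIDAuth, then rejects every other DID for its lifetime."""

    def __init__(self, exchange_id, ttl_uses=10):
        self.exchange_id = exchange_id
        self.bound_did = None
        self.remaining = ttl_uses  # stand-in for a real expiration time

    def request(self, did):
        if self.remaining <= 0:
            return "expired"
        self.remaining -= 1
        if self.bound_did is None:
            self.bound_did = did  # first successful DIDAuth wins
            return "issued"
        return "issued" if did == self.bound_did else "rejected"

ex = Exchange("z123456")
assert ex.request("did:example:alice") == "issued"     # legitimate first claim
assert ex.request("did:example:mallory") == "rejected" # replay by another DID fails
assert ex.request("did:example:alice") == "issued"     # the bound DID may continue
```

This illustrates why the window of concern is only between URL generation and the first DIDAuth, which is the interval the discussion below focuses on.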
Good, so at least you and I agree that there is a mechanism that utilizes CHAPI + VPR + VC-API to accomplish an end-to-end secure flow.
The risk on the binding of the credential to the same subject in the web session is, as @tplooker pointed out, that the 'wrong wallet' may get selected on a shared machine
This presumes that shared wallets don't have any authentication mechanism or PIN lock, or that the individual on the shared computer has not hit the session timeout for their wallet. I hope that we can all agree that this is a corner case that we don't need to design for (or if you do, please say so).
the wallet or DID method may not match the required criteria
That is addressed in issue #280.
there is no confirmation on the web channel as to which holder software the credential was actually delivered to
That is technically possible (CHAPI does provide an acknowledgement that storage succeeded, and VC-API can request the name, version, and features of Holder software at any point during a VC-API exchange), but as we've discussed on the CCG mailing list -- it is 1) none of the Issuer's business who a person chooses as their wallet provider, as long as the wallet provides the security features required by the Issuer, and 2) an unnecessary vector for centralization.
And as I noted above, if there is an additional binding that can be performed (like the first message has a PKCE-like value that was bound to the original session, or there is an SMS one-time-code that can be entered from the holder software...) then these may bolster the security posture, but these types of nuances depend on the resource being protected.
You can always do additional bindings before the exchange above to 1) pre-register a DID with the system doing authn, 2) provide a PKCE-like value, 3) send an SMS code, 4) do OTP validation, or 5) in-person verification during DIDAuth. We can explore what those flows look like, but all they do is make a secure end-to-end flow more secure.
No one can answer "how much is secure enough" without the full picture - but again, this spec/API is being designed generically - and should leave room for a wide range of use cases and security contexts.
Yes, and what I'm trying to find out is if we can at least start w/ a security proof for an end-to-end flow that we all agree is secure enough for a high value use case like the receipt of a driver's license from an Issuer.
I will note I would have serious reservations about requiring CHAPI being tightly coupled to this API, for the exact same reason.
CHAPI is not tightly coupled to this API, nor should it ever be. Remember, it is possible to do this flow using purely CHAPI + VPR. In addition, you can run steps 7-10 over any end-to-end secure protocol (TLS, DIDCommv2, etc.).
@tplooker wrote:
I think being absolute in feedback for this is quite difficult because it depends on your assumptions. Do I think there are certain use cases that work with this model? Yes.
Ok, good, then both @mavarley and @tplooker agree that the flow described above is secure for both web-based and native wallets.
No, Manu, I'm saying something different, and I believe @mavarley is too, but that's for him to confirm :). What I'm saying is that proving security, or that a system is secure, requires a set of assumptions and a threat model. What I responded with is that this protocol may be secure in certain use cases; what I also said is that it is likely insecure in others. I also pointed out that I believe architecturally the way the flow is designed is limited in what additional security mechanisms may be added to improve the security for certain use cases.
CHAPI is sufficiently secure as long as 1) the browser isn't compromised, 2) the issuer site isn't compromised, and 3) authn.io (the CHAPI site) isn't compromised. I'll note that these are the basic assumptions that any 3rd-party login flow makes.
These are not the same assumptions made during an ordinary federated login; there are usually two primary domains involved in a federated identity flow, the RP's and the IdP's, whereas CHAPI has three (because of authn.io).
Put another way, 3) does not exist in federated identity, hence the assumptions are not the same.
I'll note that placing the "one-time transaction ID" in the URL is not necessary; VPRs can encode it as POST data in an `interact` entry. That does not mean it changes the "random" requirement, but it does address your concern about "it's bad to place random security tokens in URLs because they can be logged to disk"
Ok, it feels like the expression of the capability is changing here. I was under the impression that it was URL-based; are you now saying the capability is something that appears in the body of an HTTP request? I second Mike's concern about URL-based capabilities in general.
Yes, and what I'm trying to find out is if we can at least start w/ a security proof for an end-to-end flow that we all agree is secure enough for a high value use case like the receipt of a driver's license from an Issuer.
Just to be clear, for use cases like this I think the security model is insufficient.
Now that we've established that the flows described above are secure for both web-based and native wallets...
Ok, so, looks like we'll have to go back and analyze the first flow diagram with more specifics. :P Rather than have me guess what those are, @tplooker and @mavarley -- please provide the assumptions necessary to detail a threat model that allows us to analyze these flows. I was suggesting it was for a Student ID. We could ratchet that up to a Driver's License? At what point is that flow not secure?
The rest of this was written while @tplooker was responding, so take it for whatever it's worth:
Let's look at a use case where you cannot depend on the interaction URL in a VPR to carry an authz token. Namely, the cross-device, QR Code-based invocation of a native wallet:
sequenceDiagram
participant H as Holder
participant WS as Wallet Service (in Native App)
participant WA as Wallet App (Native)
participant CH as CHAPI
participant IW as Issuer App (Website)
participant IS as Issuer Service
autonumber
H->>IW: Authenticate to pick up Student ID (authn via MFA - (OIDC | email + password) + pin)
IW->>IS: Generate VPR (bind to MFA outcome)
IS->>IW: VPR (containing .../exchanges/new-student-id/z123456)
note over WS, IW: This next step is insecure and susceptible to a MitM attack
IW->>WS: Show VPR as QR Code
note right of IW: Calls below this note are proxied through the Issuer Website to the Issuer Service
WS->>IS: Request Student ID via POST .../exchanges/new-student-id/z123456 (VC-API)
IS->>WS: DID Auth Request (VC-API + VPR)
WS->>IS: DID Auth Response (VC-API + VP)
IS->>WS: Present new Student ID (VC-API + VP)
@tplooker wrote:
To be clearer, no I dont think this model is sufficiently end to end secure for several important usecases.
Is this one of the use cases you were alluding to, @tplooker?
I also dont see how you can layer on additional mechanisms to secure it further, because of how the flow is structured. I think the assumption that you generate the capability before knowing anything about the party you are giving it to, is wrong.
In this particular scenario above, it is "wrong" presuming that you want some sort of tight binding of device to entity authn'ing AND you have to hop over an insecure channel. When invoked from a native app, OIDC has to do this a lot (while CHAPI does not), and both you and @mavarley have suggested DPoP and PKCE as a solution to this. I think you've also suggested pre-registration of a client as a solution to this (though, I don't know if you have something other than DPoP in mind here). Could you please elaborate on all the solutions that you feel mitigate this attack, or just focus on the ones you think we should implement ecosystem-wide?
The same-device end-to-end flow described above is secure for both web-based and native wallets. It does not need OIDC, DPoP, or PKCE to be secure. Full stop.
As said above, I don't believe this is the case for several important use cases.
Can you list the important use cases so we can analyze them, please?
You could use client authentication with public/private keys or PKCE to further mitigate several attack vectors. You can't do this with the above flow, because there is no interaction with the wallet prior to just handing them the capability (URL).
There could be, but for us to explore those things, we need some threat models to analyze.
I also don't think you can equate the security provided by a transport layer to a JavaScript polyfill providing mediation that requires end-user interaction; they have completely different characteristics.
I don't understand this statement, could you please elaborate?
@tplooker,
It's not just CHAPI that needs to be secure against this here to a comparable level with TLS, it's everything from the server that generated this URL through to where it ends up.
I thought there was a general understanding that the server was already assumed secure -- as it needs to be in any flow / protocol under consideration, right?
So, given that assumption, can we agree that if CHAPI is secure against MiTM attacks, then it can safely transfer the capability URL to the wallet selected by the user? Another way of looking at this is: presume CHAPI is built into the browser. It is easier to secure now (than a polyfill), but that doesn't change the fact that if we assume it is secure then we don't have to worry about the attack you mentioned that I was originally responding to, right?
To be clear, I'm not talking about the degree of difficulty in ensuring CHAPI is made secure, rather I'm trying to break the problem down into simpler parts that we can reason about. So, assuming it's secure (and the server is secure, and only secure channels are used to communicate over the network), are we in agreement that the mentioned attack is not possible?
It's also not the only layer of security OIDC or OAuth2 has when interacting with the token endpoint either: you could use client authentication with public/private keys or PKCE to further mitigate several attack vectors.
You should state what these attack vectors are and we should walk through them -- because they may have already been mitigated in other ways via differences in how CHAPI functions.
Now that we've established that the https://github.com/w3c-ccg/vc-api/issues/279#issuecomment-1085175331 are secure for both web-based and native wallets... let's look at a use case where you cannot depend on the interaction URL in a VPR to carry an authz token. Namely, the cross-device, QR Code-based invocation of a native wallet:
:) Not trying to be pedantic here, but I'm not sure we have established this.
Is this one of the use cases you were alluding to, @tplooker?
Yes
In this particular scenario above, it is "wrong" presuming that you want some sort of tight binding of device to entity authn'ing AND you have to hop over an insecure channel.
Can you elaborate on why this is "wrong"?
When invoked from a native app, OIDC has to do this a lot (while CHAPI does not), and both you and @mavarley have suggested DPOP and PKCE as a solution to this.
To be clear, I think the fact that CHAPI chooses not to / cannot do this is why its security model is insufficient in certain scenarios.
In general, I think the biggest problem with this CHAPI flow is that there is no way to tie the end-user authentication event and/or established session with the issuer to the issued credential; that alone becomes the source of several possible attacks, both on the issuer and the wallet. Really, when an issuer pipes that URL out over CHAPI it could go anywhere; the issuer has absolutely no idea. That model is fundamentally different to how a delegated authorization protocol like OAuth2 works.
@tplooker wrote:
What I'm saying is that proving security, or that a system is secure, requires a set of assumptions and a threat model.
Agreed.
What I responded with is that this protocol may be secure in certain use cases; what I also said is that it is likely insecure in others.
In which cases is it insecure? Define a set of realistic assumptions and a threat model so we can analyze it, please.
I also pointed out that I believe architecturally the way the flow is designed is limited in what additional security mechanisms may be added to improve the security for certain use cases.
Sure, but we need to unpack that. At this point, it's just an assertion by you. You need to prove that the design does not allow for additional security features (that are required for specific use cases). I can start listing ways that it does, but without knowing what attack you're specifically talking about, I'll just be playing "go fetch a rock". :)
Put another way, 3) does not exist in federated identity, hence the assumptions are not the same
Yes, but 3 (CHAPI) exists for a reason -- to provide user choice and establish an open wallet ecosystem. IOW, it's just another secure system in a set of secure systems. At present, there is no other solution being proposed that has the same reach as CHAPI for same-device and web or native wallets, right?
Ok, it feels like the expression of the capability is changing here. I was under the impression that it was URL-based; are you now saying the capability is something that appears in the body of an HTTP request? I second Mike's concern about URL-based capabilities in general.
The `interact` mechanism in VPR presumes a POST to kick off an exchange flow: https://w3c-ccg.github.io/vc-api/#example-step-1-request-to-issuer-initiate-degree-refresh-exchange ... POST data can be included in the `interact` entry in a VPR. We haven't written this down yet, as there was quite a bit of gnashing of teeth around the concept of POST'ing arbitrary JSON data when initiating an exchange in the VC-API work item group. That said, we do use the mechanism in some of our customer/pilot (going to production) programs.
Just to be clear, for use cases like this I think the security model is insufficient.
By that, do you mean "Driver's Licenses"? If so, please detail the assumptions and threat model you're thinking about... there are too many variables for me to guess accurately.
I thought there was a general understanding that the server was already assumed secure -- as it needs to be in any flow / protocol under consideration, right?
Yes, but there is a massive difference between a client (wallet) making a direct request to a server secured over TLS to receive its capability (access_token in OAuth2), vs. a wallet receiving a capability (URL) via the issuer's website, through a browser, through a 3rd-party polyfill. In the latter, the security surface is much vaster than just TLS for that hop in the flow; I don't see how that is unclear.
@tplooker,
In general I think the biggest problem with this CHAPI flow is that there is no way to tie the end user authentication event and or established session with the issuer to the issued credential, that alone becomes the source of several possible attacks both on the issuer and the wallet.
Can you detail a specific case / flow with a clear attacker here? Then we can look at what happens with OIDC w/appropriate mitigations and what happens in the CHAPI case. It's not clear to me whether the attacker you're concerned about is colluding with the user or what the specific threat is that you're concerned about. If there's more than one threat -- let's just start with the simplest one and walk through it ... and then we can go through and compare the other cases with greater complexity.
@tplooker,
Yes, but there is a massive difference between a client (wallet) making a direct request to a server secured over TLS to receive its capability (access_token in OAuth2), vs. a wallet receiving a capability (URL) via the issuer's website, through a browser, through a 3rd-party polyfill. In the latter, the security surface is much vaster than just TLS for that hop in the flow; I don't see how that is unclear.
That's not unclear. I agree that the two situations are not identical. But we're talking past each other.
I keep trying to say which components we have to assume are secure in order to have a secure model. You keep saying "making component X secure vs. making component Y secure is not the same". These are orthogonal. I think we need to start top down and just say what we must assume is secure in order to have a secure model -- otherwise we can't easily get anywhere ... we just keep getting lost in the details.
Once we have agreement on what we must assume is secure in order to have a secure flow, then we can talk about the degree of difficulty in meeting those assumptions, i.e., we can dive deeper into each component that must be secure and what must be done to guarantee that. Does that make sense?
If so, can we agree that if CHAPI is secure -- the attack you mentioned is not possible? Again, if the answer to that is "yes", it makes no statement on the relative difficulty of "securing CHAPI".
@tplooker,
In case I'm still not being clear, when I say "If CHAPI is trusted to prevent MiTM attacks (in the same way TLS is with OIDC)" -- "in the same way" this is not a comparison of how CHAPI is made secure vs. how TLS is made secure. This is just talking about the hypothetical single attribute of "secure=true", regardless of how it happens (or if it's even feasible).
I could have just as easily not said "CHAPI" and have said "Suppose we have a secure channel X that transfers the capability (URL) to the user's wallet of choice". Here, "X must be secure" is a requirement for security in the system in the same way that "TLS must be secure" is. These things are identified as components that MUST be secure for the whole system to be considered secure.
If we can get agreement on that, then we can move on to talk about use cases that will work based on those presumptions and/or we can zoom in on CHAPI and ask "Ok, but within the CHAPI component, what do we need to do to make it secure?" And, yes, those things will necessarily be different from "TLS" (or "OIDC" or "PKCE", etc.). I should note that I expect some of those differences are important in understanding differences with choices made with OIDC and so on -- to mitigate various threats.
Ok, good, then both @mavarley and @tplooker agree that the flow described in https://github.com/w3c-ccg/vc-api/issues/279#issuecomment-1085175331 is secure for both web-based and native wallets.
@msporny - this is a broad statement and I'd like to qualify it; and also need to catch up on the continuing thread so please bear with me if this has been covered a bit above.
I believe the method described is secure given that certain assumptions and conditions are met (and I will elaborate), but there is only one "Secure" system in my mind, and that is an un-plugged machine, smashed with a hammer, buried 6ft under in an unmarked grave...
But saying "nothing is ever secure!" is not productive to what we are trying to achieve here, so I might instead say that the described system meets a certain level of security, or assurance, etc...
I think the system described is as secure as, for example, card-not-present payments on the internet today. Enough information is shared 'digitally' to provide a level of confidence to the receiver that the individual entering the card information has the card in their hand and is the card holder (by entering the billing address, for example).
Are card-not-present payment transactions the most secure form of online payment? no... Are they used successfully in billions of dollars in transactions a day? yes!
Good enough for a digital driver's license or PRC card? In a 'card-holder not present Identity' transaction, I see a lot of use cases being satisfied, or at least bootstrapped.
I will catch up on the thread and be more precise about the additional controls that need consideration, as well as how this protocol may be extended to raise the assurance level, but I needed to clarify what was said about "secure". Thanks.
@mavarley wrote:
Good enough for a digital driver's license or PRC card? In a 'card-holder not present Identity' transaction, I see a lot of use cases being satisfied, or at least bootstrapped.
Yes, thank you for putting it so succinctly and for the analogy. I think that's exactly what we're going for here. We're not going for "perfectly secure", just "secure enough for the transaction at hand". The transaction at hand being an Issuer issuing a digital identity document, such as a Student ID or Driver's License, and transferring it into a digital wallet for the purposes of using it for remote scenarios ("card-holder not present for identity transaction") OR in-person scenarios ("card-holder present with an identity document photo check on a digitally signed image"). The latter requires less security than the former because it's fundamentally a bearer-style interaction with a biometric validation.
In case I'm still not being clear, when I say "If CHAPI is trusted to prevent MiTM attacks (in the same way TLS is with OIDC)" -- "in the same way" this is not a comparison of how CHAPI is made secure vs. how TLS is made secure. This is just talking about the hypothetical single attribute of "secure=true", regardless of how it happens (or if it's even feasible).
Right yes now I understand the distinction you are trying to make and I agree.
For clarity, I'm going to offer the sequence diagram for OIDC CP / OIDC4VI as a comparison so we are on the same page.
In terms of things that must be trusted during the capability-delivery phase of your flow via CHAPI, I think this illustrates which components must be trusted. (Note: not how hard it is to trust each.)
When it comes to the OIDC flow, it's very difficult to compare exactly, but if you equated this phase in your flow with receiving the authorization code in OIDC, which I believe to be the most accurate comparison, then the only things that need to be trusted here are the following.
Also note, if you are using additional binding layers to the client, I don't think you even need to trust TLS.
In essence, I find the security model for the CHAPI flow you have outlined above to be vastly more complex because you are delivering the capability via the browser, to a location you know nothing about ahead of time. This flow also means you have no way to bind the capability to the client. For instance, in OIDC, to prevent CSRF and authorization code injection attacks you can use PKCE or a variety of other methods. This isn't a possibility with the flow you outlined, because the first interaction between the issuer and the wallet is handing over the capability.
Good enough for a digital driver's license or PRC card? In a 'card-holder not present Identity' transaction, I see a lot of use cases being satisfied, or at least bootstrapped.
I like this analogy too, it makes sense. However, we need a protocol that we are satisfied has a model that can scale into 'card-holder present identity' transactions, right? And how can we do that if the flow (CHAPI) has no way to bind to the user authentication flow?
I could have just as easily not said "CHAPI" and have said "Suppose we have a secure channel X that transfers the capability (URL) to the user's wallet of choice". Here, "X must be secure" is a requirement for security in the system in the same way that "TLS must be secure" is. These things are identified as components that MUST be secure for the whole system to be considered secure.
Thanks @dlongley - here is where I wanted to elaborate on the assumptions made in the above diagrams and discussions.
If we skip directly to step 7 in the first diagram in https://github.com/w3c-ccg/vc-api/issues/279#issuecomment-1085175331, and assume "secure process X" gets the URL to the holder, then we can have confidence in the intended holder receiving their credential.
I think we can all agree that for a complete security assessment, the entire context needs to be considered, by the parties involved (not just the API spec designers).
Where I see challenges with the above-described model is that there are a lot of hand-waved security add-ons to help strengthen the security posture -- whereas other protocols like OAuth and its derivatives have already developed, scrutinized, and reviewed security controls for a range of use cases, based on years of analysis and attack mitigation.
How does VPR handle PKCE-like bindings? DPoP-like bindings? How does VPR handle out-of-band session binding securely? What other methods are available in a VPR flow to build up the security level? The `interact` endpoint is wide open right now and leaves more questions than guidance; but at the same time it enables innovation.
OAuth was not built in a day -- so if we are willing to spec the VC API in a way that clearly states the level of security as above (the exchange-id endpoint is vulnerable to interception until DIDAuth is performed, and developers must take steps to secure that session-context transfer), then I think we have a good starting point.
I will note that the position I have stated above puts authorization out of scope for VC API - which means for any interop ecosystem, further profiles will need to be defined (CHAPI, OAuth Cred grant, mTLS, OIDC, GNAP, wizbang-fizzlepop...)
Concerns with CHAPI were enumerated in https://github.com/w3c-ccg/vc-api/issues/279#issuecomment-1086338867 - and come into play in the above diagrams but I see them as out-of-scope for VC API, so maybe they can be discussed on the CHAPI project (https://github.com/digitalbazaar/credential-handler-polyfill)
@tplooker,
Thank you for the OIDC flow charts! There's a ton to talk about here. For example, the limitations that the OIDC flows above have that either block out important use cases or create bad UX in ways that the CHAPI flows do not, i.e., lots of similar concerns to what you've raised for CHAPI, but in reverse. BUT -- I don't want us to lose focus, I just want us to remember to come back to those things later; that's the only reason for mentioning them here. Instead of getting into those now, I'm going to try to take it slow in my responses so we don't go jumping from topic to topic, and focus on security concerns around CHAPI / CHAPI-related flows.
So, you mentioned this:
For instance, in OIDC, to prevent CSRF and authorization code injection attacks you can use PKCE or a variety of other methods.
What are the CSRF concerns you have around the CHAPI + VC-API flows, considering that they do not do `POST` (or any other state changes) with `application/x-www-form-urlencoded`? OIDC may use this method, but CHAPI + VC-API does not. So, can you elaborate here? I want to make sure that when you highlight certain attacks that have been mitigated by OIDC approaches -- that those are actually attacks that are even possible with CHAPI + VC-API (and, if not, that they don't need considering). It's important to note that if certain classes of attacks are no longer even possible with CHAPI + VC-API, that is also a way of measuring reduced security complexity.
I'd like to go through your comments and make sure we address or talk through all of your concerns one at a time so we're thorough here.
@mavarley,
How does VPR handle PKCE-like bindings? DPoP-like bindings? How does VPR handle out-of-band session binding securely?
Before jumping to solutions (e.g., PKCE) to threats, we should talk about whether or not those threats exist when taking different approaches. To my understanding, the whole reason PKCE was invented was to solve an issue where OAuth client secrets were published in public client applications, making it easy for attackers to get their hands on them. Then, when a user starts using an app to get an access token to do something, they need to make sure an attacker doesn't get that access token. IMO, the reason this solution needed to exist was because of an old protocol (designed for different circumstances) coming into contact with new technology (mobile apps). We should not assume that new protocols have the same problems -- we should be more deliberate.
If the security of a system depends on the security and authentication of the app itself -- and the method for securing the secret (the client secret) is not actually secure (publishing the client secret in a public app), it's true you have a threat that needs mitigating. Can you point to where this threat exists in the CHAPI + VC-API flow we've been discussing?
It's also true that you can run into problems when the channel you're using to transmit information is not secure or may go through a confused deputy (like a mobile OS) that sends the information to the wrong app (i.e., the user does not get to choose, the deputy chooses).
Any number of the threats that are based on "trust in the IdP / client" and ACL models may also be no longer applicable with a new design that is based on trusting math and object capability-like authorization.
Where I see challenges with the above-described model is that there are a lot of hand-waved security add-ons to help strengthen the security posture -- whereas other protocols like OAuth and its derivatives have already developed, scrutinized, and reviewed security controls for a range of use cases, based on years of analysis and attack mitigation.
It's true that OAuth has been around a long time and there's been a lot of hammering done on it -- and that is certainly of value. However, the fact that its age and wide deployment are incentives for people to keep using it even when it's a bad choice is why some threats exist in the first place. As is true with any protocol, design choices were made without the knowledge or primitives of the future. When people needed a solution in new circumstances it wasn't made for -- they used it anyway.
I think we need to be careful to avoid a sunk cost fallacy here, that we need to embrace something old because of all of the effort put into it, without being clear eyed about why those efforts had to be made. If some of those efforts were only made to ensure that OAuth keeps working when people bolt things onto it that were not anticipated (or fundamentally change its security model!) -- that should change our calculus when considering new design choices and available primitives that avoid the threats entirely, right?
What are the CSRF concerns you have around the CHAPI + VC-API flows considering that they do not do POST (or any other state changes) with application/x-www-form-urlencoded? OIDC may use this method, but CHAPI + VC-API does not. So, can you elaborate here? I want to make sure that when you highlight certain attacks that have been mitigated by OIDC approaches -- that those are actually attacks that are even possible with CHAPI + VC-API (and, if not, that they don't need considering). It's important to note that if certain classes of attacks are no longer even possible with CHAPI + VC-API, that is also a way of measuring reduced security complexity.
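To make the form-encoding point concrete, here is a small sketch (the helper is illustrative, not from any spec) of why the content type matters cross-origin: per the Fetch/CORS rules, a cross-origin POST with `Content-Type: application/json` is not a "simple request", so the browser sends an OPTIONS preflight first and the server can refuse unknown origins before any state changes.

```javascript
// Content types that qualify a cross-origin POST as a CORS "simple request"
// (sent without preflight). application/json is deliberately NOT on this
// list -- a JSON-body POST forces an OPTIONS preflight cross-origin.
const SIMPLE_CONTENT_TYPES = [
  'application/x-www-form-urlencoded',
  'multipart/form-data',
  'text/plain',
];

function triggersPreflight(contentType) {
  return !SIMPLE_CONTENT_TYPES.includes(contentType);
}

// So an API that only accepts JSON bodies, e.g.:
// fetch(url, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(payload),
// });
// cannot be driven by a classic cross-site form post the way a
// form-urlencoded endpoint can.
```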
For example, what mitigations are in place for the wallet receiving a VPR to know the origin of the requestor who sent it? It appears in this flow there is no way for the wallet to confirm any relationship of the capability URL generated to that of the party that sent it via CHAPI? That may not be a true CSRF attack vector but it does involve similar concepts.
Before jumping to solutions (e.g., PKCE) to threats, we should talk about whether or not those threats exist when taking different approaches. To my understanding, the whole reason PKCE was invented was to solve an issue where oauth client secrets were published in public client applications, making it easy for attackers to get their hands on them. Then, when a user starts using an app to get an access token to do something, they need to make sure an attacker doesn't get that access token. IMO, the reason this solution needed to exist was because of an old protocol (designed for different circumstances) coming into contact with new technology (mobile apps). We should not assume that new protocols have the same problems -- we should be more deliberate.
If the security of a system depends on the security and authentication of the app itself -- and the method for securing the secret (the client secret) is not actually secure (publishing the client secret in a public app), it's true you have a threat that needs mitigating. Can you point to where this threat exists in the CHAPI + VC-API flow we've been discussing?
Yes, I believe you are right about the origins of PKCE; in general it prevents a range of attacks from occurring, like a party being able to maliciously intercept an authorization_code and exchange it with the token endpoint to get the access token. The concern with the VC API + CHAPI flow is that this kind of mitigation is impossible, because the first interaction between the issuer and the wallet just hands over the capability, over a channel that involves so many components (as I've raised above), each of which has to be completely secure.
Any number of the threats that are based on "trust in the IdP / client" and ACL models may also be no longer applicable with a new design that is based on trusting math and object capability-like authorization.
I'm struggling to see how this is the case; IMO, OAuth2 access_tokens do behave largely like object capabilities.
I think we need to be careful to avoid a sunk cost fallacy here, that we need to embrace something old because of all of the effort put into it, without being clear eyed about why those efforts had to be made. If some of those efforts were only made to ensure that OAuth keeps working when people bolt things onto it that were not anticipated (or fundamentally change its security model!) -- that should change our calculus when considering new design choices and available primitives that avoid the threats entirely, right?
Granted, OAuth2 and OIDC are complex ecosystems that have developed significantly over the past decade and originate from an entirely different point in time in internet technologies. However, these are specialist protocols, and I'm very wary of cooking up an alternative in the corner of a new specification which involves entirely new components for exchanging highly important security information (see above for how the capability is transmitted) and makes completely new assumptions about the parties involved in the exchange of an authorization capability. These protocols are notoriously difficult to get right.
I also just want to add a note to come back to it that I think capabilities as URLs versus capabilities as cryptographically secure tokens is another significantly limiting design choice here.
@tplooker,
...That may not be a true CSRF attack vector but it does involve similar concepts.
Does this mean that we're in agreement that a CSRF attack is not an issue here? I want to make sure we're making progress on a common understanding. I think it's best if we clearly talk about specific threats -- and say whether they apply and what the mitigations could be.
As for the questions you asked:
For example, what mitigations are in place for the wallet receiving a VPR to know the origin of the requestor who sent it?
At present, a credential handler receives the origin of the credential requestor (e.g., the "requestor that sent the VPR") via the credentialRequestOrigin property on both the CredentialStoreEvent and the CredentialRequestEvent. This information is trusted to be accurate based on the security model (which requires that CHAPI be secure).
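As a rough sketch of how a wallet might use this (the event wiring follows the CHAPI polyfill; the policy helper and allow-list are hypothetical, and a real wallet would involve user consent rather than a static list):

```javascript
// Hypothetical policy helper: given the origin CHAPI reports on a
// CredentialRequestEvent, decide whether to even surface the request.
// CHAPI (not the requestor) supplies credentialRequestOrigin, so the
// wallet can rely on it to the extent that CHAPI itself is secure.
function shouldHandleRequest(credentialRequestOrigin, trustedOrigins) {
  return trustedOrigins.includes(credentialRequestOrigin);
}

// Usage inside a credential handler (event shape per the CHAPI polyfill):
// self.addEventListener('credentialrequest', (event) => {
//   if (!shouldHandleRequest(event.credentialRequestOrigin,
//                            ['https://issuer.example'])) {
//     event.respondWith(Promise.resolve(null)); // decline
//     return;
//   }
//   // ...otherwise prompt the user and respond with a presentation...
// });
```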
It appears in this flow there is no way for the wallet to confirm any relationship of the capability URL generated to that of the party that sent it via CHAPI?
Can you say more about what the threat or use case concern is here? There are any number of things that could be possible (or not), but if we don't tie them to threats or use cases in some way, we can only work from internal biases regarding whether it's a good thing (or not) for certain knowledge to be possible or not. As an example, someone may say that "it appears there's no way to bind a user in a third party context to their identity in a first party context" -- which may be "good" from a privacy perspective and "bad" from a personalized ad perspective. I am aware that I haven't provided a specific answer to your question. This isn't because there aren't answers or more to discuss, it's because I think we might get lost in the details without any context that explains what the goal is.
Yes, I believe you are right about the origins of PKCE; in general it prevents a range of attacks from occurring, like a party being able to maliciously intercept an authorization_code and exchange it with the token endpoint to get the access token. The concern with the VC API + CHAPI flow is that this is entirely impossible to do, because the first interaction between the issuer and the wallet just hands over the capability, over a channel that involves so many components (as I've raised above), each of which has to be completely secure.
Again, let's not lose sight of the difference between "what needs to be secure for it to work" and "the difficulty of making those things secure". I really do believe we need to get to agreement on the former -- before we can sensibly dive into the latter, so let's try not to go there until we're ready.
So, if PKCE was created to plug a hole in OAuth (due to the publication of client secrets) and CHAPI doesn't even use "client secrets" nor rely on "client authentication" to provide security at all -- we should not be concerned from a security perspective about this. Obviously, if a security model requires "client authentication" then we should care -- but the CHAPI + VC-API flow we've been talking about does not. Now, if we want to say that "client authentication is required" to do some "flow X" -- which you seem to be implying, we can talk about that. But we need to be careful to avoid conflating security concerns / requirements and use case concerns / requirements. We also shouldn't assume that the particular solution employed by one system to implement a flow is the only way to feasibly do it.
Anyway, to give my perspective here, I think there's a lot of complexity in the OIDC approach that I have concerns about -- which is one reason for the simplicity of the CHAPI + VC-API approach. Note: There may be more components in the CHAPI approach (at least at the moment with a polyfill), but that doesn't mean it's not less complex in other dimensions that may matter more. So, it isn't that we aren't considering these things with the CHAPI + VC-API design, rather, it's because of them that we are concerned about OIDC. The group of people that were like "let's just use what's there" are the ones that made it so PKCE had to be invented to fill security holes. I worry that the same arguments are being made here: There's a tech that kinda looks like we could use it and it has wide deployment, so let's do that so we don't have to reinvent anything.
Generally, I'm a big +1 to not having to invent new things. I don't invent things just because I love to labor. However, the OAuth/OIDC stuff is fundamentally based on a different trust model -- where you "trust the IdP" and you add "clients" that you must authenticate via client secrets to access your stuff. The VC model was specifically designed to be different from this. It was designed to distribute trust via cryptography, not infrastructure. A party consuming a VC only needs to trust the math and the issuer -- and, often, that the presenter is in some way appropriately connected to the VC (e.g., as its subject) via math and/or biometrics. There's no "trust the IdP / authenticate the client" that delivered the VC. It's pretty much a major point of the new model to avoid that. So when we say we're just going to reuse tech that used a different model -- I'm apprehensive. Again, that's how things got broken to begin with -- such that we needed PKCE to patch OAuth. It didn't happen the other way around.
So, if we are in agreement that the CHAPI flow is secure (assuming CHAPI is secure) and we don't need to worry about CSRF and PKCE because they aren't applicable, then we can move into discussing either what it takes to make CHAPI secure or whether or not certain use cases can be covered by a CHAPI + VC-API approach. It does seem you want to touch on the latter and I also think there's a lot we could unpack there to make some progress.
If we want to go there, it seems to me that the push for the various "bindings" you and @mavarley have mentioned are not necessarily about threats in the CHAPI + VC-API model, but may be really about UX concerns. We should clearly separate these if so. A binding is only necessary for security when there's something you need to link together because there's no end to end security in a flow (there are gaps). As I've mentioned, with CHAPI, there's no confused deputy situation where the OS / whatever may hand something off to an attacker / MiTM like there is with OAuth/OIDC. Rather, the user deliberately chooses their wallet. Note: It's also the case that if the user accidentally chooses the wrong wallet -- this is a problem no matter what protocol is in use, and no amount of "preregistering" (or whatever else) will help if Alice picks Bob's wallet to link to her information.
So, it seems you are thinking about cases where a user may have indicated they want to use wallet X ... and then there's a need to ensure that if the user chooses a wallet again later that it's the same wallet, otherwise they will have a suboptimal experience (e.g., incompatibilities with issuer capabilities). In the flow we've been discussing, the user only makes a wallet selection once. Therefore, if there's only ever one time a user selects a wallet, you don't need to worry about binding that selection to "the same selection made earlier", because there is no such thing in the flow.
But, you have indicated that this approach could result in the user committing to some flow only to find out at the end that they can't use the wallet they want. Is this what we're concerned about and should discuss next?
If so, I similarly have some use cases / flows that I'm worried may be challenging to simultaneously create good UX, keep role concerns separate, and preserve user choice when using OIDC. I'd like to see how they would work from you with end to end diagrams / explanations of some sort. I've only hinted at some of this before, but it keeps coming to my mind when considering questions around the design of CHAPI + VC-API -- because I think some of its design choices were made in considering a "good enough" approach for most flows, knowing how hairy they can get.
Something I think CHAPI + VC-API does moderately well is keeping role responsibility clean. For example, with CHAPI + VC-API, a wallet is a user's trusted agent and it handles consent and helping the user make decisions, and it's up to an issuer to provide an interface for taking a user through a process for obtaining a VC (which may include any number of interesting issuer / VC specific steps). Setting these "abstraction boundaries" upfront is intentionally limiting. It limits the creativity one can use in how to go about improving UX, but it also ensures that if an issuer wants to do something in their flow you didn't think of -- they should more often than not be able to do it, because you haven't delegated control over the flow to some other party (i.e., the wallet provider).
It seems the OIDC approach mixes these responsibilities, suggesting the issuer handle consent (sometimes?) and the wallet other times (or maybe also always?) and it pushes some issuer responsibilities into a "wallet marketplace" UI with very limited primitives. If issuing a VC requires the user to go through a custom process on the issuer's website, it seems that it must happen before any of that ... or maybe in the middle of that ... which may lead to a potentially confusing multistage authentication / authorization / flow selection process for the user?
For example, consider that an issuer is able to issue one or more VCs to a user based on the user's attributes, other VCs they present, or something that they do during a custom flow. Which VCs they are issued may change as well, based on their choices in the flow. Now, walking through this using the OIDC flow diagram you provided:
.well-known issuer URL. Note: Again, my understanding is that the wallet doesn't know who the user is / can't communicate this to the issuer (and the issuer wouldn't trust it anyway?). There's also no accessible state on the wallet about what just happened on the issuer's website (including the whole custom flow the user may have just gone through). To communicate this kind of state to the wallet, a "capability URL" or other sufficiently random identifier could be communicated to the wallet (like with CHAPI + VC-API) -- except for the fact that the OIDC flow's security model can't secure this right now, as has been mentioned. Without that kind of communication, avoiding this particular problem implies it may be better to have the wallet selection happen before the custom flow. But, if wallet selection is done early, this presents other mentioned failure modes, such as the user selecting flows that they ultimately won't be eligible for. Which is worse / more common / better to avoid? In an ecosystem with commonly implemented primitives (wallet capabilities), one might think that user ineligibility would occur much more frequently than wallet ineligibility. Maybe there's just a missing primitive for this.

Sorry, that got a little bit more long winded than I intended. I would have written a shorter response if I had more time :).
Anyway, these are some of my concerns around the OIDC UX. Maybe there are good answers / solutions here, but the design (in my current view anyway) seems to make it much more limiting (from an issuer perspective) and complex from a UX perspective. There seem to be trade offs that are very similar to the ones that are brought up around the UX with CHAPI + VC-API.
We don't need to jump to discussing all that if we're not ready yet or if we want to talk about other use cases that deviate from the CHAPI + VC-API flow that we've been discussing so far wrt. UX. But I didn't want to lose what was in my head. Much of the above is always coming to mind when I read that there may be use cases where CHAPI + VC-API provides a suboptimal UX. Of course, perfect is always better, but I'm seeing CHAPI + VC-API as doing at least an "ok" job for most of the cases -- and without unduly limiting user choice or restricting issuer control over their flows and what they need to store / don't need to store. I'm not there yet with the OIDC approach, but maybe it's possible.
I do expect that the "best UX" for any given use case may be using tech X and not tech Y (and vice versa for a different use case) -- so it's going to be in the margins and with an eye toward other effects and the ability to do an "ok" job for the bulk of the use cases that I think is important.
@dlongley
Before jumping to solutions (e.g., PKCE) to threats, we should talk about whether or not those threats exist when taking different approaches. To my understanding, the whole reason PKCE was invented was to solve an issue where oauth client secrets were published in public client applications, making it easy for attackers to get their hands on them.
PKCE is a mechanism to help prevent MitM attacks when crossing security boundaries ("front-channel" to "back-channel") and is now common best practice in not just mobile or public client scenarios, but for any authorization-code flow that wishes to guard against this type of attack. So much so that GNAP has this concept baked in.
And this is my point; we are not starting from zero here -- identity exchange systems have been in place for a long time and many lessons learned. Ways exist, are in use, and are accepted by industry to secure front channel / back channel / cross-device / out-of-band session linking -- they should not simply be hand-waved away.
However, to your point - GNAP does not require PKCE because it is already a core part of the protocol. Maybe VPR / VC API does not require PKCE because there is an equivalent or better mechanism in place... but it needs to be expressed, defined, and analysed.
Same for other types of security controls.
Edit to my last comment - @dlongley you have not "hand-waved away" the security controls based on your longer response. There is a lot of analysis there, I did not mean to trivialize it. I am still working through it :) thanks for that in depth view.
Anyway, to give my perspective here, I think there's a lot of complexity in the OIDC approach that I have concerns about -- which is one reason for the simplicity of the CHAPI + VC-API approach.
I don't agree that CHAPI + VC-API is any less complex than OIDC. There are OIDC flows which can be quite trivial to implement, and more complex implementations that increase the level of assurance on an interaction -- same with VPR (CHAPI or VC-API). The current discussion on VPR is, I think, exploring these additional complexities (i.e., with the current VPR extension points, I can make an exchange as complicated as I like :) ).
The group of people that were like "let's just use what's there" are the ones that made it so PKCE had to be invented to fill security holes.
OAuth 2.0 started as a very "simple" protocol (over OAuth 1.1 and SAML, for example). It solved a lot of problems. It had extension points. Attacks became more sophisticated, transaction and API access became higher value, and 'complexity' increased - the security profiles evolved. The proposed CHAPI+VC-API model is starting "simple". We have stated it is aiming for a lower level of assurance credential (card-not-present identity). As ecosystems evolve, attacks become more sophisticated and credentials more inherently valuable, CHAPI + VC API will need to evolve and become more complex.
The VC model was specifically designed to be different from this. It was designed to distribute trust via cryptography, not infrastructure.
Yes, agree - but in a lot of cases the same entities are involved (IdPs, RPs), and there is a need for (there is a perceived need for?) more information than just cryptography alone can provide. Issuers who are releasing information about a subject, in the issuer's name, that has inherent value, have a responsibility to their account holders/subjects to help ensure this information cannot be mis-used.
A party consuming a VC only needs to trust the math and the issuer -- and, often, that the presenter is in some way appropriately connected to the VC (e.g., as its subject) via math and/or biometrics.
And how can this trust be established if there is no context as to the software involved that is managing the secret keys, or authenticating the presenter?
As I've mentioned, with CHAPI, there's no confused deputy situation where the OS / whatever may hand something off to an attacker / MiTM like there is with OAuth/OIDC.
This is where I am wary -- "there is no way that could happen" is a great opportunity for "bad actors" to find a way to break those assumptions. So then these attacks become a question of scale, not possibility. Break CHAPI and steal everyone's credentials? Bad and highly scalable. Break CHAPI and subvert an out-of-band session binding (like PKCE or an SMS code or ...)? Bad, but less scalable. Break CHAPI, subvert an out-of-band security token, and impersonate a trusted holder software implementation ... well, diminishing returns maybe, but you see how this goes.
Note: It's also the case that if the user accidentally chooses the wrong wallet -- this is a problem no matter what protocol is in use
Yeah, agreed. I can sign into all my kids' google school accounts from the same chrome browser because they use google SSO. Happy to take this concern off the table.
In the flow we've been discussing, the user only makes a wallet selection once.
Agreed. And SecureKey had explored using CHAPI to initiate Aries flows and WACI flows successfully as well, but subject to the same issues (with session binding/user binding etc..)
But, you have indicated that this approach could result in the user committing to some flow only to find out at the end that they can't use the wallet they want. Is this what we're concerned about and should discuss next?
Maybe that was directed at @tplooker who has expressed these concerns, but I believe the comparison is:
Something I think CHAPI + VC-API does moderately well is keeping role responsibility clean. For example, with CHAPI + VC-API, a wallet is a user's trusted agent and it handles consent and helping the user make decisions, and it's up to an issuer to provide an interface for taking a user through a process for obtaining a VC (which may include any number of interesting issuer / VC specific steps).
Same as OIDC? That's what the redirect does?
If issuing a VC requires the user to go through a custom process on the issuer's website, it seems that it must happen before any of that ... or maybe in the middle of that ... which may lead to a potentially confusing multistage authentication / authorization / flow selection process for the user?
Hmm, I am not going to declare that a protocol alone prevents poor UX design -- OIDC does have its UX challenges -- but in an OIDC flow there is a cohesive context. A subject may be jumping through a redirect, but there is a reason and workflow to do so. On the other hand, if a workflow is too decoupled (a subject starts doing one thing, then suddenly finds they have to do a new thing after already finishing the first thing... but can't complete the second thing in time...), it seems like a broken workflow. The person starts playing checkers, only to find out they were actually playing chess...
For example, consider that an issuer is able to issue one or more VCs to a user based on the user's attributes, other VCs they present, or something that they do during a custom flow.
I might argue that your initial assumption is incorrect. I would not build a workflow the subject cannot complete without an OIDC exchange later. Instead, the workflow starts with OIDC. Or, the workflow completes successfully (with maybe a printable VC?) and the subject is presented with an option to load it into a digital wallet (which may require another login, but the workflow is clear). So, counter example:
The above is not perfect, and may have gaps that are currently being worked on wrt. required presentations etc; I will note that the above can also enable a direct wallet-to-issuer VPR communication exchange, but interact is not necessarily required, as presumably all the interaction has already occurred.
Sorry, that got a little bit more long winded than I intended. I would have written a shorter response if I had more time :).
Hopefully I was able to parse and properly address some of these ideas; I think this was great to "air out". Thanks to anyone who read my response this far :)
As far as UX, I have seen NFT marketplaces allow a user to link their wallet to their account, and then the marketplace site has an integrated view of that wallet via API hooks -- maybe there is a CHAPI flow which supports the same (the iframe embeds, or doesn't close after the .get() or .put() completes...). All I'm saying is there may be ways using CHAPI and VC API exchanges / VPR to "bind" the wallet and issuer account / session early in the process so the person gets the workflow feel, instead of the completely disassociated approach that the above CHAPI + VC API flow implies.
Does this mean that we're in agreement that a CSRF attack is not an issue here? I want to make sure we're making progress on a common understanding. I think it's best if we clearly talk about specific threats -- and say whether they apply and what the mitigations could be.
Sure, I get that the existing definition of CSRF perhaps does not fully apply here, but I don't think this means the attack vector doesn't exist, even if it does not fit neatly into an existing definition.
At present, a credential handler receives the origin of the credential requestor (e.g., the "requestor that sent the VPR") via the credentialRequestOrigin property on both the CredentialStoreEvent and the CredentialRequestEvent. This information is trusted to be accurate based on the security model (which requires that CHAPI be secure).
Sure, but none of this is afforded when talking to native apps from CHAPI (via Web Share), right? So this only works for web wallets?
The group of people that were like "let's just use what's there" are the ones that made it so PKCE had to be invented to fill security holes
As a counter, though, the fact that a protocol which has been widely used for over a decade was able to easily extend to mitigate this threat vector is, IMO, a testament to the protocol. We should be careful not to wield the sword of hindsight against protocols like OAuth2; for example, saying it didn't predict the need for this particular mechanism as a form of criticism is, I think, misplaced -- the reality is the protocol easily adapted with the addition of a simple extension.
Also, it is this very flexibility within OAuth2/OIDC that I don't see with CHAPI + VC API that has me concerned.
I worry that the same arguments are being made here: There's a tech that kinda looks like we could use it and it has wide deployment, so let's do that so we don't have to reinvent anything.
Yes, to be totally clear, I am making this argument, although I don't think it just "kinda looks like we could use it" -- I'm convinced, as are many others, that we can. It doesn't mean we don't have to reinvent anything, but we invent less, yes, definitely.
If we want to go there, it seems to me that the push for the various "bindings" you and @mavarley have mentioned are not necessarily about threats in the CHAPI + VC-API model, but may be really about UX concerns. We should clearly separate these if so. A binding is only necessary for security when there's something you need to link together because there's no end to end security in a flow (there are gaps). As I've mentioned, with CHAPI, there's no confused deputy situation where the OS / whatever may hand something off to an attacker / MiTM like there is with OAuth/OIDC. Rather, the user deliberately chooses their wallet. Note: It's also the case that if the user accidentally chooses the wrong wallet -- this is a problem no matter what protocol is in use, and no amount of "preregistering" (or whatever else) will help if Alice picks Bob's wallet to link to her information.
Yes, to be clear, here it is between the wallet's request for the credential (on behalf of the End-User) and the End-User's authentication (and potentially consent) with the issuer to issue the credential.
So, it seems you are thinking about cases where a user may have indicated they want to use wallet X ... and then there's a need to ensure that if the user chooses a wallet again later that it's the same wallet, otherwise they will have a suboptimal experience (e.g., incompatibilities with issuer capabilities). In the flow we've been discussing, the user only makes a wallet selection once. Therefore, if there's only ever one time a user selects a wallet, you don't need to worry about binding that selection to "the same selection made earlier", because there is no such thing in the flow.
Yes, this is because your mental model starts with the End-User visiting a website, authenticating, invoking CHAPI, and then issuing the credential; what I'm saying is I don't think that is the right ordering of events in many use cases. In many cases it should be: visiting a website, invoking CHAPI, authenticating the user (or just checking they already are authenticated on the device you are accessing the wallet from), then issuing a credential.
However, the OAuth/OIDC stuff is fundamentally based on a different trust model -- where you "trust the IdP" and you add "clients" that you must authenticate via client secrets to access your stuff. The VC model was specifically designed to be different from this. It was designed to distribute trust via cryptography, not infrastructure. A party consuming a VC only needs to trust the math and the issuer -- and, often, that the presenter is in some way appropriately connected to the VC (e.g., as its subject) via math and/or biometrics. There's no "trust the IdP / authenticate the client" that delivered the VC. It's pretty much a major point of the new model to avoid that. So when we say we're just going to reuse tech that used a different model -- I'm apprehensive. Again, that's how things got broken to begin with -- such that we needed PKCE to patch OAuth. It didn't happen the other way around.
Personally, I think the assurance that fancy maths (cryptography) alone can supply in the application of digital identity technologies is overplayed; people appear to extrapolate from the usage of cryptography in cryptocurrencies and conclude that it alone can serve as a similar basis for identity technologies. The latter in many ways is far more complex and involves many more social constructs, which makes it incredibly difficult to just say "trust the math". We need to be clearer on what the cryptographic binding established when a credential is issued actually even represents; to me, in many circumstances, it's an authentication factor at best, and alone proves next to nothing about whether the End-User (the true credential subject) is involved in a credential's presentation.
If a person has two sets of cryptographic keys, one tied to their holdings in bitcoin and another tied to their driver's license, the situations that might cause that individual to share those keys are very different. An individual is probably less likely to share full access to their bitcoin balance (the first key), but for the right individual they may choose to share their driver's license keys so that person can, say, buy some beer with their driver's license; needless to say, the outcomes and consequences are very different.
For example, consider that an issuer is able to issue one or more VCs to a user based on the user's attributes, other VCs they present, or something that they do during a custom flow. Which VCs they are issued may change as well, based on their choices in the flow. Now, walking through this using the OIDC flow diagram you provided:
There is a lot of really good feedback in here that I want to respond to, but I'm not sure a GH issue is the best place to do it. I could try to capture it in a Google doc? Otherwise, I'm happy to just respond inline in the GH issue.
Same as OIDC? That's what the redirect does?
This is a really important point that I think is being overlooked: what happens when you execute a redirect into an OIDC flow is completely up to the issuer, and you have the full power of the web to build whatever user journey you want, including authenticating the user. So I don't really follow the argument that OIDC is overly limiting in this respect, nor do I see a blurring of responsibilities. To be clear, OIDC discovery is optional in the flow I shared above; the wallet doesn't have to do it, nor do the credentials offered by a provider even need to be published in the metadata. The important thing is that the user gets authenticated and the wallet gets authorized to obtain the credentials it's after.
To offer more context: if you want a flow where you trust the channel over which you are passing something to the wallet, and skip user auth on the device you are provisioning the credential on, then we are working on this variation in OIDC CP / OIDC4VCI, known as the pre-authorized code flow. However, there are a few features that I think make it quite different from the one shared above.
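To make the shape of that variant concrete, here is a rough, non-normative sketch of a pre-authorized code flow. All names, stores, and endpoints are hypothetical stand-ins for the issuer's backend, not the OIDC4VCI wire format; the key property it illustrates is that End-User authentication happens out of band, and the wallet only ever handles a single-use code.

```python
import secrets

# Hypothetical in-memory stores standing in for the issuer's backend.
pre_authorized_codes = {}   # code -> grant details
access_tokens = {}          # token -> credential type the wallet may request

def offer_credential(user_id, credential_type, pin=None):
    """Issuer mints a single-use pre-authorized code after it has already
    authenticated the End-User out of band (e.g. a logged-in web session)."""
    code = secrets.token_urlsafe(32)
    pre_authorized_codes[code] = {
        "user": user_id, "pin": pin, "credential_type": credential_type}
    return code

def token_endpoint(code, pin=None):
    """Wallet trades the pre-authorized code (plus an optional user PIN
    delivered over a second channel) for an access token."""
    grant = pre_authorized_codes.pop(code, None)  # single use
    if grant is None or grant["pin"] != pin:
        raise PermissionError("invalid_grant")
    token = secrets.token_urlsafe(32)
    access_tokens[token] = grant["credential_type"]
    return token

def credential_endpoint(token):
    """Wallet presents the access token to obtain the credential."""
    cred_type = access_tokens.get(token)
    if cred_type is None:
        raise PermissionError("invalid_token")
    return {"type": cred_type, "credentialSubject": {}}  # issuance elided
```

Because the code is consumed on first use, intercepting it after the legitimate wallet has redeemed it gains an attacker nothing; the optional PIN adds a second channel for the higher-assurance variants discussed above.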
I was thinking about a flow where a CHAPI initiated flow could help ensure a "tighter subject binding" through a flow, and support an early fail/bailout in case the person does not have a compatible wallet...
(source: https://bit.ly/3xc1nmC)
In red there are internal calls between the Issuer and VC API service that are out of scope (I think). If the Holder calls the exchange endpoint "too early", the Issuer (Daily Planet) can deny providing the credential data.
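The "deny if called too early" behavior can be sketched as a tiny state machine: the exchange starts out pending, and only an internal issuer-to-VC-API call (the red arrows) flips it to ready. Function and field names here are hypothetical, not VC-API normative.

```python
# Toy sketch: an exchange record the issuer (e.g. Daily Planet) must
# explicitly mark as ready before the Holder's call succeeds.
exchanges = {}

def create_exchange(exchange_id):
    exchanges[exchange_id] = {"status": "pending", "credential": None}

def approve_exchange(exchange_id, credential):
    # Internal Issuer -> VC API service call; out of scope for the protocol.
    exchanges[exchange_id].update(status="ready", credential=credential)

def participate(exchange_id):
    # Holder's POST to the exchange endpoint.
    record = exchanges.get(exchange_id)
    if record is None or record["status"] != "ready":
        raise PermissionError("exchange not ready")  # "too early" -> denied
    return record["credential"]
```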
I'm not sure how to structure the VPR messaging to support the callback/webhook (maybe another CHAPI flow is required), but the point I am trying to get at is that by breaking up the CHAPI flow above, there is a way to achieve a 'workflow'-like experience that also helps bump up the level of assurance.
I am not suggesting all CHAPI flows need to change to the above; merely exploring possibilities.
PS: I will note that step 2 in the diagram in https://github.com/w3c-ccg/vc-api/issues/279#issuecomment-1085175331 is likely authenticated/authorized and results in the exchange-id acting as a bearer token, which is also the authorization to execute the exchange API -- it seems like we danced around a lot, but the VC API still requires authorization?
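The "exchange-id as bearer token" observation is essentially a capability URL: the last path segment (the `z123456` slot in the earlier diagram) must be unguessable, and lookups should use constant-time comparison. A minimal sketch, with hypothetical names:

```python
import hmac
import secrets

def mint_exchange_url(base="https://issuer.example/exchanges/new-student-id"):
    """Mint an exchange URL whose final path segment is an unguessable
    bearer capability (authz via knowledge of the URL)."""
    return f"{base}/{secrets.token_urlsafe(32)}"  # ~256 bits of entropy

def capability_matches(presented, stored):
    # Constant-time comparison so the capability cannot be recovered
    # one character at a time via a timing side channel.
    return hmac.compare_digest(presented, stored)
```

Note this only establishes that the caller knows the URL; binding the resulting credential to the right DID still requires the DIDAuth step discussed above.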
@mavarley,
I don't agree that CHAPI + VC-API is any less complex than OIDC. There are OIDC flows that can be quite trivial to implement, and more complex implementations that increase the level of assurance on an interaction -- same with VPR (CHAPI or VC-API); the current discussion on VPR is, I think, exploring these additional complexities. (I.e., with the current VPR extension points, I can make an exchange as complicated as I like :) )
That's not the complexity I was referring to. What flows you implement inside of a particular protocol and how complex those are / can be, is a question for a layer above the protocol itself. Ideally, a protocol keeps responsibilities separated by role, offers good composition via simple primitives, and gives people as much freedom as possible for their flows at that layer.
What I was talking about above was what every user of the protocol itself needs to do regardless of their particular flow, e.g., how many components must they implement (or use libraries for), which extra options do they need to consider (PKCE? DPOP?), etc. to ensure the security model works and so forth. To this point, one complexity I mentioned that isn't needed in the CHAPI + VC-API security model is software client registration and authentication.
To this point, one complexity I mentioned that isn't needed in the CHAPI + VC-API security model is software client registration and authentication.
If we take the purpose of registration in OIDC to be the ability for the wallet to express what it supports, so that it obtains a VC it can actually use, then I don't believe this is unnecessary in CHAPI + VC-API. It may mechanically be achieved in some other manner, but it's a stretch to say that the way it chooses to achieve this is an overall reduction in implementation complexity.
Similarly, I'm not sure what is being defined here as client authentication, so it's difficult to know what you are talking about, and hence whether it represents implementation complexity that CHAPI + VC-API somehow escapes.
At present, the `/exchanges/*` endpoints are being implemented to have optional authz. That is, implementers can add authz to those endpoints if their use cases require it. At least one implementer does not require authz for performing presentation exchanges, but rather does authn/authz in the `/exchanges/*` protocol itself. One use case has the Issuer implementing the `/exchanges/*` endpoint, a Holder engaging with the endpoint, and the Issuer responding with a DIDAuth request to authn the Holder. This does not utilize OAuth2 to establish software client authz.

This approach has been challenged as insecure, so this issue is to discuss the attack models and security characteristics of this approach.
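The in-protocol authn approach described above can be sketched as a challenge/response exchange: the issuer answers the Holder's first POST with a VPR containing a fresh challenge, and only issues once a DIDAuth response echoes that challenge. Signature verification is stubbed out here, and all names are hypothetical rather than VC-API normative.

```python
import secrets

pending = {}  # exchange_id -> expected challenge

def start_exchange(exchange_id):
    """First POST to /exchanges/*: respond with a VPR-style DIDAuth request."""
    challenge = secrets.token_urlsafe(16)
    pending[exchange_id] = challenge
    return {"verifiablePresentationRequest": {
        "query": [{"type": "DIDAuthentication"}],
        "challenge": challenge}}

def verify_proof(presentation):
    # Stub: real code cryptographically verifies the VP's proof against a
    # verification method in the holder's DID document.
    return True

def continue_exchange(exchange_id, presentation):
    """Second POST: verify the DIDAuth response, then issue a credential
    bound to the authenticated holder DID."""
    expected = pending.pop(exchange_id, None)  # challenge is single use
    if expected is None or presentation.get("proof", {}).get("challenge") != expected:
        raise PermissionError("bad or replayed challenge")
    if not verify_proof(presentation):
        raise PermissionError("invalid proof")
    holder_did = presentation["holder"]
    return {"type": "StudentID", "credentialSubject": {"id": holder_did}}
```

The challenge being fresh and single-use is what prevents a captured DIDAuth response from being replayed; the rest of the security argument then reduces to the unguessability of the exchange URL and the soundness of the proof verification.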