Closed: msporny closed this 4 years ago
- To me this looks like a signing API more than an issuing API.
It's more of a signing API than the other proposal, but does allow variability that doesn't make it purely a signing API (for example, it considers transformations on the input data to get to an output value)... like if the input representation is JSON-LD but the output representation should be a base-64 encoded JWT, or a Sovrin credential.
If you are already passing in a fully formatted credential (minus the proof), then where and how was that created in the first place?
It's created by the application that knows the type of credential it wants to create.
There may be convenience functions in the Issuer that give it that knowledge (templates, wizards, etc.)... but that's a nice to have instead of a requirement.
- The API says it returns a "Verifiable Credential, which is wrapped in a Verifiable Presentation". I don't understand this, why would an Issuer API return a Verifiable Presentation?
The issuer API may want to annotate the issuance of the VC with other information and can do that via the Verifiable Presentation. If we don't do it that way, we'll have to invent a new encapsulating container, and the argument against doing that will be: "But we already have a Verifiable Presentation and it can be used for that, because the issuer is presenting the VC to a holder."
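As an illustration of that shape, here's a minimal Python sketch (field names follow the VC Data Model; the helper function and all values are hypothetical, not part of any proposed API):

```python
# Hypothetical sketch: an issued VC returned inside a Verifiable
# Presentation envelope, so the issuer can annotate the issuance
# without inventing a new container. Values are illustrative.

def wrap_in_presentation(verifiable_credential, holder_did):
    """Wrap an issued VC in a VP, as the proposed API response would."""
    return {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiablePresentation"],
        "holder": holder_did,
        "verifiableCredential": [verifiable_credential],
    }

vc = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer",
    "credentialSubject": {"id": "did:example:holder"},
    "proof": {"type": "Ed25519Signature2018"},  # placeholder proof
}

vp = wrap_in_presentation(vc, "did:example:holder")
```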
- I added some minor comments about an apparent bias towards JSON-LD, which I think should not be there.
Any bias there was unintentional, I was just writing to what I know best (and could generate working examples for at the time). My expectation is that we fill this out for JWT, LD, and Sovrin credentials.
In general, maybe the PR could be rewritten to modify the existing API, rather than completely replacing it?
The existing API currently has a fair number of endpoints that are not necessary for basic interop. Digital Bazaar is currently concerned about a number of those endpoints, so we'd like to build up from this PR. I personally see the existing API as a good destination, after each endpoint has had considerable discussion. I don't think anyone should look at the existing API and think it has any consensus around it (wrt. implementations). For example, when I reviewed it, I looked at it briefly, it seemed like it was ok, and then when our engineers got into implementing stuff, there were lots of issues that were raised wrt. implementing something that could get us to interop.
For example, I could imagine the new "credential" and "validation" fields becoming new optional properties without removing the existing ones?
We think some of the existing fields are problematic, and would like to pick the minimum set that we can all agree to rather than everything that exists in the existing API.
Perhaps this PR could also be broken up into multiple smaller PRs, since it seems some of the changes are unrelated to each other and could be discussed separately?
I'm concerned about the time it would take to review and come to consensus on the entire existing API (and the time pressure that all of us are under to demonstrate interop). I'd like us to focus on the one thing that can get us interop rather than a bunch of other nice to have features (that we'll eventually get to, but are a lower priority than the most basic API that gets us to interop).
Overall comments:
I agree with what @peacekeeper said.
I think this PR should be for a simpler endpoint(s) in the existing API spec, to unblock interoperability.
I think we should avoid the word `sign`, as it precludes other proof formats.
We should make `format` a required enum: `['ld-proof', 'jwt']`.
We should make the endpoint descriptive for the simple use case (it's not constructing, just validating and issuing):
POST /issue-vc
/issue-vp
In the more complex case, we should make the API mutation/assembly clear in the route:
POST /construct-and-issue-vc
/construct-and-issue-vp
The wording of these routes could also be `basic` and `complex`... but I'm a fan of verbosity at this stage of development.
Lastly, the process of `issue-vc` is doing much more than applying a proof mechanism. It is validating the input and ensuring that it is valid wrt the context... which is a required field here... which means that JWT variants should never be produced with terms not defined in the context... something which is not accounted for in pretty much every JWT VC library I have seen (most ignore the context / don't check for undefined properties when applying the signature)... this can yield something which looks like a VC but is just a JWT with invalid embedded JSON-LD.
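To make that check concrete, here's a simplified sketch (hypothetical; a real implementation would resolve terms via JSON-LD expansion rather than the hardcoded term set assumed here):

```python
# Simplified sketch of the validation step described above: refuse to
# sign when credentialSubject uses terms not defined in the context.
# Real code would run JSON-LD expansion; KNOWN_CONTEXT_TERMS is a
# hypothetical stand-in for terms resolved from @context.

KNOWN_CONTEXT_TERMS = {"id", "degree", "alumniOf"}  # assumed, for illustration

def undefined_terms(credential):
    """Return credentialSubject properties not defined in the context."""
    subject = credential.get("credentialSubject", {})
    return sorted(set(subject) - KNOWN_CONTEXT_TERMS)

def validate_before_signing(credential):
    """Raise if the credential should not be signed as-is."""
    if "@context" not in credential:
        raise ValueError("@context is required")
    missing = undefined_terms(credential)
    if missing:
        raise ValueError(f"terms not defined in context: {missing}")
    return True
```

A library that skips this step can emit a JWT that looks like a VC but carries invalid embedded JSON-LD, which is exactly the failure mode described above.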
As @msporny pointed out... I'm also very concerned that we won't be able to agree on something usable if we try to tackle the more complex case first... let's agree on a simple API that we can upgrade from, not get stuck trying to build the whole car at once.
I just want to address an implicit elephant in the room regarding exchanges of this and other semantic data between DIDs for the purpose of accomplishing various activity flows: when there is an inevitable expansion of needs for the types of p2p exchange flows between DIDs - such as: negotiating reservation bookings, secondhand sales offer/response, etc., are we going to create one-off HTTP endpoints/conventions for all those atomically? Could we perhaps come up with a semantic formula that allowed us to reuse the same technical mechanism to exchange messages of any schema that can be grouped and acted upon to perform any desired flow, including this one?
Could we perhaps come up with a semantic formula that allowed us to reuse the same technical mechanism to exchange messages of any schema that can be grouped and acted upon to perform any desired flow, including this one?
@csuwildcat
That's the point of this spec... to support what you are asking for here.
The mechanism for delivering this request is being debated here.
In the simple case, where the client knows what they want the issuer to sign, this is trivial.
In the more complex case, where the client is submitting different bits of data / VCs as a presentation, and receiving a VC or VP... it's complicated... defining that interface is part of our job in this work item... this is where the overlap with https://github.com/decentralized-identity/proof-presentation is significant... I'm inviting you and everyone in DIF who is working on that to comment here... if you can clearly describe an HTTP transport for that work, then maybe it can be used here...
If you can't formalize the proof-presentation work for HTTP, you won't be able to recommend it here... since this is an HTTP-specific specification.
Generally speaking, the proof-presentation repo is about supporting the "complex" case...
A naive approach at adding support for it would be to define an HTTP Endpoint which took a POST body of form "Presentation Submission" which relied on "Presentation Definition".
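Such a naive handler might look roughly like this sketch (field names loosely follow DIF Presentation Exchange conventions; the function and its matching logic are assumptions, not part of any spec):

```python
# Hypothetical sketch of the body-validation step for a naive
# POST endpoint that accepts a Presentation Submission and checks it
# against a Presentation Definition before issuance proceeds.

def check_submission(definition, submission):
    """Return (ok, missing_descriptor_ids).

    `definition` is a Presentation Definition with input_descriptors;
    `submission` is a Presentation Submission with a descriptor_map.
    """
    required = {d["id"] for d in definition.get("input_descriptors", [])}
    provided = {m["id"] for m in submission.get("descriptor_map", [])}
    missing = sorted(required - provided)
    return (not missing, missing)
```

On a real endpoint, a `(False, [...])` result would map to a 400 response listing the unmet input descriptors.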
Any chance anyone from DIF would consider opening an issue / PR to do this?
@msporny: @csuwildcat is making the assumption that the API we're designing here is the principal interaction mechanism between issuers and prospective holders, when the goal is to produce an issued credential. This is the very assumption that made me concerned and that I commented about above. Having the API be "internal" (an interaction mechanism between different entities within the issuer) has very different requirements than having it be "external" (an interaction that crosses sovereign identity boundaries). Security and privacy analysis are different. Duties and roles are different. Who can initiate is different.
The concern Daniel raises is another reason why I don't like web APIs for this problem and many others, if we want to support SSI: there's a proliferation of endpoints. One of the characteristics of DIDComm's peer-to-peer approach is that there's only one endpoint, no matter how many protocols you want to run. It's a message-processing endpoint that delivers exactly what Daniel is asking for ("a semantic formula that allowed us to reuse the same technical mechanism to exchange messages of any schema that can be grouped and acted upon to perform any desired flow, including this one"). But nobody in the CCG wanted to contemplate my suggestion that we define this API as an exchange of messages (which would allow HTTP clients but also DIDComm clients), so now we are inventing a whole new solution that's essentially a competitor to what could be a viable DIDComm-based standard that already has interoperable deployments in production, as well as a fairly good written spec that supports all the CCG's credential formats. And we're saying that doing this in a peer-to-peer-friendly way is a "hard requirement" even though we have a perfectly good working implementation that does all of this, in open source with an Apache 2 license, for multiple programming languages. It feels to me like the W3C CCG is reinventing the wheel because there's a prejudice for web technologies (perhaps natural, given the W3C sponsorship) and thus against DIDComm.
I could shrug my shoulders and just say, "Oh, we'll just let the different approaches sort themselves out in the market. Best ideas win." But the explicit impetus of this effort was a request from the US government's DHS, that wants to require all VC issuers to use the same standard API. This is what's making me double down on the argument: the proposed direction picks an architecture I don't believe in for external interactions, and the context forces me to comply with it or be marginalized. Can I talk you into giving up one of these goals?
@OR13:
That's the point of this spec... to support what you are asking for here.
I think not. This spec is about only one subcategory of messages and one subcategory of schemas (those involved in issuance). I took Daniel's ask as being far broader -- any possible interaction type. @csuwildcat Did I misinterpret you?
Regarding DIDComm, if there is a way to use DIDComm messages and HTTP that is stable enough to be pulled out and applied here, I'm all for that....
A specific set of http endpoints could be added to this spec to support DIDComm.
POST /didcomm/issue-vc
... etc. In order to do this, DIDComm would need to express a credential issuance flow for VCs and VPs over HTTP.
I think it would be worth sketching out what that would look like... maybe including the proof-presentation
work at the same time...
@OR13
Regarding DIDComm, if there is a way to use DIDComm messages and HTTP that is stable enough to be pulled out and applied here, I'm all for that....
Would you consider tunneling HTTP messages over DIDComm?
If you place an adapter between the client system and this didcomm endpoint, you could use http-over-didcomm.
Would you consider tunneling HTTP messages over DIDComm?
@llorllale
It sounds interesting, but if the goal is to provide a unified HTTP API that supports both vanilla HTTP and DIDComm, I don't think that's a solution to this issue about supporting both institutional and SSI interfaces.
Does DIDComm work over HTTP?
@msporny
The existing API currently has a fair number of endpoints that are not necessary for basic interop. Digital Bazaar is currently concerned about a number of those endpoints, so we'd like to build up from this PR.
All endpoints except for the one that issues a credential are marked as optional. There has been support for removing some of them. There is also a comment to keep them. See https://github.com/w3c-ccg/vc-issuer-http-api/issues/1.
Either way, I don't see what the discussion about optional endpoints has to do with proposing major changes to the single required endpoint?
The way I read some of the HTTP API approach here seemed to indicate it was going for a REST route-based scheme for addressing the functions, vs having the functions triggered by the semantic message-based object types and values of the messages themselves. If this is true, could we instead treat HTTP as a dumb pipe, through which we pass messages, the contents of which drive the interaction, not the routes over which the payloads are sent?
@msporny @OR13 @peacekeeper as I read the API, two things are missing:
@awoie what anchoring do you envision taking place here? Schemas should already be published, and so should the relevant identifiers.
@awoie what anchoring do you envision taking place here? Schemas should already be published, and so should the relevant identifiers.
E.g., timestamping of the VCs. I saw some people having on-chain VC registries without discussing whether that is good or bad practice. That is just one example of a background process.
@awoie great points!
I totally agree with the async point... There are some common API patterns for handling 202 Accepted with a Location header that we might want to consider.
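For example, that pattern might be sketched like this (the job store, paths, and handler names are all hypothetical):

```python
# Sketch of the 202 Accepted + Location pattern for async issuance.
# A POST that cannot complete immediately (e.g. because anchoring or
# timestamping runs in the background) returns 202 with a Location
# header pointing at a status resource the client can poll.

import uuid

JOBS = {}  # job_id -> {"status": ..., "result": ...}  (in-memory stand-in)

def start_issuance(credential_request):
    """Handle POST: enqueue the work and point the client at a job resource."""
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"status": "pending", "result": None}
    # Background work (validation, anchoring, signing) happens elsewhere.
    return 202, {"Location": f"/credentials/issuance-jobs/{job_id}"}

def poll(job_id):
    """Handle GET on the job resource: report progress or the finished VC."""
    job = JOBS[job_id]
    if job["status"] == "pending":
        return 200, {"status": "pending"}
    return 200, {"status": "done", "result": job["result"]}
```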
@OR13 and @csuwildcat : I hope this is a tangent because we decide the API that's the subject of this PR is meant to be called only by entities within the identity of the issuer. However, I wanted to describe what a message-oriented HTTP API would look like in my imagination, since both of you asked about my mental model. To avoid bogging down this thread overly with possibly tangential details, I've just put the write-up in a doc: https://docs.google.com/document/d/1X4vLF3EFoqFQKsTAxPfGCgJAQ4xKqoGewDHq0xTGMp8/edit#
@dhh1128 Very interesting writeup! Reading it, I can't help but think that what you're envisioning is basically JSON-RPC - it's async, the semantics are entirely determined by payloads, etc.
@dmitrizagidulin : Yes, I agree that what I described is quite similar to JSON-RPC in many respects. DIDComm as a whole might have settled on JSON-RPC with extensions, if it weren't for some of the routing and encryption requirements. For the limited use case I was writing up, JSON-RPC is a close match.
There are several different discussions happening in parallel now.
The main change that this PR proposes seems to be this:
Other discussions should probably happen elsewhere (feel free to raise separate PRs for those!):
It is worth pointing out that this:
A credential (minus the proof) has already been created elsewhere.
Conflicts with objective no. 2 from the architecture model proposed in #16.
@llorllale wrote:
It is worth pointing out that this:
A credential (minus the proof) has already been created elsewhere.
Conflicts with objective no. 2 from the architecture model proposed in #16.
Hmm, I don't see it that way. Fundamentally, we shouldn't be "abstracting" the underlying data model... because the data model is the Verifiable Credentials Data Model... and we need to deliver it via some representation of the data model. If the architecture document is suggesting otherwise, I'd argue that it is just flat out wrong. We shouldn't be inventing a new data model, we should be using the one that we have consensus and a global standard on (the Verifiable Credentials Data Model).
I'm still in favor of splitting VC/VP Construction and VC/VP Issuing into separate APIs...
Combining them is where we are having disagreement, but isn't it true that we could agree if they were separated?
POST /vc/construct { options } => VC Without Proof
POST /vc/issue { vc without proof, options } => VC With Proof
POST /vp/construct { options } => VP Without Proof
POST /vp/issue { vp without proof, options } => VP With Proof
Then the following one shot APIs which just do all the work in one request handler...
POST /credentials { options } => VC With Proof
POST /presentations { options } => VP With Proof
There will be services which the issuer has access to that the client does not... this will require some of these objects to be constructed on the server... for example, linked identifiers such as SSN / driver's license / manufacturer ID / purchase order / invoice number / tracking number / IP address / account ID / user name, etc...
The server (issuer) will have access to services to look these up and apply them correctly... In more complex scenarios, the issuer might even blind the client from seeing them, as is the case for encrypted JWTs...
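The split above could be sketched as follows (all names are hypothetical, and the proof is a stub, not a real signature):

```python
# Sketch of the proposed split: /vc/construct builds the credential
# (possibly enriching it with server-side lookups the client cannot
# perform), /vc/issue only applies a proof. The one-shot endpoint
# simply composes the two.

def construct_vc(options):
    """POST /vc/construct: build a credential without a proof."""
    credential = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "issuer": options["issuer"],
        "credentialSubject": dict(options.get("subject", {})),
    }
    # Server-side enrichment, e.g. linked identifiers looked up from
    # internal services (stubbed here as a plain dict of options).
    credential["credentialSubject"].update(options.get("linked_identifiers", {}))
    return credential

def issue_vc(credential, options):
    """POST /vc/issue: apply a proof to an already-constructed credential."""
    signed = dict(credential)
    signed["proof"] = {"type": options.get("proof_type", "Ed25519Signature2018")}
    return signed  # placeholder proof, not a real signature

def construct_and_issue_vc(options):
    """POST /credentials: the one-shot endpoint composing both steps."""
    return issue_vc(construct_vc(options), options)
```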
Combining them is where we are having disagreement, but isn't it true that we could agree if they were separated?
Yes, exactly what I was thinking, too...
Here's a proposal along that direction: #20
@msporny
@llorllale wrote:
It is worth pointing out that this:
A credential (minus the proof) has already been created elsewhere.
Conflicts with objective no. 2 from the architecture model proposed in #16.
Hmm, I don't see it that way. Fundamentally, we shouldn't be "abstracting" the underlying data model... because the data model is the Verifiable Credentials Data Model... and we need to deliver it via some representation of the data model. If the architecture document is suggesting otherwise, I'd argue that it is just flat out wrong. We shouldn't be inventing a new data model, we should be using the one that we have consensus and a global standard on (the Verifiable Credentials Data Model).
We don't see it that way but we are certainly open to discussion.
Combining them is where we are having disagreement, but isn't it true that we could agree if they were separated?
POST /vc/construct { options } => VC Without Proof
POST /vc/issue { vc without proof, options } => VC With Proof
I'm not sure if there would ever be any point in the first API (VC Without Proof). Both the current API definition and the one proposed by this PR return a VC With Proof. The difference is only in the input, not the output.
My idea would be to just add an optional input parameter called "credential" to the current API definition. If it's present, then the behavior is what you call "issue". If it's not present, then the behavior is what you call "construct"+"issue".
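That branching could be sketched as a single handler (hypothetical names; the construct and issue steps are stubbed):

```python
# Sketch of the suggestion above: one endpoint whose behavior branches
# on whether a fully formed "credential" was supplied in the request.

def handle_issue_request(body):
    """If body has "credential", just issue; otherwise construct, then issue."""
    if "credential" in body:
        credential = dict(body["credential"])  # caller built it: issue only
    else:
        # Stand-in for the construction step (stubbed).
        credential = {"constructed": True, **body.get("options", {})}
    credential["proof"] = {"type": "Ed25519Signature2018"}  # stub proof
    return credential
```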
My idea would be to just add an optional input parameter called "credential" to the current API definition.
The concern there is that it's overloading the API to do too many things (composition, syntax checking, transformation, signing). It complicates implementations because now there are even more combinations of inputs that lead to error paths.
I think we have agreement on the following:
We do not seem to have agreement on the following:
Unfortunately, this puts the effort at a standstill until we can get agreement on how to address the latter two items above.
Here are the workable options that I can see presently:
The first option kicks the can down the road, but in a way that allows us to make immediate progress. The second option requires us to have a difficult discussion around what the minimum viable API is going to be.
At this point, we need to know what approach the folks involved in this PR (and others like it) would like to do. The first option might not require a call, the second one will require one or more calls.
Personally, I'm fine with either option for the time being, but would prefer the latter.
My observations are that we should not progress with this API yet because:
I suggest closing this PR and moving the discussion to proposals on issues, and getting more consensus there before attempting to make further changes.
Before merging this we should publish a release for v0.0.1:
https://github.com/w3c-ccg/vc-issuer-http-api/releases/tag/untagged-60fb0f0df91fa51e839c
I suggest closing this PR
Agreed, closing the PR.
This is a proposal for a single HTTP API endpoint to do VC issuing. The hope here is that we can start with one API endpoint that we can all agree is useful and then build on that (instead of starting with a set of optional endpoints that we may or may not want to use).
The question this PR asks is: What is the minimum required API to get to interoperability?