Closed mxshea closed 2 months ago
@mxshea , agreed. In fact, we just merged this requirement yesterday:
https://uncefact.github.io/spec-untp/docs/specification/VerifiableCredentials
"MUST implement the did:web method as an Organizational Identifier"
(The formatting sucks. That's my bad, and I can't blame you for overlooking it! We will need to get that cleaned up)
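For readers new to did:web, the resolution step the requirement implies is a simple identifier-to-URL mapping defined by the did:web method specification. A minimal sketch (the helper name is mine, not from the spec):

```python
# Sketch: mapping a did:web identifier to the URL where its DID document
# is hosted, following the did:web method specification.
from urllib.parse import unquote

def did_web_to_url(did: str) -> str:
    """Return the HTTPS URL of the DID document for a did:web identifier."""
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError("not a did:web identifier")
    # The method-specific id uses ':' as a path separator and may
    # percent-encode characters such as a port colon (%3A).
    parts = [unquote(p) for p in did[len(prefix):].split(":")]
    host, path = parts[0], parts[1:]
    if path:
        return f"https://{host}/{'/'.join(path)}/did.json"
    # A bare domain resolves to the well-known location.
    return f"https://{host}/.well-known/did.json"

print(did_web_to_url("did:web:example.com"))
# https://example.com/.well-known/did.json
print(did_web_to_url("did:web:example.com:orgs:acme"))
# https://example.com/orgs/acme/did.json
```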
@nissimsan, within the requirements, it is stated that implementors:
MUST implement the enveloping proof mechanism defined in [W3C-VC-JOSE-COSE] with JOSE (Section 3.1.1)
I'm curious why we are prohibiting the use of an embedded proof, given that it is allowed in the VCDM.
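For context, an enveloping proof wraps the entire credential as the payload of a JWS (or COSE Sign1), whereas an embedded proof adds a `proof` property inside the document. A minimal sketch of the enveloping shape, using HS256 only so the example is dependency-free (VC-JOSE-COSE implementations use asymmetric algorithms such as ES256, and the `typ` value here is illustrative):

```python
# Sketch of an enveloping proof: the whole credential becomes the payload
# of a JWS in compact serialization, rather than carrying an embedded
# `proof` property. HS256 is used only to keep the example self-contained.
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_enveloping(credential: dict, key: bytes) -> str:
    header = {"alg": "HS256", "typ": "vc+ld+json"}  # typ value is illustrative
    signing_input = (
        f"{b64url(json.dumps(header).encode())}."
        f"{b64url(json.dumps(credential).encode())}"
    )
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

vc = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential"],
    "issuer": "did:web:example.com",
    "credentialSubject": {"id": "did:web:example.com:products:42"},
}
jws = sign_enveloping(vc, b"shared-secret")
print(jws.count("."))  # 2 -- header.payload.signature
```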
@nissimsan, I noticed in the Normative References
section that we have linked to the V2 W3C VCDM.
Will it be a requirement that implementors conform to the V2 W3C VCDM?
Hey @ashleythedeveloper, fair questions - cheers!
First of all, I am attempting to align around requirements which I see gaining traction elsewhere. So please see it as an attempt to make the world a simpler place. The more choices we can agree on, the less ambiguity for the world -> less implementation effort and more interoperability...
Additional arguments for signing mechanism include:
@nissimsan @ashleythedeveloper I'd like to make a case for Data Integrity (embedded) proofs as a securing mechanism, which is a W3C Candidate Recommendation Draft:
- There are many active test suites available to demonstrate interoperability
- It aligns with the GS1 VC whitepaper
- There are already many known implementations
- It allows for parallel signatures
- It allows for selective disclosure
BC Gov is evaluating DI proofs as a bridge between their existing VC infrastructure and the UNTP specification, and we are interested in participating in a first implementation demo for the BCMinesActPermit.
I don't believe data integrity proofs are a good choice for securing complex trade data (or really any data, since they secure transformations of media-type serializations rather than the bytes or data directly).
See: https://tess.oconnor.cx/2023/09/polyglots-and-interoperability
Data integrity proofs that use https://www.w3.org/TR/rdf-canon/ are trivial to exploit.
At best this leads to variable runtime for verification, where the runtime increases with the size of the input data, up to failing when the computation exceeds a timeout. At worst, it's a denial-of-service vulnerability or a set of additional JSON Schema based error-handling cases, see: https://www.w3.org/TR/rdf-canon/#dataset-poisoning
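One way a verifier can hedge against this, sketched below with a hypothetical `canonicalize` stand-in (the guard thresholds are arbitrary assumptions, not from any spec): bound the input before canonicalizing, since blank-node-heavy graphs are what drive the cost up.

```python
# Sketch: bounding verifier work before RDF canonicalization, as one
# mitigation for the dataset-poisoning issue noted in the RDF canon spec.
# The limits below are illustrative; `canonicalize` stands in for a real
# RDFC-1.0 / URDNA2015 implementation.

MAX_BYTES = 64 * 1024
MAX_BLANK_NODES = 100

def guarded_canonicalize(document: str) -> str:
    if len(document.encode()) > MAX_BYTES:
        raise ValueError("document too large to canonicalize safely")
    # Dense blank-node structures are what make canonicalization expensive.
    if document.count("_:") > MAX_BLANK_NODES:
        raise ValueError("too many blank nodes")
    return canonicalize(document)

def canonicalize(document: str) -> str:
    # Stand-in only: a real implementation would run RDFC-1.0 here.
    return "\n".join(sorted(document.splitlines()))
```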
It allows for parallel signatures
This increases verifier burden and creates interoperability issues... which signature matters? How many signatures is too many? (COSE, JOSE and CMS have always supported parallel signatures, see https://datatracker.ietf.org/meeting/119/materials/slides-119-pquip-composite-vs-parallel-signatures-comparison-00)
It allows for selective disclosure
Is selective disclosure a requirement? Have you measured the performance trade-offs between data integrity proofs and the alternatives being developed, such as ISO mDoc and SD-JWT?
It takes only a few minutes to convince yourself that more performant alternatives are available.
Even if you plan to process the data model in RDF (as triples), using Data Integrity Proofs is a mistake, since they do not actually secure the JSON-LD bytes.
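The point about transformations vs. bytes can be illustrated in a few lines; here `json.dumps` with sorted keys stands in for a real canonicalization such as JCS or RDFC-1.0:

```python
# Sketch illustrating the "does not secure the bytes" argument: two JSON
# documents with different bytes share one canonical form, so a signature
# over the canonicalization cannot distinguish them, while a JWS-style
# signature over the raw bytes can.
import hashlib, json

doc_a = '{"issuer": "did:web:example.com", "value": 1}'
doc_b = '{"value": 1,    "issuer": "did:web:example.com"}'

def canonical_digest(doc: str) -> str:
    # Stand-in canonicalization: parse, then re-serialize deterministically.
    canon = json.dumps(json.loads(doc), sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canon.encode()).hexdigest()

# Same digest over the canonical form...
print(canonical_digest(doc_a) == canonical_digest(doc_b))  # True
# ...but different digests over the actual bytes.
print(hashlib.sha256(doc_a.encode()).hexdigest()
      == hashlib.sha256(doc_b.encode()).hexdigest())       # False
```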
I'm not surprised to see BC Gov evaluating data integrity proofs because of the desire to deploy anonymous credentials. Data integrity proofs do provide a straightforward way to start using and testing experimental cryptography without waiting for code points or standardization (from NIST / CFRG / IETF).
At some point, experiments become de facto standards, and perhaps data integrity proofs will "win" the market share of digital credentials... and replace JWTs as the dominant credential format for identity and access use cases.
Until then, I'd recommend using JWT to secure JSON (JSON-LD if you must), and conservative (well supported) cryptography such as ES256.
Since the above links to a graph in a repository of mine, I feel compelled to note that the description and arguments above are misleading. I'm sorry to say, but it would also appear that none of the counterarguments or perspectives have been offered, despite the author being aware of many of them. The reader is expected to dismiss something in essentially "a few minutes". A careful reader should take note of these things.
I consider the above to be a form of the Gish Gallop rhetorical method -- and given how often the above is deployed in the service of pushing a particular approach, there is simply not enough time for anyone to be responsive to it in all of its various places and forms. All one can sensibly do is call it out by its contours.
I wish this community the best in making a well-informed decision. This includes considering the trade offs in full; such trade offs will always exist with technologies that take considerably different approaches and offer different features.
This thread is getting interesting. I have four comments.
I'd strongly encourage input from people with deeper technical expertise than myself - but any technical comparisons must be between solution alternatives that meet the business needs described.
Either way, we will be open to all technical solutions that meet requirements, including IETF stuff as well as W3C stuff and AnonCreds stuff - and we will have an opportunity to road test them during pilots. So unless this argument is decisively resolved before pilots, we'll just design the pilots to test each proposed solution and let the results inform our recommendations.
@onthebreeze,
So unless this argument is decisively resolved before pilots, we'll just design the pilots to test each proposed solution and let the results inform our recommendations.
I think this is a fine strategy, just be careful to design the test criteria to measure and confirm or dismiss the correct hypothesis.
@OR13 thanks for providing more information behind the proposal
These requirements were pulled in from the DHS SVIP/CBP contract work without much discussion (at least none that I could trace back to). Will the UNTP be an extension of the CBP work, or should it specify its own set of requirements catered to its architecture design?
My issue with committing so early to strong requirements is that it takes a solution-first approach with little consideration of the problem to solve. However, engaging early in arguments for each suggestion is good to lay out the options/opinions.
The embedded vs enveloped discussion is an area of strong disagreement within the W3C community, and it's not a decision to make without consideration as it involves more than just the signature. It will affect how entities communicate with each other and how data will travel. It's still not clear to me how DPPs will move around. I feel it will be a very different data exchange model between entities while the supply chain is being built vs. when it is discovered after the fact (transforming the product vs. discovering how the product was transformed). Not everything can rely 100% on discoverability.
My arguments are mostly around the Conformity Credentials since this is where I will put most of my attention towards.
There are very few implementers involved at the moment, and fewer still technical implementers. BC Gov just went through a round of funding to develop a W3C VC data model bridge catered to the AnonCreds specification, led by DSR Corp and other reputable organizations. They are now looking to be amongst the first technical implementers of this specification as an authority, mostly around the conformity credential for permits, leases and licences (and maybe sustainability claims down the line). This is a valuable contribution and entity to have this early in the prototyping.
While interesting, the IETF SCITT architecture model seems very different from what I understand this specification to envision. There is a strong focus on discoverability in the UNTP design instead of direct exchanges between partners. Was this previously addressed?
For the argument around tooling: the reasoning that we should by default choose whichever technology has more existing support is an anti-innovation approach. Creating new tools is only a matter of time and investment. Whatever the outcome of this work, BC Gov will make its contributions available where applicable as open source software through the Aries Cloud Agent Python (ACA-Py) project, which could become an ideal functional agent for future implementers. ACA-Py has already demonstrated interoperability with the vc-api as an issuer/verifier, and an open-sourced controller has been made demonstrating traceability interop, notably covering status lists. ACA-Py was also shortlisted and highly commended as an open source digital government identity solution by the UNDP last February.
Mandating JWT instead of JSON-LD for the VC model seems odd when hosting public credentials at an endpoint for discoverability, which is likely how conformity credentials will be handled.
AFAIK selective redaction is still mentioned in the confidentiality section.
Glad this can stimulate interesting conversations.
COSE and JOSE can both be used to secure arbitrary content types.
Since you mentioned SCITT, that IETF WG chose to focus on COSE, so for example a JWT can only be submitted to a transparency service conforming to SCITT if it is wrapped in a COSE Sign1.
This is similar to how JSON-LD application/vc+ld+json can be signed with JWS or COSE Sign1.
ISO mDoc also uses COSE Sign1.
In order of interest considering innovation, security and performance (size and compute time), I would recommend the following:
For JSON-based credential formats: JWT... and when they are ready: SD-JWT, JWP.
For COSE / CBOR based claimsets: CWT... and when they are ready: SD-CWT, CWP.
There are other ways to sign json, you could use PGP, SSH signatures, DSS, DER encoded raw signatures, etc....
CFRG and JOSE are working on BBS proofs for JSON and CBOR.
The German government has developed repudiable signatures based on key agreement, to reduce the damage of stolen credentials that might have been based on non-repudiable signatures.
It's not reasonable to ask verifiers to support all these mechanisms, and the reality is that some of these schemes have very few independent implementations or little industry penetration, making them poor choices to recommend if the goal is to make it easy to generate conformant credentials.
It's true, you can define semantic models, and if regulators are convinced of the value of the model, they can mandate a specific serialization and securing mechanism, to try to ensure interoperability and make adoption easy.
That's essentially how ISO mDoc works, instead of JSON-LD, it uses ISO namespaced claims.
The UN could do the same, and if everyone agreed to use the UN vocabulary, a lot of global supply chain security modeling issues might be addressed... Or the model could become a source of frustration due to the choices made and their lack of industry support despite being mandated.
The choices you make when designing a profile, determine how much value it can provide, and also who can help you deliver that value at scale.
It's important to make sure you have sufficient buy in from implementers.
By making choices, you attract support from people who want to work on a particular technology stack and you repel people who don't want to work on a particular stack.
I don't know what the right choices are for this group, but I've got opinions about what technologies I want to work on.
FWIW, I think the traceability work has outgrown its incubation period in the W3C CCG, and I would love to see it shut down, and the parts that are actually useful moved to an organization that can properly support an international view of supply chain risk management.
@onthebreeze,
- Not picking winners sounds good, but it only pushes these choices downstream to whoever is looking to us to make recommendations. The article on polyglots argues this point well. It would be easy for us not to choose. I'm guilty of this myself, suggesting we take the easy route. But more ambiguity leads to less interoperability.
- You are conflating payload with proofing method. This is not a discussion about support for JSON-LD, but whether the signature is applied at the JSON-level or the expanded LD graph-level.
- +1.
- We agreed two meetings ago to descope selective disclosure/redaction from UNTP requirements, so let's not surface that as an argument. Also, I don't see how that ties back to performance considerations, which IMO should be a factor for us, e.g. UN sustainability goal 12 I referenced earlier.
Thanks @nissimsan - agree that ambiguity challenges interop. (1) Let's aim to be as specific as possible, but where we are making choices between options, make sure our recommendations are informed by evidence from testing. (2) Noted about JSON-LD signatures. (3) Cool. (4) Regarding selective redactions, yep, it's parked for now. Only mentioned it because I thought maybe that performance graph included some selective disclosure methods.
My suggestion is that the protocol definition should be as decentralised as possible, but not necessarily aligned to any one higher level protocol (KERI, OIDC4VC, DIDCom). With that in mind, I would suggest that this initiative should test out the Trust Spanning Protocol which has been designed under the TrustOverIP foundation. The first version of the protocol is soon to be in implementation draft.
Locking in any one set of VC definitions, VIDs (verifiable identifiers) and business level protocol isn't a good idea at this stage IMHO. However, we would look to test this with an initial set of agreed technologies. A lot of the options have been identified in the threads above.
Discussed on the call. Next step actions to close this issue:
Unfortunately the Thursday call falls around the 3am mark for me so I'll be unable to attend those.
For BC's participation in a first round of pilot implementation, we will be looking at the following: publish a signed BCMinesActPermitCredential.
The issuing platform will aim to be compliant with the following test-suites:
We will leverage the EECC vc-verifier project for interoperability demonstration, as well as a list of vc-api implementers, notably the univerifier from danubtech.
@onthebreeze @nissimsan if you haven't come across the EECC vc-verifier, I strongly recommend having a look. It's based on libraries from DigitalBazaar, authors of the VC/DID specifications. It's an open source project that could be forked and leveraged by the UNTP spec, as it fits the model very well.
It provides a UI where you input a link to a publicly hosted credential and it will verify it, adding functionalities such as downloading it as a pdf or json.
They have an example of a product passport credential
They also have an interesting feature where the public credential can have redacted components, and a viewer can authenticate through an OIDC4VP invitation to reveal additional attributes.
I would like to add a note about backwards compatibility. While listing VCDM 2.0 as a requirement is a reasonable choice, it only moved to Candidate Recommendation in February. There are most likely already credentials out there using VCDM 1.1 which could be leveraged in this work. Something to keep in mind...
Some proposed business requirements for Verifiable Credentials technology recommendations.
An important principle in all this is probably that we should be fairly specific in recommendations for issuing whilst accommodating greater variation for verification - so that we drive consistency and maximise interoperability at the same time. So for example we probably would not recommend issuing any UNTP credentials as ISO mDL - but a discovered verification graph may well include some mDL identity credentials that are important to verify.
Steve,
What about interoperability, and providing assurance across ecosystems?
A question on very long-lived VCs, particularly if the DID is non-blockchain based: what happens if the issuing identifier is no longer available (company out of business…)? E.g. did:web is used, but the website is no longer available. Does this push a recommendation to use persistent storage?
On Apr 5, 2024, at 1:36 AM, Steven Capell @.***> wrote:
Some proposed business requirements for Verifiable Credentials technology recommendations.
- VC technology recommendations must support tamper detection, issuer identity verification, and credential revocation so that verifiers can be confident of the integrity of UNTP credentials. (This one is just a statement of the obvious.)
- VC technology recommendations for issuing UNTP credentials should be as narrow as practical and should align with the most ubiquitous global technology choices so that technical interoperability is achieved with minimal cost.
- VC technology recommendations should support backwards compatibility so that credentials issued in support of long-lived goods such as EV batteries or construction products can still be verified years after issue.
- VC technology recommendations must support both human readable and machine readable credentials so that uptake in the supply chain is not blocked by actors with lower technical maturity.
- VC technology recommendations must support the discovery and verification of credentials from product identifiers so that verifiers need not have any a-priori knowledge of or relationship to either the issuers or the subjects of credentials. Inanimate objects do not create verifiable presentations.
- VC technology recommendations must support the use of linked data so that data from multiple independent credentials can be aggregated into a verifiable graph that represents the end-to-end supply chain.
- VC technology recommendations should value performance so that graphs containing hundreds of credentials can be traversed and verified efficiently.
- VC technology recommendations must meet any regulatory requirements that apply in the countries in which credentials are issued or verified.
- VC technology recommendations should support the capability for any supply chain actor to redact data in any credential without impacting the cryptographic integrity of the credential so that actors can hide any information they deem to be commercially sensitive.
- what else?
An important principle in all this is probably that we should be fairly specific in recommendations for issuing whilst accommodating greater variation for verification - so that we drive consistency and maximise interoperability at the same time. So for example we probably would not recommend issuing any UNTP credentials as ISO mDL - but a discovered verification graph may well include some mDL identity credentials that are important to verify.
Thanks for this initial list Steve.
I've provided comments below and suggestions for additional requirements. They obviously need to be grouped and structured a bit better (by interactions, governance, operations and change, non-functional, commercial) so that we can ensure that we've got sensible coverage. We need to develop these as input into the selection process for technical approaches, standards and protocols.
Should these be their own PR?
Cheers, Jo
VC technology recommendations must support tamper detection, issuer identity verification, and credential revocation so that verifiers can be confident of the integrity of UNTP credentials. (This one is just a statement of the obvious.) - Good. These need to be separate requirements.
VC technology recommendations for issuing UNTP credentials should be as narrow as practical and should align with the most ubiquitous global technology choices so that technical interoperability is achieved with minimal cost. - The ability to utilise APPROPRIATE existing and evolving global standard protocols and data standards is a good intention, but what makes them appropriate is the alignment of these to the other requirements (adoption, performance, cost, dependencies etc.). The type of capabilities would include - decentralised activities, long-lived credentials, multi-credential presentation, selective disclosure, low cost, resilient, (appropriately) performant, open, adoptable, evolution capable and phased migration capable etc... The use of ubiquitous (widely used) standards and technical protocols will be important for ease of adoption - this may influence technology choice at each point in the implementation lifecycle.
VC technology recommendations should support backwards compatibility so that credentials issued in support of long-lived goods such as EV batteries or construction products can still be verified years after issue. - This needs careful consideration - certain credentials should be able to support longer lifecycles - not all will be necessarily long-lived. Backwards compatibility is an old school way of thinking. The ability to continue to verify older versioned credentials should be possible. If enhancements are required, new or additional credentials should be able to be issued and included in multi-credential presentations.
VC technology recommendations must support both human readable and machine readable credentials so that uptake in the supply chain is not blocked by actors with lower technical maturity. - sufficiently trustworthy human-readable (you can't see cryptography), technically verifiable, and digitally native credential versions. Note that we should minimise the dependency on service providers and maximise the potential for solutions to be provided by different service providers (based on common requirements and data and protocol standards)
VC technology recommendations must support the discovery and verification of credentials from product identifiers so that verifiers need not have any a-priori knowledge of or relationship to either the issuers or the subjects of credentials. Inanimate objects do not create verifiable presentations. - standard interaction mechanisms must be able to be determined at the point of interaction (OOB interactions, issuance, presentation)
VC technology recommendations must support the use of linked data so that data from multiple independent credentials can be aggregated into a verifiable graph that represents the end-to-end supply chain.- linked data implies the ability to enhance credentials as required for usage and governance reasons (language, etc.). The ability to combine credentials from different sources and provided at different points in the supply chain is something different
VC technology recommendations should value performance so that graphs containing hundreds of credentials can be traversed and verified efficiently. - different activities will have different performance requirements. Those activities that require the use of high volumes of credential use have to be cost and performance efficient. High performance requirements implies higher degrees of decentralisation and autonomic actions (not requiring centralised or provided services)
VC technology recommendations must meet any regulatory requirements that apply in the countries in which credentials are issued or verified. - this is a good requirement, but it's a bit broad. (a) The credential must be issued under a defined jurisdiction. (b) Governance of the SC ecosystems must support the ability to present credentials across jurisdiction boundaries and (c) verifiers must be able to determine the equivalence of credentials issued in different jurisdictions and the impact of these on use of the credentials outside of their initiating jurisdiction.
VC technology recommendations should support the capability for any supply chain actor to redact data in any credential without impacting the cryptographic integrity of the credential so that actors can hide any information they deem to be commercially sensitive. - (a) access to claims within a credential must be able to be restricted - not necessarily redacted, but not allowed to be requested - the EU uses trust lists (b) secondary credentials may be added to the supply chain process by trusted (anchor) intermediaries, to simplify the subsequent supply chain activities
what else?
An important principle in all this is probably that we should be fairly specific in recommendations for issuing whilst accommodating greater variation for verification - so that we drive consistency and maximise interoperability at the same time. So for example we probably would not recommend issuing any UNTP credentials as ISO mDL - but a discovered verification graph may well include some mDL identity credentials that are important to verify. - Correct - this implies presentation logic interoperability, not protocol interoperability or data uniformity.
The section says that did:web MUST be implemented. Does this preclude any other methods being used?
Should it say:
"At a minimum, did:web MUST be implemented"?