uncefact / spec-untp

UN Transparency Protocol
https://uncefact.github.io/spec-untp/
GNU General Public License v3.0

Verifiable Credential "Requirements" #31

Closed mxshea closed 2 months ago

mxshea commented 3 months ago

The section says that did:web MUST be implemented. Does this preclude any other methods being used?

Should it say:

At a minimum, did:web MUST be implemented?

nissimsan commented 3 months ago

@mxshea , agreed. In fact, we just merged this requirement yesterday:

https://uncefact.github.io/spec-untp/docs/specification/VerifiableCredentials

"MUST implement the did:web method as an Organizational Identifier"

(The formatting sucks. That's my bad, and I can't blame you for overlooking it! We will need to get that cleaned up)
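For anyone new to the method, here is a minimal sketch of did:web resolution (the identifier-to-URL transformation defined by the did:web method spec); the example identifiers and domain are made up:

```python
import json
import urllib.parse
import urllib.request

def did_web_to_url(did: str) -> str:
    """Map a did:web identifier to the HTTPS URL of its DID document."""
    method_specific = did.removeprefix("did:web:")
    # Colons separate path segments; percent-encoding (e.g. %3A for a port) is unescaped per segment.
    segments = [urllib.parse.unquote(s) for s in method_specific.split(":")]
    host, path = segments[0], segments[1:]
    if path:
        return f"https://{host}/{'/'.join(path)}/did.json"
    return f"https://{host}/.well-known/did.json"

def resolve_did_web(did: str) -> dict:
    """Fetch and parse the DID document (no caching, key checks, or error handling here)."""
    with urllib.request.urlopen(did_web_to_url(did)) as resp:
        return json.loads(resp.read())

# Hypothetical examples:
#   did:web:example.com              -> https://example.com/.well-known/did.json
#   did:web:example.com:issuers:acme -> https://example.com/issuers/acme/did.json
```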

ashleythedeveloper commented 3 months ago

@nissimsan, within the requirements, it is stated that implementors:

MUST implement the enveloping proof mechanism defined in [W3C-VC-JOSE-COSE] with JOSE (Section 3.1.1)

I'm curious why we are prohibiting the use of an embedded proof, given that it is allowed in the VCDM.
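To make the distinction concrete, here is a rough sketch of where the proof lives in each case (values are placeholders, not normative examples):

```python
# Embedded proof (Data Integrity): the proof is a property inside the credential.
embedded = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential"],
    "issuer": "did:web:example.com",                      # placeholder issuer
    "credentialSubject": {"id": "did:example:product-1"},
    "proof": {
        "type": "DataIntegrityProof",
        "cryptosuite": "eddsa-rdfc-2022",                 # one of several defined cryptosuites
        "proofValue": "z58...",                           # elided
    },
}

# Enveloping proof (VC-JOSE-COSE with JOSE): the credential, without a proof
# property, is carried as the payload of a JWS; the signature covers the
# serialized bytes of the payload.
enveloped = "eyJhbGciOiJFUzI1NiIs...<payload>.<signature>"  # elided compact JWS
```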

ashleythedeveloper commented 3 months ago

@nissimsan, I noticed in the Normative References section that we have linked to the V2 W3C VCDM.

Will it be a requirement that implementors conform to the V2 W3C VCDM?

nissimsan commented 3 months ago

Hey @ashleythedeveloper, fair questions - cheers!

First of all, I am attempting to align around requirements which I see gaining traction elsewhere. So please see it as an attempt to make the world a simpler place. The more choices we can agree on, the less ambiguity for the world, which means less implementation effort and more interoperability...

Additional arguments for the signing mechanism include:

PatStLouis commented 3 months ago

@nissimsan @ashleythedeveloper I'd like to make a case for Data Integrity (embedded) proofs as a securing mechanism, which is a W3C Candidate Recommendation Draft.

There are many active test suites available to demonstrate interoperability:

It aligns with the GS1 VC whitepaper

There are already many known implementations

It allows for parallel signatures

It allows for selective disclosure

BC Gov is evaluating DI proofs as a bridge between their existing VC infrastructure and the UNTP specification, and we are interested in participating in a first implementation demo for the BCMinesActPermit.

OR13 commented 3 months ago

I don't believe data integrity proofs are a good choice for securing complex trade data (or really any data, since they secure transformations of media-type serializations rather than the bytes or data directly).

See: https://tess.oconnor.cx/2023/09/polyglots-and-interoperability

Data integrity proofs that use https://www.w3.org/TR/rdf-canon/ are trivial to exploit.

At best this leads to variable runtime for verifications, where the runtime increases with the input size of the data... up to failing due to the computation exceeding a timeout. At worst, it's a denial-of-service vulnerability or additional JSON Schema based error handling cases, see: https://www.w3.org/TR/rdf-canon/#dataset-poisoning

It allows for parallel signatures

This increases verifier burden, and creates interoperability issues... which signature matters? How many signatures is too many? (COSE and JOSE & CMS have always supported parallel signatures, see https://datatracker.ietf.org/meeting/119/materials/slides-119-pquip-composite-vs-parallel-signatures-comparison-00)

It allows for selective disclosure

Is selective disclosure a requirement? Have you measured the performance trade-offs between data integrity proofs and the alternatives being developed, such as ISO mDoc and SD-JWT?

It takes only a few minutes to convince yourself that there are more performant alternatives available:

[Screenshot: performance comparison chart, 2024-04-02]

Even if you plan to process the data model in RDF (as triples), using Data Integrity Proofs is a mistake, since they do not actually secure the JSON-LD bytes.
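A small sketch of that point, assuming the pyld package and simplifying the actual Data Integrity hashing steps: the value that ends up being signed is derived from the canonicalized RDF dataset, so two byte-wise different JSON-LD documents can produce the same signed digest, whereas a JWS over the raw bytes would distinguish them.

```python
import hashlib
from pyld import jsonld  # assumes the pyld package is installed

doc_a = {"@context": {"name": "https://schema.org/name"}, "name": "Widget"}
doc_b = {"https://schema.org/name": "Widget"}  # same triple, different JSON serialization

def canonical_hash(doc: dict) -> str:
    # RDF Dataset Canonicalization (URDNA2015) over the expanded graph.
    nquads = jsonld.normalize(
        doc, {"algorithm": "URDNA2015", "format": "application/n-quads"}
    )
    return hashlib.sha256(nquads.encode()).hexdigest()

# Both documents canonicalize to the same N-Quads and therefore hash identically,
# even though their JSON bytes differ.
assert canonical_hash(doc_a) == canonical_hash(doc_b)
```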

I'm not surprised to see BC Gov evaluating data integrity proofs because of the desire to deploy anonymous credentials. Data integrity proofs do provide a straightforward way to start using and testing experimental cryptography without waiting for code points or standardization (from NIST / CFRG / IETF).

At some point, experiments become de facto standards, and perhaps data integrity proofs will "win" the market share of digital credentials... and replace JWTs as the dominant credential format for identity and access use cases.

Until then, I'd recommend using JWT to secure JSON (JSON-LD if you must), and conservative (well supported) cryptography such as ES256.
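As a rough illustration of that recommendation (assuming the PyJWT and cryptography packages; the claim set is a placeholder, not a normative VC-JWT mapping):

```python
import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec

# P-256 key for ES256; in practice the issuer's key would be published via its DID document.
private_key = ec.generate_private_key(ec.SECP256R1())

credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential"],
    "issuer": "did:web:example.com",                      # placeholder
    "credentialSubject": {"id": "did:example:product-1"},
}

# Enveloping proof: the credential is the JWT payload, signed over its bytes with ES256.
token = jwt.encode(credential, private_key, algorithm="ES256")

# The verifier resolves the issuer's public key (e.g. from its DID document) and checks the signature.
verified = jwt.decode(token, private_key.public_key(), algorithms=["ES256"])
```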

dlongley commented 3 months ago

Since the above links to a graph in a repository of mine, I feel compelled to note that the above description and arguments are misleading. I'm sorry to say, but it would also appear that none of the counter arguments or perspectives have been offered despite the author being aware of many of them. The reader is expected to dismiss something essentially in "a few minutes". A careful reader should take note of these things.

I consider the above to be a form of the Gish Gallop rhetorical method -- and given how often the above is deployed in the service of pushing a particular approach, there is simply not enough time for anyone to be responsive to it in all of its various places and forms. All one can sensibly do is call it out by its contours.

I wish this community the best in making a well-informed decision. This includes considering the trade offs in full; such trade offs will always exist with technologies that take considerably different approaches and offer different features.

onthebreeze commented 2 months ago

This thread is getting interesting. I have four comments.

  1. This community is largely focussed on business standards (eg product passport schema) rather than technical ones and will generally prefer to point to existing technical references - and probably won't pick winners. Having said that, at some point we have to do on-the-wire pilots and of course will need to agree on some interoperable technical choices. We will seek consensus from a representative set of technical experts for that purpose.
  2. Whilst the integrity and security of cryptographic proofs is of course important, it is only a part of the trust problem in supply chain transparency assessments. In most cases, a single VC like a digital product passport, no matter which cryptographic proofs are used, doesn't answer questions like "is the issuer of this passport really who they say they are?", or "is the issuer of this passport really the owner of the product identifiers listed?", or "are the sustainability claims (eg carbon intensity) in this passport really true?". Answering these kinds of questions requires assessment of a linked data graph derived from multiple VCs. For example an identity VC from a trust anchor like a government authority that says "the owner of this did:abc:123 is also this registered business ABN 123456" will allow a verifier of a passport issued by did:abc:123 to know that it really comes from well known business ABN 123456 (an illustrative sketch of this pattern appears at the end of this comment). Also, a product conformity claim in a passport is just an ambit claim by the issuer. It becomes trusted not when the cryptographic proof is verified (that only confirms no tampering) but when it can be verifiably linked to a conformity assessment VC from an accredited certifier. But that requires the verifier to match contexts across VCs - because the conformity assessment VC may be valid but might be about something different. Customs authorities are already beginning to realise that digital product passports could be to cargo clearance what personal passports are to traveller clearance - but not without the related identity confidence from linked credentials because that's the only way to know confidently "who packed the box?". All this goes to say that we consider linked data (whether JSON-LD or RDF or whatever) as critical because we need to construct and verify graphs of data that cross multiple VCs. I know there is some discussion in VC communities about JSON vs JSON-LD and the overhead of LD. Personally I don't care which technology is best so long as it works to construct verifiable graphs across multiple credentials. At present that seems to lead this community towards mandating JSON-LD.
  3. A key business requirement in supply chain transparency is for verifiers who have no relationship with the issuer to be able to discover and verify product conformity claims. The verifier could be two or three steps in the value chain away from the issuer (eg an EU seller of batteries made in China verifying the operating permit of the copper mine in Canada that is discovered by following the value chain traceability thread). The verification could also be years after issuing (eg a recycler receives a battery after several years of use - or a building audit is looking at construction steel certificates decades after construction). This requirement leads us to a key design principle that "if you have the ID of a product then you can get verifiable data about the product" - even years later and even if you don't know the issuer. So it's all about discovery of digital credentials from inanimate objects. And inanimate objects don't make verifiable presentations - it's just a box with a barcode. This goes back to how we have to address integrity challenges - more about verifiable linked data graphs than about holder bindings in VPs.
  4. Selective disclosure in the usual sense (eg individuals selectively disclosing age from a driver's license when buying alcohol) is not really a requirement in supply chain transparency - at least we haven't found a requirement yet. However it does appear to be possible that a selective redaction method might be needed to address a rather different requirement. The requirement in this case is for a buyer of goods (eg certified organic cotton) to pass the certificate received from their supplier on to their customers but to redact the name of the supplier. Basically whenever verifiable data moves beyond the typical "one up one down" value chain relationship then there may be a need to redact it. This means that anyone (ie neither the issuer nor the subject) who is given or discovers a VC may choose to redact it. A Merkle / hash-tree approach appears to be the simplest and most performant approach to this problem (a minimal sketch follows this list). All this goes to say that I'm not sure the performance discrepancy presented in the earlier comment is relevant. Any tech performance comparisons would need to use linked data in both (for reasons given above) and should compare selective redaction methods that meet the business requirements.
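For illustration of point 4 only, a minimal salted-hash sketch (a flat digest list rather than a full Merkle tree; field names and values are made up):

```python
import hashlib, json, os

def digest(field: str, value: str, salt: bytes) -> str:
    return hashlib.sha256(salt + f"{field}={value}".encode()).hexdigest()

claims = {"product": "organic cotton", "certifier": "Cert Co", "supplier": "Acme Mills"}
salts = {k: os.urandom(16) for k in claims}
digests = {k: digest(k, v, salts[k]) for k, v in claims.items()}

# The issuer signs the digest list (signature elided here); the digests reveal nothing
# about redacted values as long as the salts for those values are withheld.
signed_payload = json.dumps(digests, sort_keys=True)

# Any downstream actor can redact "supplier" by withholding its value and salt;
# the verifier recomputes digests for the disclosed fields and checks them
# against the signed digest list, so integrity is preserved.
disclosed = {k: (claims[k], salts[k]) for k in claims if k != "supplier"}
for field, (value, salt) in disclosed.items():
    assert digest(field, value, salt) == digests[field]
```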

I'd strongly encourage input from people with deeper technical expertise than myself - but any technical comparisons must be between solution alternatives that meet the business needs described.

Either way, we will be open to all technical solutions that meet requirements including IETF stuff as well as W3C stuff and AnonCreds stuff - and will have an opportunity to road test them during pilots. So unless this argument is decisively resolved before pilots, we'll just design the pilots to test each proposed solution and let the results inform our recommendations.
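Returning to point 2 above, an illustrative (entirely made-up) pair of credentials showing the linked-data join a verifier would make: an identity credential from a trust anchor binds the DID to a business registration, and a product passport issued by that same DID inherits the resulting confidence.

```python
# Identity credential issued by a trust anchor (e.g. a national business register).
identity_vc = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential"],
    "issuer": "did:web:register.example.gov",       # hypothetical trust anchor
    "credentialSubject": {
        "id": "did:abc:123",
        "registeredBusinessNumber": "ABN 123456",   # illustrative claim name
    },
}

# Product passport issued by the business itself.
passport_vc = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential"],
    "issuer": "did:abc:123",
    "credentialSubject": {"id": "https://id.example.com/product/456"},
}

# The verifier joins the two credentials on the shared DID: the passport's issuer
# is the subject of the identity credential, so the passport can be attributed to
# registered business ABN 123456 (after verifying both proofs).
assert passport_vc["issuer"] == identity_vc["credentialSubject"]["id"]
```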

nissimsan commented 2 months ago

@onthebreeze,

  1. Not picking winners sounds good, but only pushes these choices downstream to whoever is looking to us to make recommendations. The article on polyglots argues this point well. It would be easy for us not to choose. I'm guilty of this myself, suggesting we take the easy route. But more ambiguity leads to less interoperability.
  2. You are conflating payload with proofing method. This is not a discussion about support for JSON-LD, but whether the signature is applied at the JSON-level or the expanded LD graph-level.
  3. +1.
  4. We agreed two meetings ago to descope selective disclosure/redaction from UNTP requirements, so let's not surface that as an argument. Also, I don't see how that ties back to performance considerations, which IMO should be a factor for us, e.g. UN Sustainable Development Goal 12 which I referenced earlier.
OR13 commented 2 months ago

So unless this argument is decisively resolved before pilots, we'll just design the pilots to test each proposed solution and let the results inform our recommendations.

I think this is a fine strategy, just be careful to design the test criteria to measure and confirm or dismiss the correct hypothesis.

PatStLouis commented 2 months ago

@OR13 thanks for providing more information behind the proposal

These requirements were pulled in from the DHS SVIP/CBP contract work without much discussion (at least none that I could trace back to). Will the UNTP be an extension of the CBP work or should it specify its own set of requirements catered to its architecture design?

My issue with committing so early to strong requirements is that it takes a solution-first approach with little consideration of the problem to be solved. However, engaging early in arguments for each suggestion is good for laying out the options/opinions.

The embedded vs enveloped discussion is an area of strong disagreement within the W3C community and it's not a decision to make without consideration, as it involves more than just the signature. It will affect how entities communicate with each other and how data will travel. It's still not clear to me how DPPs will move around. I feel it will be a very different data exchange model between entities while the supply chain is being built vs. when it is discovered after the fact (transforming the product vs. discovering how the product was transformed). Not everything can rely 100% on discoverability.

My arguments are mostly around the Conformity Credentials since this is where I will put most of my attention towards.

There are very few implementers involved at the moment and even fewer technical implementers. BC Gov just went through a round of funding to develop a W3C VC data model bridge catered to the AnonCreds specification, led by DSR Corp and other reputable organizations. They are now looking to be amongst the first technical implementers of this specification as an authority, mostly around the conformity credential for permits, leases and licences (and maybe sustainability claims down the line). They are a valuable contributor to have involved this early in the prototyping.

While interesting, the IETF SCITT architecture model seems very different from what I understand this specification to envision. There is a strong focus on discoverability in the UNTP design instead of direct exchanges between partners. Has this been addressed previously?

As for the argument around tooling: the reasoning that we should by default choose whichever technology has the most existing support is an anti-innovation approach. Creating new tools is only a matter of time and investment. Whatever the outcome of this work, BC Gov will make its contributions available where applicable as open-source software through the Aries Cloud Agent Python (ACA-Py) project, which could become an ideal functional agent for future implementers. ACA-Py has already demonstrated interoperability with the VC-API as an issuer/verifier, and an open-sourced controller has been published demonstrating traceability interop, notably covering status lists. ACA-Py was also shortlisted and highly commended as an Open Source digital government identity solution by the UNDP last February.

Mandating JWT instead of JSON-LD for the VC model seems odd when hosting public credentials at an endpoint for discoverability, which is how conformity credentials will likely be handled.

AFAIK selective redaction is still mentioned in the confidentiality section.

Glad this can stimulate interesting conversations.

OR13 commented 2 months ago

COSE and JOSE can both be used to secure arbitrary content types.

Since you mentioned SCITT, that IETF WG chose to focus on COSE, so for example a JWT can only be submitted to a transparency service conforming to SCITT if it is wrapped in a COSE Sign1.

This is similar to how JSON-LD application/vc+ld+json can be signed with JWS or COSE Sign1.

ISO mDoc also uses COSE Sign1.

In order of interest considering innovation, security and performance (size and compute time), I would recommend the following for JSON based credential formats:

JWT... And when they are ready: SD-JWT, JWP

For COSE / CBOR based claimsets:

CWT... And when they are ready: SD-CWT, CWP

There are other ways to sign JSON: you could use PGP, SSH signatures, DSS, DER-encoded raw signatures, etc.

CFRG and JOSE are working on BBS proofs for JSON and CBOR.

The German government has developed repudiable signatures based on key agreement, to reduce the damage of stolen credentials that might have been based on non-repudiable signatures.

It's not reasonable to ask verifiers to support all these mechanisms, and the reality is that some of these schemes have very few independent implementations, or industry penetration, making them poor choices to recommend, if the goal is to make it easy to generate conformant credentials.

It's true, you can define semantic models, and if regulators are convinced of the value of the model, they can mandate a specific serialization and securing mechanism, to try to ensure interoperability and make adoption easy.

That's essentially how ISO mDoc works, instead of JSON-LD, it uses ISO namespaced claims.

The UN could do the same, and if everyone agreed to use the UN vocabulary, a lot of global supply chain security modeling issues might be addressed... Or the model could become a source of frustration due to the choices made and their lack of industry support despite being mandated.

The choices you make when designing a profile, determine how much value it can provide, and also who can help you deliver that value at scale.

It's important to make sure you have sufficient buy in from implementers.

By making choices, you attract support from people who want to work on a particular technology stack and you repel people who don't want to work on a particular stack.

I don't know what the right choices are for this group, but I've got opinions about what technologies I want to work on.

FWIW, I think the traceability work has outgrown its incubation period in the W3C CCG, and I would love to see it shut down, and the parts that are actually useful move to an organization that can properly support an international view of supply chain risk management.

onthebreeze commented 2 months ago

@onthebreeze,

  1. Not picking winners sounds good, but only pushes these choices downstream to whoever is looking to us to make recommendations. The article on polyglots argues this point well. It would be easy for us not to choose. I'm guilty of this myself, suggesting we take the easy route. But more ambiguity leads to less interoperability.
  2. You are conflating payload with proofing method. This is not a discussion about support for JSON-LD, but whether the signature is applied at the JSON-level or the expanded LD graph-level.
  3. +1.
  4. We agreed two meetings ago to descope selective disclosure/redaction from UNTP requirements, so let's not surface that as an argument. Also, I don't see how that ties back to performance considerations, which IMO should be a factor for us, e.g. UN Sustainable Development Goal 12 which I referenced earlier.

Thanks @nissimsan - agree that ambiguity challenges interop. (1) Let's aim to be as specific as possible but where we are making choices between options then make sure our recommendations are informed by evidence from testing. (2) Noted about JSON-LD signatures. (3) - cool. (4) Regarding selective redactions, yep, it's parked for now. Only mentioned it because I thought maybe that performance graph included some selective disclosure methods.

JoOnGT commented 2 months ago

My suggestion is that the protocol definition should be as decentralised as possible, but not necessarily aligned to any one higher level protocol (KERI, OIDC4VC, DIDComm). With that in mind, I would suggest that this initiative should test out the Trust Spanning Protocol which has been designed under the Trust Over IP Foundation. The first version of the protocol is soon to be in implementation draft.

Locking in any one set of VC definitions, VIDs (verifiable identifiers) and business level protocol isn't a good idea at this stage IMHO. However, we would look to test this with an initial set of agreed technologies. A lot of the options have been identified in the threads above.

nissimsan commented 2 months ago

Discussed on the call. Next step actions to close this issue:

PatStLouis commented 2 months ago

Unfortunately the Thursday call falls around the 3am mark for me so I'll be unable to attend those.

For BC's participation in a first round of pilot implementation, we will be looking at the following:

Publish a signed BCMinesActPermitCredential

The issuing platform will aim to be compliant with the following test-suites:

We will leverage the EECC vc-verifier project for interoperability demonstration, as well as a list of vc-api implementers, notably the univerifier from danubtech.

@onthebreeze @nissimsan if you haven't come across the EECC vc-verifier I strongly recommend having a look. It's based on libraries from Digital Bazaar, authors of the VC/DID specifications. It's an open source project that could be forked and leveraged by the UNTP spec as it fits the model very well.

It provides a UI where you input a link to a publicly hosted credential and it will verify it, adding functionalities such as downloading it as a PDF or JSON.

They have an example of a product passport credential

They also have an interesting feature where the public credential can have redacted components, and a viewer can authenticate through an OIDC4VP invitation to reveal additional attributes.

I would like to add a note about backwards compatibility. While listing VCDM 2.0 as a requirement is a reasonable choice, it only moved to Candidate Recommendation in February. There are most likely already credentials out there using VCDM 1.1 which could be leveraged in this work. Something to keep in mind...

onthebreeze commented 2 months ago

Some proposed business requirements for Verifiable Credentials technology recommendations.

  1. VC technology recommendations must support tamper detection, issuer identity verification, and credential revocation so that verifiers can be confident of the integrity of UNTP credentials. (This one is just a statement of the obvious.)
  2. VC technology recommendations for issuing UNTP credentials should be as narrow as practical and should align with the most ubiquitous global technology choices so that technical interoperability is achieved with minimal cost.
  3. VC technology recommendations should support backwards compatibility so that credentials issued in support of long-lived goods such as EV batteries or construction products can still be verified years after issue.
  4. VC technology recommendations must support both human readable and machine readable credentials so that uptake in the supply chain is not blocked by actors with lower technical maturity.
  5. VC technology recommendations must support the discovery and verification of credentials from product identifiers so that verifiers need not have any a priori knowledge of or relationship to either the issuers or the subjects of credentials. Inanimate objects do not create verifiable presentations.
  6. VC technology recommendations must support the use of linked data so that data from multiple independent credentials can be aggregated into a verifiable graph that represents the end-to-end supply chain.
  7. VC technology recommendations should value performance so that graphs containing hundreds of credentials can be traversed and verified efficiently.
  8. VC technology recommendations must meet any regulatory requirements that apply in the countries in which credentials are issued or verified.
  9. VC technology recommendations should support the capability for any supply chain actor to redact data in any credential without impacting the cryptographic integrity of the credential so that actors can hide any information they deem to be commercially sensitive.
  10. what else?

An important principle in all this is probably that we should be fairly specific in recommendations for issuing whilst accommodating greater variation for verification - so that we drive consistency and maximise interoperability at the same time. So for example we probably would not recommend issuing any UNTP credentials as ISO mDL - but a discovered verification graph may well include some mDL identity credentials that are important to verify.

mxshea commented 2 months ago

Steve,

What about interoperability, and providing assurance across ecosystems?

A question on very long-lived VCs, particularly if the DID is not blockchain-based: what happens if the issuing identifier is no longer available (company out of business…)? E.g. did:web is used, but the website is no longer available. Does this push a recommendation to use persistent storage means?


JoOnGT commented 2 months ago

Thanks for this initial list Steve.

I've provided comments below and suggestions for additional requirements. They obviously need to be grouped and structured a bit better (by interactions, governance, operations and change, non-functional, commercial) so that we can ensure that we've got sensible coverage. We need to develop these as input into the selection process for technical approaches, standards and protocols.

Should these be their own PR?

Cheers, Jo

  1. VC technology recommendations must support tamper detection, issuer identity verification, and credential revocation so that verifiers can be confident of the integrity of UNTP credentials. (This one is just a statement of the obvious.) - Good. These need to be separate requirements.

  2. VC technology recommendations for issuing UNTP credentials should be as narrow as practical and should align with the most ubiquitous global technology choices so that technical interoperability is achieved with minimal cost. - The ability to utilise APPROPRIATE existing and evolving global standard protocols and data standards is a good intention, but what makes them appropriate is the alignment of these to the other requirements (adoption, performance, cost, dependencies etc.). The type of capabilities would include - decentralised activities, long-lived credentials, multi-credential presentation, selective disclosure, low cost, resilient, (appropriately) performant, open, adoptable, evolution capable and phased migration capable etc... The use of ubiquitous (widely used) standards and technical protocols will be important for ease of adoption - this may influence technology choice at each point in the implementation lifecycle.

  3. VC technology recommendations should support backwards compatibility so that credentials issued in support of long-lived goods such as EV batteries or construction products can still be verified years after issue. - This needs careful consideration - certain credentials should be able to support longer lifecycles - not all will necessarily be long-lived. Backwards compatibility is an old school way of thinking. The ability to continue to verify older versioned credentials should be possible. If enhancements are required, new or additional credentials should be able to be issued and included in multi-credential presentations.

  4. VC technology recommendations must support both human readable and machine readable credentials so that uptake in the supply chain is not blocked by actors with lower technical maturity. - sufficiently trustworthy human-readable (you can't see cryptography), technically verifiable, and digitally native credential versions. Note that we should minimise the dependency on service providers and maximise the potential for solutions to be provided by different service providers (based on common requirements and data and protocol standards)

  5. VC technology recommendations must support the discovery and verification of credentials from product identifiers so that verifiers need not have any a-priori knowledge of or relationship to either the issuers or the subjects of credentials. Inanimate objects do not create verifiable presentations. - standard interaction mechanisms must be able to be determined at the point of interaction (OOB interactions, issuance, presentation)

  6. VC technology recommendations must support the use of linked data so that data from multiple independent credentials can be aggregated into a verifiable graph that represents the end-to-end supply chain.- linked data implies the ability to enhance credentials as required for usage and governance reasons (language, etc.). The ability to combine credentials from different sources and provided at different points in the supply chain is something different

  7. VC technology recommendations should value performance so that graphs containing hundreds of credentials can be traversed and verified efficiently. - different activities will have different performance requirements. Those activities that require the use of high volumes of credential use have to be cost and performance efficient. High performance requirements implies higher degrees of decentralisation and autonomic actions (not requiring centralised or provided services)

  8. VC technology recommendations must meet any regulatory requirements that apply in the countries in which credentials are issued or verified. - this is a good requirement, but it's a bit broad. (a) The credential must be issued under a defined jurisdiction. (b) Governance of the SC ecosystems must support the ability to present credentials across jurisdiction boundaries and (c) verifiers must be able to determine the equivalence of credentials issued in different jurisdictions and the impact of these on use of the credentials outside of their initiating jurisdiction.

  9. VC technology recommendations should support the capability for any supply chain actor to redact data in any credential without impacting the cryptographic integrity of the credential so that actors can hide any information they deem to be commercially sensitive. - (a) access to claims within a credential must be able to be restricted - not necessarily redacted, but not allowed to be requested - EU uses trust lists (b) secondary credentials may be added to the supply chain process by trusted (anchor) intermediaries, to simplify the subsequent supply chain activities

  10. what else?

    1. verifiable (digital) credentials should replicate / emulate existing physical credentials and should not look to create other (secondary / service) actors unnecessarily
    2. credentials of different types (data, presentation protocols) should be able to be requested as part of a combined presentation activity, with as little dependency as possible between underlying protocols and credential definitions
    3. verification (business and technical) logic should ideally (must) be defined and governed, accessible to verifiers
    4. presentation logic should be minimised in the Holder actor agent
    5. additional organisation identity and delegate-relationship verification using credentials must be able to be added as part of the presentation and verification logic.
    6. open source definitions must allow the design and build of solutions by a large number of solution providers - this implies that test harnesses and accreditation of new solutions must be possible (so that interoperability process testing must be possible)
    7. (a) verifiers in the supply chain may adopt updated versions of the data structures and protocols without demanding that all verifiers are able to consume these at the same rate. (b) Issuers must be able to issue a defined number of credential versions across defined protocols at any one time.
    8. evolutions in data encoding and key management must be able to be adopted in conjunction with existing protocols and data standards.

An important principle in all this is probably that we should be fairly specific in recommendations for issuing whilst accommodating greater variation for verification - so that we drive consistency and maximise interoperability at the same time. So for example we probably would not recommend issuing any UNTP credentials as ISO mDL - but a discovered verification graph may well include some mDL identity credentials that are important to verify. - Correct - this implies presentation logic interoperability, not protocol interoperability or data uniformity.
