Note: in ISO 18013-5, the meaning of "mandatory" in the data structure is that the mDL app must have that data element in its storage area (i.e. issuers are required to include that data element when issuing an mDL). It does not apply to data element request/response with a verifier. Your question is probably sound, but the example is not.
On Tue, Jul 5, 2022 at 8:05 AM David Chadwick @.***> wrote:
There are many different ways of implementing selective disclosure. Some send the whole credential with blinded property names and values, others send atomic credentials, others send assertions and proofs that the assertions are correct etc. If the verifier receives a selectively disclosed credential which has a credentialSchema property in it, in which some properties are said to be mandatory and some are optional (e.g. the ISO mDL specifies 11 mandatory attributes and 22 optional ones) but the verifier only requests a subset of these properties, and not all the mandatory ones (e.g. asking for date of birth from a driving license), then how should the credentialSchema property be utilised by the verifier, given that the received credential clearly does not match the credentialSchema as it is missing some mandatory attributes? I think we need to add some clarifying text to the data model to address this issue, because currently the DM states "data schemas that provide verifiers https://www.w3.org/TR/vc-data-model/#dfn-verifier with enough information to determine if the provided data conforms to the provided schema."
As I understand mDL, it comprises a set of random numbers for both property types and values (from the perspective of the verifier); therefore it is not possible to determine what it contains. So the question still remains: if the verifier only has a subset of properties revealed to it, what are the rules (and the description we should insert into the DM) describing how the verifier determines whether the presented VC matches the `credentialSchema`?
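For illustration, a selectively disclosed credential might arrive looking something like this (a made-up sketch; property names and URLs are illustrative, not taken from ISO 18013-5 or the VCDM), where the referenced schema marks several subject properties as mandatory yet only one is revealed:

```json
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential", "ExampleDrivingLicenceCredential"],
  "issuer": "did:example:dmv",
  "credentialSchema": {
    "id": "https://example.org/schemas/driving-licence.json",
    "type": "JsonSchemaValidator2018"
  },
  "credentialSubject": {
    "birth_date": "1990-01-01"
  }
}
```

If the schema at that URL lists family_name, portrait, etc. as required, naive schema validation of this disclosed credential fails even though the issuer's original credential was well formed.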
At the meeting on 31 Aug the discussion indicated that another property, "presentationSchema", would be more valuable for telling the verifier what properties must and may be in the selectively disclosed credential. Thus the verification process would tell the verifier whether the presented credential conformed to the presentationSchema or not, and the verifier could then determine whether this was valid for their business use case: e.g. a verifier can decide to accept a presented credential that failed verification, or to reject a credential that passed it.
The issue was discussed in a meeting on 2022-08-31
Note that presentationSchema for VPs is different to presentationSchema for VCs. The former (issue #839) indicates what the VP does contain, whereas the latter (this issue) says what the selectively disclosed credential must or may contain.
Here is a suggestion as a resolution of this issue. Note, I am only addressing credential verification and not credential validation, as the verifier can determine its own rules as to whether a (un)verified credential is valid or not.
However I am not convinced that a discloseSchema is required, provided that presented selectively disclosed credentials must always contain metadata properties such as ToU, Evidence, ExpirationDate, etc., because when requesting someone's age from a driving license the number of points is irrelevant.
First thought, I would suggest `disclosureSchema` over `discloseSchema`, if we pursue this direction.
Then, it seems that `disclosureSchema` itself must always be included in its value, and so always disclosed, else how is the Verifier to know that the Issuer declared it? Or is `disclosureSchema` only meant to force disclosure of those properties from the VC, whenever a VP includes any properties from the VC?
Next, I don't think I understand what you mean by "a list of credential properties that must always be present in presented credentials (whether selectively disclosed or not)". Does this mean that those credential properties will be included/revealed/disclosed whether or not the Holder selects them for disclosure? That would seem to fly in the face of selective disclosure, unless the Holder is at least alerted to the fact before they disclose things they've not selected.
Further, you say "the discloseSchema is used to ensure that all the properties that the issuer says must be presented have been presented (e.g. points on a driving license, or TermsOfUse)" and I disagree strongly with the idea that when I (selectively) present my driving license as proof of age that I must also present the violation points I've been assigned thereon, as those points are entirely irrelevant to this presentation of the license — as you yourself accede in your final paragraph.
Bottom line, I think this suggestion/idea needs significant refinement before it can be considered viable.
After more thought I do not think a discloseSchema is needed. Rather I think the following clarification of credentialSchema is needed for verifiers.
Whilst the `credentialSchema` property may be used to ensure that an issued credential is well formed, a verifier may only use it to determine that all the presented subject properties in a selectively disclosed credential are allowed to be there (e.g. a university degree credential does not contain bank account details). Any "MUST be present" schema directives are irrelevant to a selectively disclosed verifiable credential and MUST be ignored by the verifier.
I think this should be added as a second NOTE under the current one in clause 5.4.
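For concreteness, a verifier implementing that NOTE might do something like the following (a non-normative sketch; helper and field names are mine, not from the spec):

```python
# Sketch of the proposed rule: ignore "MUST be present" (required)
# directives and only check that every disclosed subject property is
# one the schema allows.

def allowed_subject_properties(schema: dict) -> set:
    """Property names the JSON Schema permits under credentialSubject."""
    subject = schema.get("properties", {}).get("credentialSubject", {})
    return set(subject.get("properties", {}))

def disclosed_properties_allowed(credential: dict, schema: dict) -> bool:
    disclosed = set(credential.get("credentialSubject", {})) - {"id"}
    # Note: the schema's "required" list is deliberately NOT checked here.
    return disclosed <= allowed_subject_properties(schema)
```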
Bottom line, I think this suggestion/idea needs significant refinement before it can be considered viable.
+1. Different use cases require different sets of claims, which is why selective disclosure is important in the Issuer-Holder-Verifier model. The Issuer cannot predict all those use cases, and I do not understand why the Issuer would instruct the Holder to always release certain claims.
if the verifier only has a subset of properties revealed to it, what are the rules to describe how the verifier determines if the presented VC matches the credentialSchema?
The Verifier uses a schema that includes the subset of the claims that it can (legally, trust-framework-wise, etc.) request and receive from the Holder. I do not see the need for a separate schema or a new property.
also..
As I understand mDL, it comprises a set of random numbers for both property types and values (from the perspective of the verifier); therefore it is not possible to determine what it contains.
Yes, claim values are hashed, but the "mandatory claims" will 100% be included in those hashes, just like Andrew said. 18013-5's "mandatory" does not mean that those claims always have to be presented to the Verifier; that would be pretty privacy-invading.
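For readers unfamiliar with the mechanism, the salted-digest pattern used by ISO 18013-5 (and, similarly, SD-JWT) looks roughly like this; the encodings and structure here are simplified sketches, not the actual CBOR/MSO format:

```python
import hashlib
import json
import os

def blind_claim(name: str, value) -> tuple[tuple, str]:
    """Return (disclosure, digest): the holder keeps the disclosure,
    the issuer signs only the digest."""
    salt = os.urandom(16).hex()
    disclosure = (salt, name, value)
    digest = hashlib.sha256(json.dumps(disclosure).encode()).hexdigest()
    return disclosure, digest

# The issuer signs the full list of digests (mandatory claims included),
# so the verifier sees only "random numbers" until the holder chooses to
# reveal a (salt, name, value) tuple, which the verifier re-hashes and
# matches against the signed digest list.
```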
First thought, I would suggest disclosureSchema over discloseSchema, if we pursue this direction.
+1 (if we ever add this property)
@David-Chadwick
Whilst the credentialSchema property may be used to ensure that an issued credential is well formed, a verifier may only use it to determine that all the presented subject properties in a selectively disclosed credential are allowed to be there (e.g. a university degree credential does not contain bank account details).
You want verifiers to test whether they're allowed to receive the information that has been presented to them, and then ignore the stuff that they're not allowed to know?
This is a broken process. The cat is already out of the bag. If anything, the presenter must be prevented from including bank details in an academic credential, but I'm not sure even this is viably or generally implementable.
@TallTed What is the purpose of the credentialSchema property?
@David-Chadwick
What is the purpose of the `credentialSchema` property?
A fine question.
I didn't introduce the `credentialSchema` property. Dim memory suggests you did?
If its purpose is not clear in current documents, then some research would seem to be in order, to see what purpose it was intended to serve.
You've suggested that restricted information — e.g., banking information — from a VC may be included in a selective disclosure VP, and that verifiers should check to make sure they have not received any such restricted info. THIS IS NOT VIABLE. I think it's really no different from presenting a non-selective-disclosure VP and telling verifiers they must discard some fields from it, which I hope you'll agree is equally nonsensical.
In my opinion credentialSchema is there to check that the credential is well formed. JSON schemas say which properties must or may be present in the credential and what their syntaxes are, so a parser can differentiate between integers, strings, URLs, images etc. and know that a credential is wrong if a mandatory property is missing (such as the type). But of course this does not work with SD, because mandatory properties may not be revealed. However, the credentialSchema can still tell the verifier if an alien property is present which is not specified in the credentialSchema. Why is this important, you may ask? Because communities of users (aka federations) may specify certain types of credential (e.g. a COVID-19 certificate) at the community level, and thousands of issuers may issue them. A verifier needs to know if a VC received from an issuer is well-formed or not. But if the VC is selectively disclosed then the verifier can only use the schema to know if alien properties are present, not if mandatory properties are missing. I hope this helps.
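To make the two roles concrete, here is an illustrative JSON Schema fragment (field names made up): `required` captures the "mandatory properties" role that breaks under selective disclosure, while `additionalProperties: false` is what still lets a verifier detect alien properties:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "credentialSubject": {
      "type": "object",
      "properties": {
        "family_name": { "type": "string" },
        "birth_date": { "type": "string", "format": "date" },
        "portrait": { "type": "string" }
      },
      "required": ["family_name", "birth_date"],
      "additionalProperties": false
    }
  }
}
```

A selectively disclosed subject containing only birth_date fails the required check, but would still be caught if it smuggled in, say, bank_account.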
@TallTed What is the purpose of the credentialSchema property?
I've been thinking about #895 and about what specifically would prevent ACDC from complying with VCDM. As @SmithSamuelM mentions in the meeting summarised here, ACDC uses composable JSON schema, so implementing ACDC to conform with the VCDM would use `credentialSchema` with type `JsonSchemaValidator*`.
I also thought `credentialSchema` could be used in this case to "lock down" `@context` to exclude any non-integrity-protected references. This led me to realise that the current spec (transitively) requires non-integrity-protected references, which I think is an issue.
It's difficult to decide where to start to voice the security concerns of a json-ld document with an `@context`, unless in every case the `@context` is to be ignored (which defeats the extensibility reasons for using json-ld in the first place). We are faced with a conundrum. To avoid being accused of creating a text wall, I will only list a couple of concerns.
1) `@context` is normatively dynamic (see https://www.w3.org/TR/json-ld11/#the-context).
`@context` provides an IRI/URL mechanism for replacing terms with external resources which are dynamic. Any compromise of the external resources results in an undetectable compromise of any VC that uses an `@context` that points to those compromised resources. One could fix up `@context` resolution to use a non-normative URI mechanism that forces the URIs to include an integrity hash of the resources, which must be a map of hashes all the way up to the `@context`, so that any path through the term space of the `@context` is always integrity hashed.
This is a non-normative mechanism for protecting `@context` resources, and it defeats the main advantage of using `@context` in the first place, which is dynamic extensibility. So I see no way to lock down `@context` unless we completely change the normative definition of json-ld.
Composition and Local (non-network location) schema identifiers are two vital (to ACDC) normative properties of JSON Schema that schema.org does not share.
Local immutable JSON Schemas are essential to schema integrity. There is no way to lock down `@context` in a normative way. In contrast, local JSON Schema is a normative use case. It's widely used that way, and the JSON Schema spec is very clear that a JSON Schema identifier is nominally not a network location even when expressed as a URL. A schema packager may optionally use network locations, but that is up to the packager. Local JSON Schema can be locked down easily by including a hash in the local schema identifier. This is how ACDC uses JSON Schema. But then there is no reason to ever use `@context` in a normative way.
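A simplified sketch of the "hash in the local schema identifier" idea follows. This is not ACDC's actual SAID computation (which embeds the digest in the schema itself before hashing); the conventions here are made up purely to show the lock-down property:

```python
import hashlib
import json

def schema_digest(schema: dict) -> str:
    # Canonicalize so the digest is stable across key ordering/whitespace.
    canonical = json.dumps(schema, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode()).hexdigest()

def resolve_schema(schema_id: str, candidate: dict) -> dict:
    # However the schema bytes were obtained (cache, bundle, network),
    # any mutation is detectable from the identifier alone.
    if schema_id != schema_digest(candidate):
        raise ValueError("schema does not match its integrity identifier")
    return candidate
```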
To elaborate, JSON-LD does not normatively recognize any other schema besides schema.org. Of course we can make an exception and make it normative for W3VC but we are doing pretty invasive surgery on json-ld when we do that.
I spent the better part of a couple of days a while ago attempting to use `@context` in a json-ld document but populated using JSON Schema, not schema.org. I could not find a single published example of how that would be accomplished. That may have changed in the last year or so. But it's very clear that the JSON-LD community, the JSON-LD spec, and its associated tooling are not friendly to JSON Schema.
It is entirely nonsensical to talk about authenticity in the context of data transmitted over the internet in any other terms than cryptographically verifiable attribution to some digital identifier. To my knowledge the only practical cryptographic mechanisms for securely attributing data to a digital identifier require serialization of that data to which a verifiable cryptographic commitment is made. And any tampering with that data will break the verifiability of that commitment. So authenticity assumes integrity as a hard constraint. This means that any in-place dynamism which is indistinguishable from tampering cannot be allowed within the scope of a verifiable commitment. Extensibility can be had, but only by chaining, appending, or otherwise adding on to previous commitments, not by in-place extension.
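A minimal sketch of what this paragraph describes, a verifiable commitment over an immutable serialization, using the pyca/cryptography library (key handling and encoding details omitted):

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()
claims = {"holder": "did:example:123", "age_over_18": True}

# The commitment is made over a fixed serialization of the data.
serialized = json.dumps(claims, sort_keys=True, separators=(",", ":")).encode()
signature = issuer_key.sign(serialized)

issuer_key.public_key().verify(signature, serialized)  # verifies: authentic
try:
    # Any in-place mutation is indistinguishable from tampering...
    issuer_key.public_key().verify(signature, serialized + b" ")
except InvalidSignature:
    print("...and breaks the verifiable commitment")
```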
Now it gets more complicated. The commitments to VCs must allow for multiple sources with different loci of control and different cryptographic artifacts of verifiability. So even though one can expand multiple VCs expressed as JSON-LD into a single RDF graph and create an "integrity proof" on all or part of that expansion, recreating that proof assumes one source (the entity that made the expansion), unless you segment your RDF graph into a set of graphs, one for each source, because the artifacts of the original source commitments must be kept around in order to verify authenticity, not merely integrity, and that defeats the gains of having the complex RDF integrity proofs in the first place.
There is a subtle sleight-of-hand involved here. Verifiable authenticity of data in motion is not the same as verifiable authenticity of data at rest. One can have an authentic communications channel where the data in motion has been verified as authentic prior to storing in a local database. But the holder of that database cannot prove to a downstream user that the data in the database is authentic to the source, unless the authenticity mechanism applies to the data at rest. This means that merely proving data integrity of the data at rest is not tantamount to proving authenticity to the original source of the data now at rest.
What that all means is that we should start with immutable data objects including immutable schema to which we can attach proof of authenticity at rest and build from there.
The easiest interoperability path I see is to use ACDCs as an authenticity layer that conveys an opaque payload (opaque to the authenticity layer). That payload may very well be JSON-LD, but only an immutable expression of a JSON-LD document. Any dynamic in-place expansion breaks strict authenticity-at-rest.
A common approach to protocol layering is to add an authorization sublayer to the authentication layer. This authorization sublayer would satisfy the majority of VC use cases where the VC is truly a "credential", i.e. evidence of an entitlement. Authorization is nonsensical without authentication, hence why it's a sublayer. In the authorization case, the ACDC must expose the type of authorization. Forensic (enforcement) information could be opaque to the verifiability of the type of authorization and could therefore be relegated to the payload. The authentication layer and authorization sublayers do not benefit from an open world model, or do not benefit enough to justify the complexity of an open world model. The artifacts of the auth layer can be kept around by the application layer, which can add them to an open world model. But the open world is necessarily opaque to the auth layer. The dynamic open world data model should not be pushed down the stack, because it then makes security very, very difficult. And now we have come full circle.
TL;DR: excerpts from the relevant portions of the JSON-LD spec follow.
[Contexts](https://www.w3.org/TR/json-ld11/#dfn-context) can either be directly embedded into the document (an [embedded context](https://www.w3.org/TR/json-ld11/#dfn-embedded-context)) or be referenced using a URL. Assuming the context document in the previous example can be retrieved at https://json-ld.org/contexts/person.jsonld, it can be referenced by adding a single line and allows a JSON-LD document to be expressed much more concisely as shown in the example below:
[EXAMPLE 5](https://www.w3.org/TR/json-ld11/#example-5-referencing-a-json-ld-context): Referencing a JSON-LD context
```json
{
  "@context": "https://json-ld.org/contexts/person.jsonld",
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/",
  "image": "http://manu.sporny.org/images/manu.png"
}
```
The referenced context not only specifies how the terms map to [IRIs](https://tools.ietf.org/html/rfc3987#section-2) in the Schema.org vocabulary but also specifies that string values associated with the homepage and image property can be interpreted as an [IRI](https://tools.ietf.org/html/rfc3987#section-2) ("@type": "@id", see [§ 3.2 IRIs](https://www.w3.org/TR/json-ld11/#iris) for more details). This information allows developers to re-use each other's data without having to agree to how their data will interoperate on a site-by-site basis. External JSON-LD context documents may contain extra information located outside of the @context key, such as documentation about the [terms](https://www.w3.org/TR/json-ld11/#dfn-term) declared in the document. Information contained outside of the @context value is ignored when the document is used as an external JSON-LD context document.
A remote context may also be referenced using a relative URL, which is resolved relative to the location of the document containing the reference. For example, if a document were located at http://example.org/document.jsonld and contained a relative reference to context.jsonld, the referenced context document would be found relative at http://example.org/context.jsonld.
2. Conformance
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key words MAY, MUST, MUST NOT, RECOMMENDED, SHOULD, and SHOULD NOT in this document are to be interpreted as described in [BCP 14](https://tools.ietf.org/html/bcp14) [[RFC2119](https://www.w3.org/TR/json-ld11/#bib-rfc2119)] [[RFC8174](https://www.w3.org/TR/json-ld11/#bib-rfc8174)] when, and only when, they appear in all capitals, as shown here.
A [JSON-LD document](https://www.w3.org/TR/json-ld11/#dfn-json-ld-document) complies with this specification if it follows the normative statements in appendix [§ 9. JSON-LD Grammar](https://www.w3.org/TR/json-ld11/#json-ld-grammar). JSON documents can be interpreted as JSON-LD by following the normative statements in [§ 6.1 Interpreting JSON as JSON-LD](https://www.w3.org/TR/json-ld11/#interpreting-json-as-json-ld). For convenience, normative statements for documents are often phrased as statements on the properties of the document.
This specification makes use of the following namespace prefixes:
Prefix | IRI
-- | --
dc11 | http://purl.org/dc/elements/1.1/
dcterms | http://purl.org/dc/terms/
cred | https://w3id.org/credentials#
foaf | http://xmlns.com/foaf/0.1/
geojson | https://purl.org/geojson/vocab#
prov | http://www.w3.org/ns/prov#
i18n | https://www.w3.org/ns/i18n#
rdf | http://www.w3.org/1999/02/22-rdf-syntax-ns#
schema | http://schema.org/
skos | http://www.w3.org/2004/02/skos/core#
xsd | http://www.w3.org/2001/XMLSchema#
These are used within this document as part of a [compact IRI](https://www.w3.org/TR/json-ld11/#dfn-compact-iri) as a shorthand for the resulting [IRI](https://tools.ietf.org/html/rfc3987#section-2), such as dcterms:title used to represent http://purl.org/dc/terms/title.
I want to add one other comment which I think is relevant. There are two approaches to cryptographic authenticity, i.e. secure attribution of data transmitted over the internet: 1) Cryptographically verifiable commitments to immutable data serializations that are securely attributable to a digital identifier: essentially, immutable serializations of data with attached digital signatures of some type (this includes ZKPs).
2) Cryptographically verifiable transformations of cryptographically verifiable data. This is highly experimental. But theoretically, if I have a verifiable algorithm that mutates data, then given that I can establish the authenticity of the source input data and also verify the authenticity of the transformation of that data (i.e. the transformation was authorized in a verifiable way), I can provenance the chain of transformations of that data from its verifiable source through every transformation without having to keep around the intermediate forms. I can provide a proof to a downstream user of the data provenance transformation chain without having to keep around copies of all the intermediate transforms, because the downstream user can repeat the verifiable transformations. The verifiable transformation code, as a verifiable algorithm, has to be kept around, so this really only makes sense for single-instruction-multiple-data type applications like AI. This enables a verifier to reproduce the sequence of transformations given the verifiable source inputs and verifiable algorithms.
Compared to number 1, tooling that supports number 2 is relatively experimental, not well proven, and much more difficult, complicated, harder to adopt, and risky. Technologically we are infants when it comes to verifiable algorithms for data transformations. We have had 30 years to figure out how to make brute-force breaking of ECC digital signatures and hashes computationally infeasible. These two are all the crypto we need for number 1.
It's easy to hand-wave number 2 merely because it sounds cool, but it's not cool if it's risky and hard to adopt.
RDF integrity proofs are more like number 2 than number 1. They are relatively new and therefore risky on a cryptographic time scale. And as I explained above, they don't buy us much, because it's authenticity at rest we care about, not merely integrity at rest. And most of the VC use cases are more compatible with number 1: that of an authorization sublayer to an authentication layer that merely depends on digital signatures for verifiable authenticity. And instead of verifiable algorithms to provenance transformations, we just build a verifiable data structure made up of the results of the transformations, appended in a chain or tree. This works today, no muss, no fuss, and no fancy mechanisms. This seems like the practical path forward, or at the very least the only reasonable starting point.
We have been calling this approach of using append-to-extend verifiable data structures the Authentic Web. Because, in our opinion, the primary reason the internet is broken is not because we can't interoperate around the semantics of the data, but because we can't trust the provenance of that data in the first place. So let's decide on an authenticity mechanism, i.e. a trust spanning layer for the internet by which we can establish the authenticity of data. Given that authenticity layer as conveyor, we can convey whatever other facts we want that are opaque to that layer. This makes the authenticity layer relatively simple. And we solve the provenance problem without complicating it with all the other things one wants to do with the conveyed data once its authenticity has been established.
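A toy illustration of append-to-extend (not the KERI/ACDC wire format; purely to show that extension adds new commitments rather than mutating old ones):

```python
import hashlib
import json

def digest(entry: dict) -> str:
    canonical = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def append(chain: list, data: dict) -> None:
    # Each new entry commits to the digest of the previous entry;
    # earlier entries are never edited in place.
    prev = digest(chain[-1]) if chain else None
    chain.append({"prev": prev, "data": data})

def verify(chain: list) -> bool:
    return all(chain[i]["prev"] == digest(chain[i - 1])
               for i in range(1, len(chain)))
```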
Extensibility can be had, but only by chaining, appending, or otherwise adding on to previous commitments, not by in-place extension.
Does this mean one cannot simply "convert" (and by convert I mean map to a different data model) a "Verifiable Credential" to an "AnonCred" and vice versa, but must perform some one-way function to do so?
@SmithSamuelM Please edit your https://github.com/w3c/vc-data-model/issues/890#issuecomment-1294163332 and put code fences (single backticks, i.e., `, are best for this) around each and every instance of `@context` (making it `@context`), such that that poor GitHub user ceases to be pinged for every interaction with this thread, in which they are only participating by force.
@David-Chadwick to review PR on issue #934 in the light of this requirement and propose concrete text in the schemas advanced concepts section, if any. Once those are done, will mark pending-close.
@SmithSamuelM -- Please return once more to https://github.com/w3c/vc-data-model/issues/890#issuecomment-1294163332 and edit it to wrap each and every instance of `@context` in backticks (`), becoming `@context` in your edit panel, so that that GitHub user ceases to receive pings for every update to this thread in which they are not a willing participant. (You've already done this for some instances, but that is not sufficient.)
The issue was discussed in a meeting on 2022-11-09
Seems blocked by the working group having not accepted a solution that supports selective disclosure.
I suggest closing until such an item exists, or leaving open until this item can be addressed.
It's like "potentially compatible with https://github.com/w3c-ccg/ldp-bbs2020/"... but you'll never know until we have a formal work item.
I could write text that would answer this regarding BBS LDPs... but it's not clear where we would put that text.
Related to #999 and other issues
Seems blocked by the working group having not accepted a solution that supports selective disclosure.
I suggest closing until such an item exists, or leaving open until this item can be addressed.
Along with @OR13, I don't see an answer that can be real here until we have a formal work item to consider.
@AlexAndrei98
Extensibility can be had, but only by chaining, appending, or otherwise adding on to previous commitments, not by in-place extension. Does this mean one cannot simply "convert" (and by convert I mean map to a different data model) a "Verifiable Credential" to an "AnonCred" and vice versa, but must perform some one-way function to do so?
One way to use AnonCreds with ACDC's append-to-extend is to treat an AnonCred as a ZKP corroboration of the claims in an ACDC created at presentation time by the presenter. The presenter composes an ACDC with the claims the presenter wants to disclose to the verifier. These are issued by the presenter. (In ACDC, every disclosure is via ACDC, not some custom presentation exchange data format.) The presenter can then attach an AnonCred disclosure as corroboration that some issuer also made the same claims to some link secret under the control of the presenter. In ACDC, proofs (signatures) are attached. This makes it easy to do multi-sig proofs, endorsements, and ZKP corroborations without transforming or converting. An attached corroboration is the correct way to use AnonCreds IMHO: not as a VC itself, but as a ZKP in support of the claims in the VC.
I think this is truer to the spirit of how @dhh1128 (Daniel Hardman) describes AnonCreds ZKPs as meant to be used: https://daniel-hardman.medium.com/response-to-kaliyas-being-real-post-13fddb9410f0
Agreed at F2F that the PR will create a second property, the verifier's schema, which tells the verifier what the schema of the disclosed VC will be.
The issue was discussed in a meeting on 2022-09-15
I have been looking at the editorial changes that are needed in order to enact the decision made at the F2F meeting, and it is hard. This is because the current credentialSchema is used for two different purposes:
I don't like any of those options :(
It feels like this should be put on hold until we make progress on crypto suites.
I suspect a concrete example with BBS or SD-JWT will make this obvious in retrospec
BBS or SD-JWT will make this obvious in retrospec
Just wanted to say that I read Orie's post and had an unexpected smile. I am the most prolific source of typos that I know, but mine are never as clever and fun as this. I am intrigued by the idea of "retrospec." Since hindsight is 20/20, let's write one of those, perhaps harking back to the '80s or even the '70s. No flowers and bell bottoms, though. ;-)
I take it for granite that RDF has a retro vibe.
I can do this all day.
I've raised PR #1112 in an attempt to address this issue.
It falls short of defining a new property (which I thought was best in light of #1082), but if folks would like, perhaps we could include `requestSchema` in the table of reserved properties.