w3c / did-core

W3C Decentralized Identifier Specification v1.0
https://www.w3.org/TR/did-core/

need to clarify revocation vs. rotation #386

Closed dhh1128 closed 3 years ago

dhh1128 commented 3 years ago

Creating a new issue to track a tangent to #382.

There appears to be a misalignment in the community about the semantics of changing a key associated with assertionMethod in a DID doc. The relevant comments from #382:

https://github.com/w3c/did-core/issues/382#issuecomment-685248497 https://github.com/w3c/did-core/issues/382#issuecomment-685629401

Tagging @jandrieu and @SmithSamuelM who have already commented on the topic.

dhh1128 commented 3 years ago

@SmithSamuelM already commented on this, but I want to directly answer @jandrieu's question.

The whole point of digital signatures is that they are non-repudiable; they commit the signer to a historical action in a way that they can't deny. This is the basis for holding a party accountable. It's the non-repudiation that makes them useful.

If I use assertionMethod to sign a mortgage, I cannot say that when I rotate my key, it invalidates my commitment to pay the mortgage. The bank gave me money, and I incurred a debt. I can certainly sign a mortgage and then rotate my key; this prevents my old key from being used to incur new debts, but it doesn't cancel my old one. Rotation cannot be retroactive, because it would give the signer unilateral power to reinterpret history.

Revocation is different. It presupposes a shared understanding between multiple parties that unambiguous history needs to be given new semantics based on future developments. With that understanding in place, it's possible to construct mechanisms that allow either party on their own, both parties together, or even third parties to apply new semantics to a historical event, no matter how strongly attested. "Yes, Fred really did sign this mortgage. No question about it. But Fred wasn't legally competent when he did so. Therefore the mortgage is invalid -- not because we deny existence of the signature, but because we are viewing the event with different semantics."

We have to be able to tell the difference between rotation (unilaterally change rules for the future) and revocation (joint agreement about how/when to change semantics we apply to the past).

If an issuer uses assertionMethod to sign a VC, the issuer cannot say that when they rotate the key, it cancels their commitment to the assertions they once stood behind. That's because an unknown number of verifiers have already made decisions based on the reputational capital the issuer staked against their assertions; allowing the issuer to back out is like letting a borrower off the hook for a mortgage without any loan repayment. There exists no shared understanding that the issuer has this unilateral ability to escape accountability and reputational consequences for their actions.

Besides, if rotating a key cancelled all previous assertions, how would we do the other operation (the one that merely changes future possibilities, without attempting to apply a new lens to the past)?

I don't know what language in the spec, if any, we might want to revise to clarify this point, but I think there's no question this must be the understanding we impart.

SmithSamuelM commented 3 years ago

Yes, the term revocation is used in two completely different ways in the identity space. In the key management world one may speak of revoking keys. In the statement-issuance, authorization-issuance, or credential-issuance world one may speak of revoking an authorization, a credential, or a token.

This becomes confusing when revoking keys also implicitly revokes authorizations signed with those keys.

KERI terminology usually avoids that confusion because a key rotation operation is equivalent to a key revocation operation followed by a key replacement operation: one operation (rotate) instead of two (revoke and replace). A bare revocation is indicated by replacement with a null key, so only the single rotate operation is needed, with rotation to a null key as a special case.

Given that, in KERI the term revocation is usually unambiguously applied to mean revocation of authorization statements. When in doubt, just use the modifier: key revocation vs. statement revocation.
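A minimal sketch of this single-operation model (hypothetical names, not KERI's actual API): rotation replaces the current key, and rotating to a null key expresses a bare revocation.

```python
# Hypothetical sketch of the single-operation rotation model described
# above. Rotation replaces the current key; rotating to None (a "null
# key") expresses a bare revocation. Not KERI's actual API.

class KeyState:
    def __init__(self, key):
        self.current = key          # current authoritative key
        self.history = [key]        # lineage of rotations

    def rotate(self, new_key):
        """Rotate to new_key; rotating to None revokes without replacement."""
        self.current = new_key
        self.history.append(new_key)

    @property
    def revoked(self):
        return self.current is None

state = KeyState("key-1")
state.rotate("key-2")       # revoke key-1 and replace it in one operation
assert not state.revoked
state.rotate(None)          # bare revocation: rotate to a null key
assert state.revoked
```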

I hope that clarifies my terminology better.

SmithSamuelM commented 3 years ago

@dhh1128 Key rotation versus signed-statement revocation. The authority of a signed statement is imbued to it by its signature and the keys used to create the signature. Is a signed statement still authoritative/authorized after the keys used to sign it have been rotated? If not, then the statement is effectively revoked, as it is no longer an authoritative/authorized statement. If the statement is still authoritative/authorized after the keys used to sign it have been rotated, then it is not effectively revoked by the rotation itself but requires a separate signed revocation statement that rescinds its authoritative/authorized status. This revocation statement is signed by the current set of authoritative keys, which may be different from the keys used to sign the statement being revoked.

Authorization tokens, which are a form of signed statement, often employ rule 2): when the keys used to sign the token have been rotated, the token's authorization is implicitly revoked. Effectively, the token is always verified against the current set of signing keys, so it will fail verification after rotation. Under rule 1), by contrast, verification is with respect to the set of signing keys used to create the signature at the time the statement was issued and signed. This means the verifier has to have a way of determining the history or lineage of control authority, via a log or ledger, to know that the statement was signed with the authoritative set of keys at the time. The log or ledger must record not only the lineage of keys (the key rotation history) but also the statements signed by those keys (a digest of each statement is sufficient). Otherwise a compromise of the current signing keys (which rotation protects against) would allow an exploit to create verifiable, supposedly authorized statements after the keys have been rotated. So it must be rule 1 or 2 or 3, and non-automatic revocation of signed statements requires a log of both the key rotation history and the signed-statement history.
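A hedged toy sketch contrasting the two rules above (names and structures are illustrative, not from any spec): rule 2) checks only the current key, while rule 1) consults a log that pairs each key in the rotation lineage with digests of the statements it signed.

```python
# Illustrative contrast of the two verification rules described above.
# The log format and function names are hypothetical.
import hashlib

def digest(statement):
    return hashlib.sha256(statement.encode()).hexdigest()

# Each log entry records a once-authoritative key and the digests of
# statements signed while that key was authoritative.
log = [
    {"key": "key-1", "signed": {digest("mortgage for Fred")}},
    {"key": "key-2", "signed": set()},   # current key, after rotation
]

def verify_rule2(statement, signing_key):
    """Ephemeral/token model: only the current key verifies."""
    return signing_key == log[-1]["key"]

def verify_rule1(statement, signing_key):
    """Persistent model: the key must have been authoritative when the
    statement was issued, which the digest log establishes."""
    return any(entry["key"] == signing_key
               and digest(statement) in entry["signed"]
               for entry in log)

assert verify_rule1("mortgage for Fred", "key-1")      # survives rotation
assert not verify_rule2("mortgage for Fred", "key-1")  # revoked by rotation
```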

Obviously, if keys are not rotatable, then a signed statement cannot be revoked by merely rotating keys; instead a revocation registry may be used to determine whether a signed statement has been revoked by an explicit revocation statement. So non-rotatable keys may use a modified rule 4), where there is no key rotation history log or signed-statement log but merely a revoked-statement log. Typically, though, non-rotatable keys are used for ephemeral identifiers, in which case no revocation log is used. Instead of rotating keys for an ephemeral identifier, you just rotate the identifier itself (make a new one with a new set of keys) and abandon the old identifier and all its signed statements.
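The modified "rule 4)" variant can be sketched in a few lines (all names hypothetical): a static, non-rotatable issuer key plus an explicit revoked-statement log.

```python
# Illustrative sketch of the modified "rule 4)" above: keys never rotate,
# so verification checks the static issuer key plus an explicit
# revoked-statement log. Names are hypothetical.
ISSUER_KEY = "static-key"
revoked_statements = set()          # digests of explicitly revoked statements

def verify(statement_digest, signing_key):
    return signing_key == ISSUER_KEY and statement_digest not in revoked_statements

assert verify("digest-1", ISSUER_KEY)
revoked_statements.add("digest-1")   # explicit revocation statement recorded
assert not verify("digest-1", ISSUER_KEY)
```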

agropper commented 3 years ago

This makes sense to me. I always try to map these discussions into the healthcare use case where the physician is licensed by the DEA IFF they can be held accountable for signing a (controlled substance) prescription in a non-repudiable manner and also the prescription can be revoked when dispensed or "lost".

Under the DEA rules, the physician's authority to sign that prescription can flow one of two ways. Either the process and tech itself is certified and audited, or another DEA-credentialed person (maybe another physician or notary) is held accountable for issuing the credential of the prescriber. The issuing physician acts, effectively, as a notary: they examine the process and other claims of the would-be prescriber, sign using their non-repudiable signature, AND, importantly, make a log entry of the transaction that they, as the notary, will keep. The notary binds the issuing process (typically represented by a document, such as a loan) to a non-repudiable signature of the prescriber (because the prescriber shows the notary a deduplicated, legally binding credential like a driver's license).

This use-case is relevant because it is decentralized. The self-sovereign prescriber chooses the notary. The notary's logs are not public or aggregated but they are secure and auditable. The notary's logs have a reference to the document (the digest @SmithSamuelM mentions). For some related detail, here's a link to the recent HIE of One DHS SVIP (rejected) proposal for Trustee(R) Notary which I truly hope some in our community help us with.

It's turtles all the way down in the sense that the self-sovereign prescriber then takes on the role of issuer when Alice wants her Adderall. The prescriber signs her prescription, coded as a verifiable credential, and keeps the log with the digest of the prescription in case of audit. Revocation of this VC remains an unsolved problem, and this is where the DID aspect of SSI decentralization threatens to break down, because the revocation mechanism has to be centralized and incredibly privacy-invasive for Alice. It's called a prescription drug monitoring program (PDMP), and the privacy issues they raise would fill a book.

Since this issue is about revocation of a VC, I would suggest that we need to give data holders the option to avoid the centralized revocation privacy problem by letting them authorize the verifier to get the VC directly from the issuer instead of passing the VC through an intermediate store. This is a privacy compromise because it leaks information to the issuer but it avoids the rotation problem because the VC can be ephemeral. Our SSI designs #382 must not take this option away from Alice. The logic is described in https://github.com/w3c/did-core/issues/370#issuecomment-683075977

OR13 commented 3 years ago

Consider that id_token signing keys are rotated regularly by OIDC providers.

https://developer.okta.com/docs/concepts/key-rotation/

^ the did document use case is identical, it just does not require HTTP or JWK/JWKs.

Absence of a signing key in the expected location implies revocation of all signatures... you can see why caching can cause security issues.

Caution: Keys used to sign tokens automatically rotate and should always be resolved dynamically against the published JWKS. Your app might fail if you hardcode public keys in your applications. Be sure to include key rollover in your implementation.

^ this statement applies just as much to VCs as it does to id_tokens.

SmithSamuelM commented 3 years ago

@OR13 You are confirming that in token-based security systems, verification of a token is against the current set of signing keys, not the keys that were authoritative when the token was issued. That is, they use rule 2). This is in direct opposition to the normal use case for VCs, in which verification is against the keys used to issue the VC, not the current set of keys. That is rule 1). Hence the confusion.

SmithSamuelM commented 3 years ago

Here is a related Issue in DIF for KERI use case for key rotation in IoT.

https://github.com/decentralized-identity/keri/issues/53

SmithSamuelM commented 3 years ago

@OR13 @dhh1128 @agropper

I think the ambiguity here is how to interpret DID Docs. Does a DID Doc act like an OIDC authorization token, which uses rule 2), or like a verifiable credential, which uses rule 1)?

I suppose it depends on the DID method. This is one of the security problems of DID methods. There is no good expectation of behavior from one method to the next.

jandrieu commented 3 years ago

I agree that this is an underestimated issue. I brought up at the face-to-face in Amsterdam (and in a related issue, #168) that we currently have no way to record in the DID Document anything about a key that is NOT currently valid.

I believe ALL spec-text in both DID and VC work discusses that the validity of the VC is checked against the current DID Document.

The notion that you need a "version" parameter in a DID URL (so you can check to see if a VC would have been valid) was also discussed in a side chat in Amsterdam, with what I thought was agreement that the current specification--even with a ledger-based tamper-evident history--cannot make any valid statement about whether or not a key was valid at any particular point in the past, precisely because there is no way to know if the key was revoked or rotated. If retirement is revocation because the key was compromised, then you absolutely MUST NOT treat that key as valid. If retirement was because of rotation, I can see that past credentials signed by a rotated key might be appropriate, but I can also see arguments that rotation on its own doesn't mean that old keys should still be considered valid, especially when you can't know for certain when the VC was signed. It might have a date in the past, but be signed after the compromise, BY the party that compromised it.

So, two directions here.

  1. Is there spec text in DIDs or VCs that states that rotation--which retains validity for retired keys--is a thing? Maybe I missed that this is a supported feature. It'd be great if there were language that already supports the needs as discussed here.

  2. If you see DID Docs as a VC, then why aren't they signed? I'm not following the mental model in which treating DID Docs as VCs suggests that anything can be gleaned about non-current DID Documents. Like an expired passport, those keys are no longer valid, and there is no documented explanation why. If you don't have an affirmative statement by the DID Controller, how can you know when and why a particular key is no longer listed? And if you don't know that, how can it possibly be good security to recognize potentially invalid, compromised keys?

I'd support adding language that changes the conclusion from #168 to allow explicit statements of rotation, revocation, or both, but other participants rallied rather convincingly against exactly that option.

OR13 commented 3 years ago

@SmithSamuelM

@OR13 You are confirming that in token based security systems verification of a token is against the current set of signing keys not the keys that were authoritative when the token was issued. That is they use Rule 2). This is in direct opposition to what the normal use case for VCs which is verification is against the keys used to issue the VC not the current set of keys. That is rule 1). Hence the confusion.

This is not correct.

https://w3c.github.io/vc-data-model/#proofs-signatures-0

The proof suite verification algorithm, when applied to the data, results in an acceptable proof.

There is no difference between verifying a JWS with kid did:example:123#kid for a VC and...

an id_token with iss https://issuer.example.com and kid....

The logic the verifier performs is the same: dereference the kid to public key bytes and check the JWS.

If dereferencing fails the token or credential fails verification.... the status of a verification of a token is orthogonal to the concept of getting a new VC / token because some key rotation occurred.

If you encode a kid that always dereferences to public key bytes, you have created a system which does not support key rotation... that's what putting a version DID parameter in a DID URL does. It's like running an authorization server that never rotates keys... consumers of such credentials / tokens should be suspicious, just like they are when they see a GPG key that never expires, or a root account that is used for everything. It's not a good idea... but we see this stuff happen, and IMO it should be allowed.

DID Documents are not VCs... DID Documents are well-known JWKS URIs controlled by the DID controller.

https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfig

"jwks_uri":
     "https://server.example.com/jwks.json",

In OIDC, it goes like this:

id_token -> iss -> well-known/openid-configuration -> jwks_uri -> kid - > verifyJWS -> true || false.

In DID + VC, it goes like this:

vc -> issuer -> DID Document -> kid -> verifyJWS -> true || false.
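The two parallel flows can be sketched as below. This is a hedged toy illustration: the resolver, JWKS fetcher, and signature check are stand-in placeholders, not real library calls.

```python
# Toy sketch of the two parallel dereference flows described above.
# verify_jws is a stand-in for a real JWS signature check.

def verify_jws(jws, key):
    return jws == f"signed-with-{key}"      # placeholder signature check

def verify_id_token(id_token, fetch_jwks):
    # id_token -> iss -> well-known config -> jwks_uri -> kid -> verifyJWS
    jwks = fetch_jwks(id_token["iss"])
    key = jwks.get(id_token["kid"]) if jwks else None
    return key is not None and verify_jws(id_token["jws"], key)

def verify_vc(vc, resolve_did):
    # vc -> issuer -> DID Document -> kid -> verifyJWS
    doc = resolve_did(vc["issuer"])
    key = doc["verificationMethod"].get(vc["kid"]) if doc else None
    return key is not None and verify_jws(vc["jws"], key)

# In both cases, a dereferencing failure (kid absent) fails verification.
jwks_store = {"https://issuer.example.com": {"kid-1": "pk-1"}}
did_docs = {"did:example:123": {"verificationMethod": {"did:example:123#kid": "pk-1"}}}

assert verify_id_token({"iss": "https://issuer.example.com", "kid": "kid-1",
                        "jws": "signed-with-pk-1"}, jwks_store.get)
assert verify_vc({"issuer": "did:example:123", "kid": "did:example:123#kid",
                  "jws": "signed-with-pk-1"}, did_docs.get)
```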

The fact that a person can look at past versions of the JWKS, and see points in time where a JWS would have been valid, is an orthogonal issue. It's useful for forensics, but it does not answer the question of "does this token / vc verify now".

The answer to "does this token / vc verify now" is obtained by dereferencing the verification method / iss + kid, and using the public key bytes to check the signature.

If dereferencing hits a cache or relies on a read of an eventually consistent system (like a blockchain or a slow distributed database)... the answer might change when consistency is achieved or the cache expires.

If dereferencing is guaranteed to always produce public key bytes, that's like issuing from a DID Doc that does not support key rotation, like did:key, or from an authorization server that does not support rotation.

If Hyperledger Indy based ZKP proofs do something different here, that's possible; I am not as familiar with how they do or do not support rotation of issuance material.

dhh1128 commented 3 years ago

The fact that a person can look at past versions of the JWKs, and see points in time where a JWS would have been valid is orthogonal issue, it's useful for forensics, but it does not answer the question of "does this token / vc verify now".

I strongly disagree. Verifying now has to be a question of whether the assertion was valid then (where "then" is the moment the assertion was made). All legal precedent is based on this kind of analysis. We're in court, arguing about whether Alice should be held accountable for transferring a million dollars to Bob's account. Alice says, "I now consider Bob a shyster, and I don't like what he's doing with my money. I don't stand by the signature I affixed to the transaction. I want my money back." Bob says, "Well, Alice may not like me, but did she or did she not sign?" Who is going to win?

Signing a VC is no different than signing a money transfer -- and indeed, VC use cases that involve money (or that have any sort of legal meaning) will depend on these semantics. Any other interpretation gives the issuer unilateral power to deny what they've said on the record, makes reputation useless with respect to issuance, and undermines the whole system.

dhh1128 commented 3 years ago

BTW, the need to have some kind of unwinding of assertions is the reason why we need a VC revocation feature. If an issuer knows in advance that they may have reasons for withdrawing their support from an assertion, they can publish to the world their intention to support revocation with respect to the VCs in question. For example, a driver's license bureau can say, "We're making our driver's licenses revocable. When we revoke a driver's license, this means we have learned since the date of issuance that we no longer want to assert the person's right to drive. If a license was issued on Jan 1 and is revoked on July 1, the person no longer has the right to drive after July 1. But if the person is sued for a reckless driving incident that occurred on May 1, revocation doesn't mean that their right to drive on May 1 was retroactively cancelled. They were indisputably a licensed driver on May 1; revocation took effect from July 1 onward."
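The non-retroactive semantics in this driver's license example can be modeled in a few lines (a toy sketch, with hypothetical field names): revocation cancels validity from its effective date forward, never backward.

```python
# Toy model of the non-retroactive revocation semantics above:
# revocation takes effect from its effective date onward; it never
# retroactively cancels the license for earlier dates.
from datetime import date

license_record = {"issued": date(2020, 1, 1), "revoked": date(2020, 7, 1)}

def licensed_on(record, day):
    """Was the holder licensed on the given day?"""
    if day < record["issued"]:
        return False
    revoked = record.get("revoked")
    return revoked is None or day < revoked

assert licensed_on(license_record, date(2020, 5, 1))      # May 1: licensed
assert not licensed_on(license_record, date(2020, 8, 1))  # after revocation
```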

SmithSamuelM commented 3 years ago

I copied this discussion to the DIF KERI repo because it had relevance to KERI, but realized that new comments by Orie, which I responded to, were there and not here.

These are relevant. https://github.com/decentralized-identity/keri/issues/52#issuecomment-691249959

SmithSamuelM commented 3 years ago

The conflict is over the verification model or rules. As I pointed out above, authorization tokens use a different rule than most VCs use. The rule for authorization tokens is automatic revocation of the token if keys are rotated. So in this sense @OR13 is right that a log of when an authorization happened is merely forensic, since a key rotation revokes all tokens issued prior. One may call this an ephemeral authorization model: tokens carry an ephemeral authorization that does not require a specific revocation action; key rotation simply revokes. However, the VC model of issuer, holder, verifier is a persistent authorization model. The credential is valid until specifically revoked, which means that the only way to determine how to verify the credential is to know the history of revocations. It's true one could embed the public keys used to issue the credential in the VC, but then one still doesn't know whether they were valid at the time of issuance (a type of replay attack) unless one knows when in the key rotation history the credential was issued. In this regard @dhh1128 is right.

SmithSamuelM commented 3 years ago

There seems to be some strong cognitive dissonance on verification models. This is why I detailed the choices of rules 1), 2), and 3). One can pick which set of rules one uses, but that selection determines how you build your system. They are all valid choices, but not simultaneously valid in the same system. One has to pick.

SmithSamuelM commented 3 years ago

This is one reason why "token" security models are not necessarily a good fit for VCs. They use an ephemeral model of authorization and VCs use a persistent model. They are not the same and should not be confused.

SmithSamuelM commented 3 years ago

What that means is that tooling designed for the ephemeral model is of little use in a persistent model. Trying to leverage tooling from one model for the other will be an exercise in frustration.

SmithSamuelM commented 3 years ago

Using an ephemeral model for DID Docs and a persistent model for VCs issued against DIDs based on those DID Docs will likewise be an exercise in frustration.

SmithSamuelM commented 3 years ago

AFAIK most implementations of object capabilities also use an ephemeral authorization model, although there is nothing in the object-capability concept that requires it; an implementation could use a persistent authorization model. But clearly credentials are much more than authorization tokens, and that means IMHO that a persistent authorization model works best for them, as the use cases for both DIDs and VCs clearly express. This may be yet another "issue" that is problematic for the DID spec community, because we have a clash of unexpressed but assumed authorization models for DID Docs and VCs.

SmithSamuelM commented 3 years ago

VCs are the new kid on the block. They were designed without expressing conceptually what their authorization model looks like. But revocation registries clearly indicate that it's a persistent one. This becomes glaringly obvious when you answer the question: does key rotation of the issuing DID require that all issued VCs be re-issued? If the answer is no, then you have a persistent authorization model. If yes, then an ephemeral one, like a token.

SmithSamuelM commented 3 years ago

So we can ask the same question of a DID:Doc. If I use a persistent authorization model for a DID:Doc then I have just made it possible to use a VC as an ersatz DID:Doc.

SmithSamuelM commented 3 years ago

If DID resolution metadata returns a proof of control authority (BTCR, KERI, Sovrin state proof), then I don't need a DID Doc to use a DID. I can just use the DID resolution metadata to establish the authoritative signing keys and then issue a VC that contains everything else I ever wanted to put in the DID Doc. The DID spec gets really simplified as a result. The serialization wars and interoperability between serializations become a non-issue.

jandrieu commented 3 years ago

@dhh1128 I agree with your basic argument, that the use case for checking a signature would benefit from being able to check if the signature was legitimate at the point of signing.

However, can you point to any spec text in VCs or DIDs that describes verifying anything in that manner?

I think we have quite a bit of work to do to adjust the language and existing consensus to support how you have been thinking about this requirement.

If there is language in existing spec-text that already discusses verification of credentials for which a key had been rotated (and hence, not in the current DID Document), then we can build on that. But, when you say

They use an ephemeral model of authorization and VCs use a persistent model.

I don't see anywhere in the actual standard that supports this. So, any spec text you can point to will go a long way to helping us build some bridges.

My attempts to address this exact point (in issue #168) were roundly rebuffed by multiple, competent members of this community. So, we have some work to do to get support for what you want.

SmithSamuelM commented 3 years ago

@jandrieu I would be very surprised if the spec text mentioned such a requirement. As I pointed out above, it has now become glaringly obvious that the authorization model has been an "unexpressed but assumed" one, and, as this issue points out, one that is assumed differently by different practitioners. It's only recently that security models have had more than a smattering of discussion in the DID spec. IMHO it was just assumed that if one is using a ledger then security comes for free. But the DID methods are a security problem, and DID Doc authoritativeness is just one aspect of that bigger problem. So the answer is not to point to existing spec language but to point out that we need spec language; otherwise we will run aground yet again on security.

TelegramSam commented 3 years ago

Small note to indicate that I think the persistent model is required for DID Docs, and I support the separation of concerns between revoking keys and revoking credentials. Let's revoke credentials independently.

SmithSamuelM commented 3 years ago

@jandrieu "My attempts to address this exact point (in issue #168) were roundly rebuffed by multiple, competent members of this community. So, we have some work to do to get support for what you want." I think there is a nuance between what we are discussing here and issue #168. Unfortunately, it's an issue that quickly gets muddled.

In the KERI concept of a Self-Certifying Identifier (SCID), there is a root control authority that is held by the controller of the SCID. Because the derivation of a SCID consists of one or more cryptographic one-way functions applied to a public key or keys, it is very clear how that root authority starts, is held, and may be exercised: it starts with the public keys used to derive the SCID, and is held and exercised by the holder of the associated private keys. So where I talk about key rotation, I am only talking about the key rotation associated with this root authority, not any other set of keys. Any keys besides the root authority are immaterial to establishing and maintaining it. These "other" keys may be encryption keys or delegated signing keys or any number of keys, but they are not participants in the root authoritative keys (although they may be derived therefrom). So in this issue the discussion of key rotation is solely about the root control authority.

With reference to a DID Doc: when one has a clear sense of the keys that constitute the root authority for the identifier, it becomes obvious that those are the keys that are the root authority for the DID Doc as well. Any other construction is fraught with potential security problems. The idea that a DID Doc is some generic key authorization mechanism causes all sorts of confusion, because one feels enabled to partition root control authority among multiple sets of keys. Then rotation almost becomes nonsensical. There be dragons. Clearly, root control authority needs to be precise and atomic.

If one is using multi-sig, then there are multiple keys in the set, but they are all of the same class in terms of authority and belong to a defined multi-sig scheme. They are not a hodgepodge of keys with varying degrees of authority or function.

jandrieu commented 3 years ago

So, @SmithSamuelM, that suggests we need to be clear about the reality that one cannot rotate the "root keys" that control a DID Document, you can only revoke them?

And as such, the current design for root authority works for you? Namely, that if a root authority is not present in the CURRENT DID Document, then it cannot be used for verifying a VC as current?

The consequence of that is that--even in a version of DIDs that aligns with KERI--if you want to be able to rotate the keys without invalidating a VC, then you cannot sign the VC with the root authority. Perhaps you can sign it with a key in a verification method other than the root authority, but the root authority itself is revoke-only, not rotatable, and hence not usable for rotation-surviving VCs. Correct me if I'm misunderstanding.

HOWEVER, we still don't have a way for a non-root authority to be specified as valid for VCs issued for some specific period. Some DID methods may allow an implicit capability for this (DLTs can be used as timestamps), but there is currently no way to state that a given proof mechanism should be treated as valid for proofs in a certain time frame while a different proof mechanism should be treated as invalid. The only check we have now is for the proof to match a verification method in the CURRENT DID Document. Which means rotating a key invalidates all previously issued credentials.

The current language, to the best of my knowledge, has zero support for rotating keys of the issuer while maintaining the validity of existing VCs.

What we do support is the rotation and revocation of a subject's keys, such that proof-of-control as identity assurance regarding claims in a VC remains functional throughout the lifetime of the DID. Because proof-of-control is based on the current DID Document, there is no notion that older proofs-of-control remain valid.

I realize the Evernym world doesn't use DIDs for subjects that are human individuals, but others in this ecosystem rely on that rotatability for long-lived credentials, where the privacy benefits of link secrets are outweighed by the rotatability of keys in DID Documents. So we can't get around this by saying "But DIDs are for public entities!" as Drummond did in #382.

Again, I agree it would be worthwhile to support statements that a particular key should be considered valid for proofs created in a limited time frame. But we currently have no way to do that.

Unless someone can point to some spec text that gives us that ability--or asserts that ability even if vague on how--then I fear we are stuck trying to solve a problem that is, at this point, a new feature.

SmithSamuelM commented 3 years ago

@jandrieu I guess I wasn't clear. The root authority is transferable to a new set of keys via a rotation. That is what KERI does: it provides a key rotation methodology for the root authority.

jandrieu commented 3 years ago

@SmithSamuelM I thought we were talking about DIDs. Are you saying then that the current DID and VC specs don't support KERI, but that they should?

Or are you arguing that KERI provides a proof mechanism that allows using information outside the DID Document as part of verification? Without any changes to the spec?

I expect I'm simply misunderstanding how you think KERI allows a conformant verifier to somehow use a key that isn't in the current DID Document as if it were still valid. Or, conversely, how it would allow a key in the current DID Document to be valid for VCs that appear to be issued in a certain time period other than NOW, but to reject validity for signatures outside that time period.

SmithSamuelM commented 3 years ago

@jandrieu

I know it's a lot to ask, but reading the KERI white paper is the best way to answer your questions. https://github.com/SmithSamuelM/Papers/blob/master/whitepapers/KERI_WP_2.x.web.pdf

But here is a short answer. KERI is namespace agnostic. A DID is a namespacing syntax. All KERI cares about is the cryptographic identifier prefix in the namespace, which in the DID case is the cryptographic UUID portion of the method-specific-id. KERI calls this the identifier prefix. KERI key event logs provide a proof mechanism for establishing the control authority, i.e. the set of keys that have the root control authority over this prefix. Any and all namespaces that share this prefix have the same control authority. This makes KERI completely agnostic about the namespacing mechanism. So it works for DIDs but could work for any other standard namespacing mechanism. KERI doesn't care. That's why it may be a universal identity-system security overlay for the internet. So as long as a resolver provides the key event log in metadata, anyone can verify cryptographically from the key event log that a given set of keys (including rotations) is the current authoritative or root authority for the prefix. Each key event log is a verifiable, signed, hash-chained data structure: a prefix-unique dedicated blockchain. Because the logs do not co-mingle events between two distinct prefixes, one can delete a log and the prefix is forgotten, as in GDPR compliance. But any copy of the key event log may be used to verify, so KERI prefixes are portable between the infrastructure mechanisms that store the event logs. So if you want to store your KERI KEL (key event log) on Ethereum or Sovrin or Bitcoin, or all of them, or none of them (just use IPFS or a set of servers of your choosing), it still works.

The primary root-of-trust is that the unique identifier prefix is originally derived from the information in the inception event, which includes the originating set of keys that form the original root authority. One then just keeps a lineage of transfers of authority (rotations) in the key event log. Each transfer is a signed, hash-chained, verifiable operation. It's the simplest form of ledger needed to establish control authority, but no simpler.

The proofs of how KAACE (KERI's Agreement Algorithm for Control Establishment) works are provided in the white paper.
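
As a rough illustration of that lineage (not the KERI wire format — real events carry Ed25519 signatures, witness receipts, and pre-rotation commitments, all elided here), a self-certifying prefix plus a hash-chained log can be checked like this:

```python
# Minimal sketch of verifying a hash-chained key event log (KEL).
# Illustrative only: signatures and pre-rotation are omitted; we check
# only that the prefix is derived from the inception keys and that each
# later event commits to the digest of its predecessor.
import hashlib
import json

def digest(event: dict) -> str:
    # Canonical SHA-256 digest (stand-in for KERI's self-addressing digests).
    return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

def verify_kel(kel: list) -> bool:
    # The prefix must be derived from the inception keys (self-certifying),
    # so the inception keys are provably the original root authority.
    if kel[0]["prefix"] != digest({"keys": kel[0]["keys"]}):
        return False
    # Every rotation must commit to the digest of its predecessor.
    return all(event["prior"] == digest(prev)
               for prev, event in zip(kel, kel[1:]))

icp = {"prefix": digest({"keys": ["k0"]}), "keys": ["k0"]}   # inception
rot1 = {"prior": digest(icp), "keys": ["k1"]}                # first rotation
rot2 = {"prior": digest(rot1), "keys": ["k2"]}               # second rotation

assert verify_kel([icp, rot1, rot2])                 # intact lineage verifies
rot1_tampered = dict(rot1, keys=["attacker"])
assert not verify_kel([icp, rot1_tampered, rot2])    # tampering breaks the chain
```

Any copy of such a log can be checked for internal consistency, which is why it does not matter where the log is fetched from.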

SmithSamuelM commented 3 years ago

So in addition to the lineage of root authoritative keys, the KEL allows one to anchor hashes or Merkle roots of external data. You can then put whatever authorization you want in this external data and anchor it at a point in the lineage. The anchor point then determines what the authoritative keys were at the point of the anchor. So one could, for example, anchor a version of a DID Doc or a VC or anything else. KERI doesn't care. Querying the event log just says: here is an anchor; the anchor is a cryptographic digest (hash) of some external data. The existence of the anchor in the event log is proof of a cryptographic commitment made by the controller of the prefix (and hence the associated event log) to the data that generated the digest. So one could build any kind of external transaction system or authorization system using this construct and make it verifiable. Proof of control authority + proof of commitment to a digest of external data made with that control authority is all one needs to build whatever on top.
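
A minimal sketch of that anchoring idea (the event and credential shapes here are illustrative, not KERI's actual formats):

```python
# Anchoring a digest of external data (e.g. a VC) at a point in the log,
# then proving the commitment later. The "seq" field stands in for the
# event's position in the lineage, which fixes the then-authoritative keys.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

credential = b'{"issuer": "did:example:123", "claim": "degree"}'
event = {"seq": 5, "keys": ["k1"], "anchors": [sha256_hex(credential)]}

# Verifier: given the credential bytes and the anchoring event, the digest's
# presence proves the keys authoritative at seq 5 committed to this exact data.
assert sha256_hex(credential) in event["anchors"]
assert sha256_hex(b'{"issuer": "did:example:123", "claim": "forged"}') \
       not in event["anchors"]
```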

SmithSamuelM commented 3 years ago

To bootstrap KERI one needs a discovery mechanism. The mechanism could be a DHT (IPFS for example), a KERI-specific one, or a DID resolver network. Proof of control authority is what one needs to get from discovery. Once one has that, anything else one chooses to discover can be verified as authoritative relative to the event log. A discovery mechanism could cache the event log, or it could provide a URL to a service endpoint where the event log is held. It doesn't matter where you get the event log from, because its internal consistency is verifiable. It could be from a DID Doc or not. My suggestion is to use resolver metadata. Once you have discovered the event log, you may also want to discover communication parameters such as routing and encryption keys for talking to service endpoints, like a DNS zone file. But importantly, once you have the KEL you can verify whether the communication parameters you later discover by whatever means are authoritative, because they will be signed and, depending on your rules, may be committed to at some point in the event log. Originally communication parameters were supposed to be in the DID Doc, but over time the DID Doc became something else. It keeps changing. But communication parameters could just be resolver metadata as well. KERI doesn't care, because KERI just defines the protocol for creating and maintaining the event log.

jandrieu commented 3 years ago

Ok. KERI is on my reading list, but I'm still not understanding what your point is relative to DIDs.

Are you arguing for a particular change to the DID or VC spec?

Because today I don't believe you can do what you are requesting with DIDs and VCs as specified.

If what you are saying is that "Yes, you can do it, if you use KERI on top of DIDs", then that's one conversation to have. And I'd like to understand how you can verify a credential issued by a DID whose keys are no longer in the DID Document. Because I don't believe you can do that, today.

But it is another thing entirely if you are saying that DIDs and VCs already have the rotation/revocation semantics you are arguing for. Again, because the language as I've seen it (and the use cases we've documented) currently has zero support for anything other than verifying a VC against the current DID Document.

Can you describe what you want changed in DIDs/VCs or clarify that DIDs and VCs are fine the way they are (perhaps because KERI handles that at a different layer)?

What I'm not seeing is how KERI solves this problem when KERI is not part of the DID or VC spec. Which is to say, is the problem with the current DID & VC specs or with people's understanding of what you can do on top of the DID & VC spec?

Are you trying to clarify that, in fact, you can rotate separately from revoking in the specs today? OR Are you advocating for changes in order to support distinctions between rotation and revocation?

agropper commented 3 years ago

Looking at https://w3c.github.io/did-core/#architecture-overview Figure 1, I might say that the KERI log for that DID is the VDR. Whoever can write to the KERI log is the DID Controller. The writing to the KERI log is method dependent. The DID Document may or may not matter (as in did:key, it doesn't). VCs probably don't come into it at all.

How'm I doing?

SmithSamuelM commented 3 years ago

@agropper Good, except for one thing. I would say instead: the writing to the KERI log is method independent. It's locked down by KERI itself. The only entity that can write to the KERI log is the DID controller, as defined by the KERI events.

agropper commented 3 years ago

Yes, of course. Thank you!

OR13 commented 3 years ago

If it's not possible to produce a key identifier that is the same for 2 keys which are different, then the security issue I noted above wrt DID Documents and rotation is resolved... any DID method that lets you identify public key bytes differently supports "key rotation" at the key identifier level.... same identifier, different key bytes....

signature 1 with public key 1 => passes
signature 1 with public key 1 => fails when public key 1 resolves to different bytes.... but fails because the bytes are different, not because the key is absent....
signature 2 with public key 1 => passes when public key 1's new bytes are used to produce the signature...

see why it's a bad idea to reuse identifiers?

If your key identifier is embedded in the signature.... this property is extra confusing... now I need to know what the public key bytes were at the time of issuance, as @dhh1128 noted above.... or I need to just respect the current version as the only version that matters for "verification"....

Best solution to this problem? Don't allow the same identifier to EVER be used for different public key bytes... DID Core suggests you avoid this for JWK already... it's the entire reason for https://tools.ietf.org/html/rfc7638
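
For reference, the RFC 7638 thumbprint can be sketched in a few lines of Python (a minimal sketch covering three common key types; the example key below uses the RFC 8037 Ed25519 test-vector public key, and the labels are illustrative):

```python
# RFC 7638 JWK Thumbprint: keep only the REQUIRED members for the key
# type, serialize with lexicographically ordered keys and no whitespace,
# then SHA-256 hash and base64url-encode without padding.
import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict) -> str:
    required = {"RSA": ("e", "kty", "n"),
                "EC":  ("crv", "kty", "x", "y"),
                "OKP": ("crv", "kty", "x")}[jwk["kty"]]
    canonical = json.dumps({k: jwk[k] for k in required},
                           sort_keys=True, separators=(",", ":"))
    h = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(h).rstrip(b"=").decode("ascii")

k1 = {"kty": "OKP", "crv": "Ed25519",
      "x": "11qYAYKxCrfVS_7TyWQHOg7hcvPapiMlrwIaaPcHURo", "kid": "key-0"}
k2 = dict(k1, x="DIFFERENT_public-key-bytes_AAAAAAAAAAAAAAAA")

# The "kid" does not affect the thumbprint -- only the key bytes do --
# so different bytes can never collide under the same identifier.
assert jwk_thumbprint(k1) == jwk_thumbprint(dict(k1, kid="renamed"))
assert jwk_thumbprint(k1) != jwk_thumbprint(k2)
```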

Treat key rotation as equivalent to credential revocation when the bytes of the new key don't match the old and the same identifier is used... that's what OIDC does.... and it's the reality of what verification will mean for the latest DID Document, regardless of feelings on the subject.

Second best solution? Support versioning and issue credentials from a "version" of a DID Document... and let the verifier decide if it matters whether it's the latest or not.... obviously... this imposes a burden on the verifier.... imagine if every OIDC RP had to decide if an access_token should be accepted based on whether the key had ever been in the well-known jwks...

If I issue you a credential from verificationMethod did:example:123?version=351231#key-0... and it always resolved to the same public key bytes... what does that tell you?

  1. did:example:123?version=351231#key-0 will always produce "dbDmZLTWuEYYZNHFLKLoRkEX4sZykkSLNQLXvMUyMB1"
  2. private key-0 is "47QbyJEDqmHTzsdg8xzqXD8gqKuLufYRrKWTmB7eAaWHG2EAsQ2GUyqRqWWYT15dGuag52Sf3j4hs2mu7w52mgps", now anybody can issue credentials from that version of a did document... and signatures will of course verify....
  3. maybe I issued some credential to you before I leaked my private key... maybe I didn't... hope your verifier is smart enough to figure that out for themself... if you send me bitcoin, I will tell you when I compromised your keys ;)
  4. Either, I have rotated #key-0 to new public key bytes at this moment in time, or I am horribly compromised, and should not be trusted, because I have terrible opsec.

TL;DR.... don't reuse key identifiers ever.... don't issue credentials that will always verify regardless of the "latest" version of their root... rotate keys as frequently as you can... if you need to rotate them faster than you want to update your did document... don't put them in your did document... put them behind a service endpoint, like:

did:example:123?service=myEphemeralKeyServer&relativeRef=/keys/GUID#JWK-THUMBPRINT

Don't ask a verifier to do anything but dereference a string and verify a proof with the public key.... If you hand them a thing which will always verify, you have handed them a homework problem (was this key ever compromised?), not a credential / id_token....

DO provide an audit log of all keys ever associated with an identifier, and when they were added, removed, updated...

DO NOT say why they were added, removed, updated.... (compromise / recovery from compromise is one reason... are there others?)

DO make that audit log cryptographically secured (KERI / Hashed linked list, etc...)

And of course, feel free to ignore security best practices if you think you know better :)

SmithSamuelM commented 3 years ago

@OR13 This is good. Which rule you use determines how your authorization system works. You are clearly coming down on the side of rule 2) below, i.e. the security-token OIDC rule. Any method should be free to decide which rule it uses. My concern was that no one was clearly specifying what set of rules they were using.

Just so someone can better understand I am copying the rules here as they are originally way up in another thread.

Authorization Rules/ Models

Rule 1) Persistent Authorization Model

A signed statement using the current authoritative set of keys at the time of the signature is valid until revoked or rescinded with another signed statement. This means that merely rotating keys does not revoke or rescind the validity of prior signed statements. Otherwise every time you rotate keys you would have to reaffirm (reissue) every prior statement signed with the now-obsolete keys.

Rule 2) Ephemeral Authorization Model

All statements issued/signed with a given set of keys are automatically revoked when the authoritative keys are rotated. This view (rule 2) of mandatory revocation (reissuance) is a common rule in token-based security approaches, where all tokens issued under a given set of keys are automatically revoked when you rotate keys.

In order for Rule 1) to be practical one needs to maintain a ledger or log of statements signed with a given set of keys or at least a cryptographic commitment to the hashes of the log of statements (merkle tree or hash chained data structure) so that one can verify that a statement was signed with the then authoritative set of keys.

So if one is not using a log (ledger, etc) of signed statements then Rule 1) is unworkable and Rule 2) is the reasonable one.

A hybrid would be:

Rule 3) Some Authorizations are Persistent and Some are Ephemeral Model.

In this model, only logged signed statements use rule 1) and all other signed statements use rule 2). Any presentation of a signed statement must include a reference to its location in the log to determine the authoritative keys at the time (location) in the log. If the log reference is absent then one checks the current authoritative keys and if they differ then the signed statement is stale (invalid).
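
These three rules can be stated as a small verifier-policy sketch (all names and data shapes here are hypothetical; `signed_log` stands in for whatever verifiable log a method provides):

```python
# Hypothetical sketch of Rules 1-3. "signed_log" records which statements
# were signed while each key set was authoritative; "current_keys" is the
# key set after any rotations.

def verify(statement, current_keys, signed_log=None, rule=2):
    key = statement["signed_with"]
    if rule == 2:
        # Ephemeral: only the current authoritative keys count.
        return key in current_keys
    if rule == 1:
        # Persistent: valid if logged under the then-authoritative keys.
        return any(key in keys and statement["id"] in stmts
                   for keys, stmts in signed_log)
    if rule == 3:
        # Hybrid: logged statements are persistent, unlogged ones ephemeral.
        ref = statement.get("log_ref")
        if ref is not None:
            keys, stmts = signed_log[ref]
            return key in keys and statement["id"] in stmts
        return key in current_keys

signed_log = [({"k0"}, {"vc-1"}),   # vc-1 signed while k0 was authoritative
              ({"k1"}, set())]
current_keys = {"k1"}               # k0 has since been rotated away

old_vc = {"id": "vc-1", "signed_with": "k0", "log_ref": 0}
assert not verify(old_vc, current_keys, rule=2)              # rotation revokes it
assert verify(old_vc, current_keys, signed_log, rule=1)      # rotation is not retroactive
assert verify(old_vc, current_keys, signed_log, rule=3)      # logged, so persistent
```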

Clearly the issuer-holder-verifier model of VCs is problematic with respect to rule 2), especially at scale, because you have coupled your key management (rotation, recovery) to the expiration rules for your VCs. The issuance of large numbers of credentials, especially credentials that are time-expiring, becomes unwieldy.

A DID method should specify which rule, 1), 2), or 3), is to be used to verify signed statements associated with the keys for that DID.

The DID Doc is one of the types of signed statements, not simply VCs.

But in order to support 1) or 3), a verifiable log is required.

Each DID method should explicitly define which rule, 1), 2), or 3), is to be used when verifying signed statements. As far as I know, no DID method explicitly does this. It's implied.

@OR13 What it looks like is that you are advocating rule 2) for DID Docs as signed statements but rule 1) for VCs, which means you are using a type of rule 3). Is that correct?

SmithSamuelM commented 3 years ago

@OR13

Treat key rotation as equivalent to credential revocation when the bytes of the new key don't match the old and the same identifier is used... that's what OIDC does.... and it's the reality of what verification will mean for the latest DID Document, regardless of feelings on the subject.

So if I understand correctly, you are advocating a policy that requires one to reissue all verifiable credentials issued against an identifier whenever you rotate the keys in a DID:Doc.

Second best solution? Support versioning and issue credentials from a "version" of a DID Document... and let the verifier decide if it matters whether it's the latest or not.... obviously... this imposes a burden on the verifier.... imagine if every OIDC RP had to decide if an access_token should be accepted based on whether the key had ever been in the well-known jwks...

If I understand, you are making the authoritative root-of-trust your DID Document version. This means that the delivery mechanism for DID Docs by version must be secure. It's not merely an auditing issue. But as long as you have a mechanism for locking down the control authority (such as KERI) for signing DID Doc versions, then you can use the DID Doc as an authorization mechanism.

TL;DR.... don't reuse key identifiers ever.... don't issue credentials that will always verify regardless of the "latest" version of their root... rotate keys as frequently as you can... if you need to rotate them faster than you want to update your did document... don't put them in your did document... put them behind a service endpoint, like:

did:example:123?service=myEphemeralKeyServer&relativeRef=/keys/GUID#JWK-THUMBPRINT

So what you are proposing here is that one should use ephemeral identifiers to issue credentials from service endpoints. So your root-of-trust becomes your service endpoint, not a self-certifying identifier. Rotating keys means rotating ephemeral identifiers, and the constant is the service endpoint. This means that the security of the service endpoint is now your root-of-trust. You can lock down the service endpoint as long as the verifier knows how you are locking it down. But what is the root-of-trust of your lockdown mechanism? If it's another service endpoint, then you have just recreated the infinite-regress problem that is the current CA mechanism.

A perfectly valid way to use ephemeral SCIDs is to rotate the keys by rotating the identifier. This requires some registry that maps the role/function of the identifier=keys to the authorization. One can use a DID Doc for this, but now the root-of-trust becomes the DID Doc as verifiable data registry. When the DID Doc is exposing service endpoints as the ephemeral identifier, not a SCID, then the service endpoints must be separately locked down. The two-step indirection of issuing from the service endpoint as root-of-trust is also punting the security to how you are locking down your service endpoint.

Some root-of-trust in your system must be persistent, otherwise it's not a root-of-trust. A ledger is valid, or a SCID+log like KERI. But ultimately, if your root-of-trust supports key rotation for its security, then you have to pick a rule. If you back out key rotation on that persistent root-of-trust, you come full circle to the original problem. The indirection to a service endpoint doesn't ultimately solve the problem of your root-of-trust; it just pushes it somewhere else.

There is no free lunch here. Either all changes in control authority are end-verifiable to a fixed root-of-trust or they are not. Any indirections have to resolve to something that is end-verifiable to a fixed root-of-trust. If it's not end-verifiable to a fixed root-of-trust, then you have recreated the trusted-entities problem.

OR13 commented 3 years ago

I'm sure my preferences won't match everyone else's.

DPKI

Layer 0 - Immutable ordered events, linked and signed.
Layer 1 - "ReadModel": CQRS-style eventually consistent set of "keys".
Layer 2 - "ReadModel Versions": can ask for a version of the read model as of a moment in time / set of events... but that's not the current set of authoritative keys...

Credential / Tokens / Assertions / Proofs

Layer 0 - verification material is identified with strings.
Layer 1 - verification material is identified with strings that contain version information.

99% of folks are probably better off using DPKI Layer 1 and Credential Layer 0 only.... with identifiers that always point to the current version, and running 100% of the DPKI infrastructure themselves....

Of course they must still trust the tool chain, ledger, os, etc....

I am not suggesting using service endpoints for everything... I am suggesting rotating keys as quickly as possible, and considering all credentials / tokens / proofs invalid when they fail to verify against the latest version of a did document...

If updating your DID Document is expensive / takes a long time.... you may want to rely on service endpoints... and DID URLs... and the authority remains with the controller, because the DID controller determines the service endpoint... and they can set it to IPFS / Tor onions, HTTPS.... it's up to them.

I am recommending against creating things that will "always verify"... because they will be "indistinguishable from compromised activity" when the private key is leaked, in the absence of some authoritative timestamp for when the compromise happened... and who will be the root of trust for that information?.... but not everyone will agree, that's fine.

SmithSamuelM commented 3 years ago

@or13 The control authority of the DID controller must be end verifiable always or the DID controller becomes just a euphemism for trusted entity.

agropper commented 3 years ago

It's a philosophical issue as to whether an identity is tied to authentication or authorization. I tend to think of that as a false dichotomy - it's always both and we should design DID security and privacy from that perspective.

OR13 commented 3 years ago

@OR13 The control authority of the DID controller must be end verifiable always or the DID controller becomes just a euphemism for trusted entity.

I assume you will audit 100% of the assembly code produced by your rust compiler for KERI. Once you have done that, I invite you to repeat the process for your OS that runs the assembly.... :)

The did controller is a trusted entity... in that you trust them not to have trash opsec... a KERI identifier for a dude who tweets private keys remains untrustworthy....

SmithSamuelM commented 3 years ago

@OR13

I assume you will audit 100% of the assembly code produced by your rust compiler for KERI. Once you have done that, I invite you to repeat the process for your OS that runs the assembly.... :)

That's an interesting comment. I just spent a lot of time recently in a discussion with someone from the TEE/Trusted Computing Group community whose position is that one MUST use TEEs for all implementations of any signature-verification instances. The answer to this question is that yes, one definitely may benefit from such an approach for the reason you just cited. But the better answer is that one may use what are called threshold structures to compensate for varying degrees of vulnerability in your verification infrastructure. For example, threshold structures show up in MFA, multi-sig, and distributed consensus. So one can use a pool of verifiers (in KERI, watchers) to compensate for not using a TEE. The threshold structure allows one to multiply the degree of difficulty of an attack without making each attack surface individually more secure. This early paper by Szabo gives a good explanation: Szabo, N., "Advances in Distributed Security," 2003 https://nakamotoinstitute.org/advances-in-distributed-security/

So yes, one can always dig deeper into the hardware/software/firmware infrastructure for verification and say you are trusting that, and that may not be secure. But practically, the degree of difficulty of an attack inverts at some point. So picking where you draw the line, and then employing a threshold structure of sufficient multiplicity, allows one to compensate for any lack below that line. For example, the Ethereum Gnosis multi-sig wallet has billions of assets under its protection, but so far there have been no successful public exploits (unlike the Parity wallet), whereas there are numerous exploits of single-sig wallets of all types. So using a multi-sig multiplies the difficulty sufficiently to make a successful attack exponentially more difficult. An appropriate design finds the right level where the difficulty to the attacker becomes hard enough, given the value of the attack reward, to be secure. For many, TLS using DNS/CA is secure enough. Therefore, if all that the use of DIDs is accomplishing is to avoid the renting-of-my-identifier problem, and it is not materially contributing to any advances in security, then one needs to publicize that. But if one is promising the security of a ledger as an advantage of using DIDs, but then abrogates it by relying on DNS/CA TLS endpoint security, then it's false advertising.

The did controller is a trusted entity... in that you trust them not to have trash opsec... a KERI identifier for a dude who tweets private keys remains untrustworthy....

Absolutely agree. The assumption is that secrets remain sufficiently secret.

The point is not that. The point of decentralized identity for security is: It’s much easier to secure one's own keys well than to secure everyone else's internet computing infrastructure well.

The point of decentralization is that security is under the control of the controller not anyone else. If the controller is an idiot then they get what they deserve. But when your security is under the control of some other entity's op-sec then you are vulnerable to stuff you have no control over. If you are happy with some "trusted" entity's op-sec then why are you using DIDs in the first place? Save yourself the trouble and just use OIDC with DNS.

SmithSamuelM commented 3 years ago

To further elaborate.

If I use an end-verifiable infrastructure, then I can take advantage of the continuing advances in lightweight TEE technology. For example, the DICE standard from the TCG:

TCG, “Implicit Identity Based Device Attestation,” Trusted Computing Group, vol. Version 1.0, 2018/03/05 https://trustedcomputinggroup.org/wp-content/uploads/TCG-DICE-Arch-Implicit-Identity-Based-Device-Attestation-v1-rev93.pdf

Provides for inexpensive TEEs where the firmware is protected by a SCID construct (what they call an implicit identity is equivalent to a SCID). If the firmware changes, then on the next power-up the hardware-based identifier will change, since it is derived from a hardware pseudo-random number generator that is triggered by a firmware change. This means that the TEE can make a verifiable attestation to the firmware it is running. So for example, if the firmware is acting as a signature verifier (i.e. a KERI watcher), then if someone hacks the verifier it is detectable. As this tech gets cheaper and more common, one immediately benefits without changing anything in the protocol, just by choosing different watchers (verifiers) that are now TEE-protected.

SmithSamuelM commented 3 years ago

We are not going to replace the internet's security model all at once. It will happen gradually. But in order to fully replace it we have to have a complete security model that allows us to fully replace it.

OR13 commented 3 years ago

We are not going to replace the internet's security model all at once. It will happen gradually. But in order to fully replace it we have to have a complete security model that allows us to fully replace it.

Agreed,

  1. key rotation is a good thing.
  2. key revocation is a good thing.
  3. establishing that the key material is cryptographically linked to the key identifier is a MUST for secure systems.
  4. it's a really good idea to have a full audit log.
  5. "verification" is not just "do the signatures match"... it's what is the trust context for this... how old is this, how good is the opsec of the issuer, etc....

5 is IMO out of scope for DID Core... it's where governance and other things come into play.

As long as DID Core allows both:

did:example:123#key-0 => dbDmZLTWuEYYZNHFLKLoRkEX4sZykkSLNQLXvMUyMB1

and

did:example:123?version=123123123#key-0 => dbDmZLTWuEYYZNHFLKLoRkEX4sZykkSLNQLXvMUyMB1

Both forms of issuance are supported.
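
As a toy illustration of the two dereferencing behaviors above (the version store and resolver here are hypothetical, not DID Core machinery):

```python
# An unversioned DID URL always dereferences to the CURRENT key bytes,
# so rotation changes what verifies; a versioned DID URL pins the bytes,
# so a signature against it will verify forever. "versions" is a stand-in
# for a method's historical DID Document store.
from urllib.parse import parse_qs, urlparse

versions = {  # version -> fragment -> public key bytes (illustrative values)
    "123123123": {"key-0": "dbDmZLTWuEYYZNHFLKLoRkEX4sZykkSLNQLXvMUyMB1"},
    "latest":    {"key-0": "zNewKeyBytesAfterRotation111111111111111111"},
}

def dereference(did_url: str) -> str:
    parsed = urlparse(did_url)
    version = parse_qs(parsed.query).get("version", ["latest"])[0]
    return versions[version][parsed.fragment]

# Unversioned: current bytes only -- rotation invalidates old signatures.
assert dereference("did:example:123#key-0") == versions["latest"]["key-0"]
# Versioned: pinned bytes -- verifies regardless of the latest document.
assert dereference("did:example:123?version=123123123#key-0") == \
       versions["123123123"]["key-0"]
```

The burden OR13 describes falls out directly: with the versioned form, the verifier, not the resolver, must decide whether "not the latest" matters.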

I personally think allowing key identifiers like #key-0 or #my-terribly-chosen-pii-identifier is a bad thing... but I expect that ship has sailed for did core already, and we must look to DID Methods, like did:key or KERI for that.

brentzundel commented 3 years ago

This has been a long and fantastic conversation. My thanks to all participants. My question to @dhh1128 (and to everyone else on the thread, if you have an opinion): Is there specific spec text that should be added (or removed) to better clarify the spec such that this issue might be resolved?

jandrieu commented 3 years ago

One more note to figure out how we might adjust the specification.

I actually agree with @dhh1128 and @SmithSamuelM that we should be able to verify credentials signed by rotated keys while stopping verification of those with revoked keys.

I'm working on the use case document now and there are use cases in here that need this flexibility.

The problem is that the current definitions of verification in the VC spec don't address this and the current language in the DID Spec does not support it.

As a nudge for @dhh1128 and @SmithSamuelM to get closer to a PR, perhaps we can add something like this:

  1. Define rotate and revoke, the first as meaning the key should still be considered valid for credentials issued in a specific time window and the latter as meaning the key should not be considered valid for any operations, past or future.
  2. Add a means to either (a) list, (b) list by reference, or (c) refer to a list of rotated keys with their presumed-valid timeframe.
  3. Clarify that the primary means of revocation is removal from the DID Document
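
To make item 2 concrete, here is one hypothetical shape such a mechanism could take; the "rotatedKeys" property and its fields are invented for illustration and exist in no spec today:

```python
# Hypothetical DID Document listing rotated keys with presumed-valid
# timeframes. Revoked keys simply do not appear anywhere in the document.

did_doc = {
    "id": "did:example:123",
    "assertionMethod": ["did:example:123#key-1"],   # current key
    "rotatedKeys": [{                               # invented property
        "id": "did:example:123#key-0",
        "validFrom": "2019-01-01T00:00:00+00:00",
        "validUntil": "2020-06-01T00:00:00+00:00",
    }],
}

def key_valid_at(doc, key_id, signed_at: str) -> bool:
    if key_id in doc["assertionMethod"]:
        return True                                 # current key: valid now
    for k in doc.get("rotatedKeys", []):
        if k["id"] == key_id:                       # rotated: valid in window
            return k["validFrom"] <= signed_at <= k["validUntil"]
    return False                                    # absent: revoked

# A credential signed with key-0 inside its window still verifies...
assert key_valid_at(did_doc, "did:example:123#key-0", "2019-06-15T00:00:00+00:00")
# ...but not one purportedly signed after the rotation.
assert not key_valid_at(did_doc, "did:example:123#key-0", "2021-01-01T00:00:00+00:00")
```

(String comparison of the timestamps works here only because they share one fixed ISO 8601 format and offset.)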

If we can add a mechanism to distinguish these two key updates, then I believe we can support the concerns raised in this issue.

However, I tried getting support for that in #168, but failed.

Another (potentially flawed) option would be to change the semantics of verification for DIDs that have a timestamped ledger, to explicitly allow for the use of keys in something other than the current DID Document, specifically keys in an older version of that DID Document.

A few potential flaws:

  1. The timestamp of DID Document changes will not map well to the actual incidents of compromise, except in the rare case where the compromise is immediately known and the document immediately changed. It is far more likely that a key is compromised for some time before it is realized that it has been compromised. In fact, the most likely way to discover such a compromise is evidence of use by an apparently authentic, authorized party, such as a university seeing a digital diploma they supposedly issued for someone who was never a student. This means that in MOST cases, the first opportunity to revoke a set of keys is AFTER those keys have been used in bad faith. So the timestamp on the DID Document can't address known bad credentials.

  2. You can't know the reason the key was changed. Was it revoked? Or rotated? These cases need to be handled differently, and I don't believe we can achieve that without the ability for the controller to state exactly which one occurred, in a manner that can be retrieved by a requesting party.

  3. It would mean there would be no way to rotate keys in a DID Method that is not temporally queryable and authoritative. You can revoke by replacing the current keys, but without something like a distributed ledger, you wouldn't be able to authoritatively look into past states of the DID Document.
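The first flaw can be made concrete: under a purely timestamp-based rule, any signature dated before the DID Document change verifies, including signatures an attacker made during the undetected compromise window. The timeline and helper below are hypothetical, chosen only to illustrate the gap.

```python
# Timeline for the compromise scenario, using integer timestamps.
KEY_COMPROMISED_AT = 100    # attacker obtains the key (unknown to controller)
BAD_SIGNATURE_AT = 150      # attacker signs a fake diploma
ROTATION_RECORDED_AT = 300  # controller notices and updates the DID Document

def timestamp_rule_accepts(signed_at: int, rotated_at: int) -> bool:
    """The purely timestamp-based rule: any signature dated before the
    DID Document change is treated as made with a then-valid key."""
    return signed_at < rotated_at

# The forged signature predates the rotation, so the rule accepts it,
# even though it was made in bad faith after the compromise.
assert timestamp_rule_accepts(BAD_SIGNATURE_AT, ROTATION_RECORDED_AT)
assert KEY_COMPROMISED_AT < BAD_SIGNATURE_AT < ROTATION_RECORDED_AT
```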

For the first two reasons, I don't believe there is enough information in the timestamping option available with some DID methods to properly handle the differences between revocation and rotation. And the third reason leaves a fairly big gap in the useful family of DID Methods.

Maybe there is some new option, perhaps thanks to KERI or some other insight I haven't figured out yet. I'm familiar with the pre-rotation concept @SmithSamuelM has championed, but I'm not seeing how that helps here.

To be clear: I get the value in the ask. I just don't see a concrete proposal, much less consensus, on how we would support what @dhh1128 and @SmithSamuelM want.

I was going to add the tag "Ready for PR" but I don't think we have agreement on how to address the need.

Btw, one small note for @OR13: in the VC spec we were careful to define and distinguish between "verification" as meant for "verifiable credentials" and "validation". Verification was limited to the cryptographic or procedural checks that can be computationally performed, independent of the business rules of the verifier, to establish a particular range of assurances about the credential, namely: it was issued by a specifically identified issuer, it has not been tampered with since issuance, and it has not been revoked.

Questions of whether or not the issuer is deemed authoritative for the claims, or whether or not the presenter of the credential has a suitable relationship to the Subject, are business rule determinations we grouped into "validation". So, a VC verifies if the math and process check out, but its validity for any particular purpose remains up to the Verifier to determine.

So, yes, your #5 is half-right. "Validation" is definitely out of scope for DIDs. But "verification" as meant for VCs is vital if we are to support the range of use cases we've identified for DIDs.
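The verification/validation split described above could be sketched like this. All of the names here are illustrative stubs, not API from the VC or DID specs:

```python
# Stand-ins for the computational checks verification comprises.
def issuer_identified(cred: dict) -> bool:
    return "issuer" in cred

def signature_intact(cred: dict) -> bool:
    # Stand-in for a real cryptographic proof check.
    return cred.get("proof") == "valid"

def is_revoked(cred: dict) -> bool:
    return cred.get("revoked", False)

def verify(cred: dict) -> bool:
    """Verification: math-and-process checks a machine can perform
    independent of any verifier's business rules."""
    return issuer_identified(cred) and signature_intact(cred) and not is_revoked(cred)

def validate(cred: dict, rules) -> bool:
    """Validation: verification plus verifier-specific business rules,
    e.g. whether the issuer is authoritative for these claims."""
    return verify(cred) and all(rule(cred) for rule in rules)

diploma = {"issuer": "did:example:university", "proof": "valid"}
only_known_issuers = lambda c: c["issuer"] == "did:example:university"

assert verify(diploma)                           # the math checks out...
assert validate(diploma, [only_known_issuers])   # ...and this verifier trusts the issuer
assert not validate(diploma, [lambda c: False])  # verifies, but fails a business rule
```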

dhh1128 commented 3 years ago

I will raise a PR today that attempts the solution @jandrieu recommends.