w3c / did-extensions

Decentralized Identifier Ecosystem Extensions
https://w3c.github.io/did-extensions/

Explanation on the DID Methods in the registries' document #83

Open iherman opened 4 years ago

iherman commented 4 years ago

At present, §6 in the document is clearly different from the others. I presume the process described in §3 is not directly relevant for the methods; the table contains a column ("Status") whose meaning is not clear; and there is no further explanation. It is good to have this registry here, and I know it has a different origin from the other sections, but I believe it needs some urgent editorial care...

TallTed commented 2 years ago

[@kdenhartog] This comment above mine reads like spam that seems unrelated to the discussion. @iherman am I allowed to just delete it (I have the permissions to do this)?

[@iherman] I believe we should consider comment threads the same way as we handle email threads at W3C in this respect. The overall policy for those is to be extremely reluctant to remove anything from the archives (barring very exceptional cases); the same should be true here imho.

Note that the comment referred to by @kdenhartog has in fact been deleted (@kdenhartog was not referring to @rxgrant's https://github.com/w3c/did-spec-registries/issues/83#issuecomment-949772645). I think it likely this was done by GitHub admins, as they have tools for reporting such content (look under the three dots at upper-right of any comment) and/or users (look to the bottom of the left-hand column of any GitHub user profile page), which I had used to report that comment before @kdenhartog added his.

In general, I concur with @iherman that deletion should be extremely rare, but that can only be achieved if repo admins or the like can easily hide (which should give any reader the option to reveal it for themselves at any time) and unhide such apparently-noise content to minimize its distraction effects ... and if the GitHub tools can be disabled, such that reports like mine don't lead to deletion of content the repo admin just wants to hide.

brentzundel commented 2 years ago

@kdenhartog @iherman @TallTed I hid the comment referred to in your conversation, and when I did there was an option to unhide it.

Today, I no longer see the comment at all and am not sure why that is. Deleting a comment should create an event in the timeline saying that the comment was deleted and by whom.

TallTed commented 2 years ago

Today, I no longer see the comment at all and am not sure why that is. Deleting a comment should create an event in the timeline saying that the comment was deleted and by whom.

@brentzundel -- Might be worth some followup with the GitHub powers-that-be? I'm betting it's their tooling and/or intervention that deleted it. The question is whether that should leave no trace, as now, or should leave similar evidence to what would be there if one of us GitHub users (at whatever place in the repo's privilege hierarchy) deleted it. My understanding is that GitHub itself is Git-based, so the deletion should be just another commit in the stack, and thus displayable....

rxgrant commented 2 years ago

As mentioned in today's WG call, I see a registry column that announces any standardization process underway as unobjectionable, so long as an answer is not required for a DID method to be listed.

I agree with @OR13's point that requesting the data is an excellent way for Rubric evaluation authors, and end users, to become better informed about the DID Method.

OR13 commented 2 years ago

The more I think about this, the more opposed I am to embedding value judgments in the did spec registries... including "v1 conformance"... Since we can't really confirm this, it seems dangerous to say anything about a registered method other than linking to its spec, and possibly the rubric entry for it....

I think we should keep all forms of evaluation (including recommended status or conformance status) to the did rubric.... and keep the did spec registries a pretty boring list of URLs.

msporny commented 2 years ago

v1 conformance

The automated check I had in mind had to do with whether or not there was an entry for the DID Method in the DID Test Suite report. The individual would submit a link as a part of the registration process... so perhaps a better term is "implemented" or "test report exists" or something more objective.

I'd like us to not get too wrapped up in what we call the objective measure just yet (as we can always change that), and rather, focus on what the objective measures are (which in my mind, are "links to things").

For example: link to the specification, link to a code repository that implements the method, link to the DID Method results in the DID Test Suite, link to live resolver data (to demonstrate that a live network exists for the DID Method) ... and so on.
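A registration entry along these lines might look like the following sketch. Every field name and URL below is an illustrative assumption for discussion purposes, not a schema the group has agreed on:

```python
import json

# Hypothetical "links to things" registration entry; all field names
# and URLs are illustrative, not part of any agreed schema.
entry = {
    "name": "did:example",
    "specification": "https://example.org/did-example-spec",
    "implementation": "https://example.org/did-example-impl",
    "testSuiteReport": "https://example.org/did-test-suite/example",
    "liveResolver": "https://resolver.example.org/did:example:123",
}

# Per the discussion, only the spec link would be mandatory; the rest
# are optional, objective attributes a table could be sorted on.
required = {"name", "specification"}
missing = required - entry.keys()

print(json.dumps(entry, indent=2))
print("missing required fields:", sorted(missing))
```

Because every attribute is just a link, editors could check entries mechanically rather than making judgment calls.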

talltree commented 2 years ago

I am getting more comfortable with @msporny's suggestion that a DID method registration consist entirely of a filled-out JSON template of "links to things" with two caveats:

  1. None of the links can produce a 404.
  2. The baseline for including a registry entry is a link to a DID method specification, and this is the one area where I believe the registry maintainers should make a judgment on whether the specification as a document complies with the requirements in section 8 of DID 1.0. That doesn't mean the registry maintainers need to check the validity of all statements in the specification, just that the document itself factually meets the requirements.

I believe this is how we keep a baseline of quality in the DID method registry (albeit a pretty low baseline).
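Caveat 1 ("none of the links can produce a 404") could be automated. The following is a minimal sketch using only the Python standard library; the link-field convention it checks is a hypothetical, not an agreed registry schema:

```python
# Sketch of an automated check for caveat 1: no registered link may
# answer with a 404 (or any other error). Field conventions here are
# illustrative assumptions, not an agreed schema.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def status_is_dead(status: int) -> bool:
    # Treat any 4xx/5xx response (404 included) as a dead link.
    return status >= 400

def is_dead_link(url: str, timeout: float = 10.0) -> bool:
    # Issue a HEAD request; unreachable hosts also count as dead.
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return status_is_dead(resp.status)
    except HTTPError as err:
        return status_is_dead(err.code)
    except (URLError, OSError):
        return True

def dead_links(entry: dict) -> list:
    # Check every http(s)-valued field of a registration entry.
    return [url for url in entry.values()
            if isinstance(url, str) and url.startswith("http")
            and is_dead_link(url)]
```

A CI job running a check like this on every registration PR would keep the "no 404s" baseline without any manual effort from the maintainers.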

kdenhartog commented 2 years ago

Position change from me incoming:

I've been watching some of the discussions in the did-wg-charter on what a "quality" did method is and the effects of picking and choosing winners via the standardization process. It's become clear to me that while standardization can be a clear way to identify quality methods, it should not be the only one, because it's an inherently biased process. It's also likely that standardization will be used to promote or tarnish the brand of a particular method for the majority of people who want to rely on dids but not join us in the mud to debate and critically assess. Instead, I suspect many people who don't want to deeply evaluate the merits of many did methods will defer to the authority of people they deem experts, and that effectively means looking at the registry to decide which method should be chosen. I consider the tradeoffs here likely to be more harmful in the long term than the short-term problems I'm faced with when trying to evaluate whether a did method is something I should advocate implementation support for.

Given the way I'm watching this play out, I'm changing my position and consider it acceptable to go ahead with the limited number of status categories that can be automated for now, until we can find suitable ways to objectively indicate the quality of a method without intentionally promoting or tarnishing a method's brand.

iherman commented 2 years ago

The issue was discussed in a meeting on 2021-10-28

View the transcript

### 4. DID Method Registration

_See github issue [did-spec-registries#83](https://github.com/w3c/did-spec-registries/issues/83)._

> *Kyle Den Hartog:* #83 is the ongoing issue about this topic.

**Brent Zundel:** What specifically do we have to do to make the registry process as straightforward and clear as possible, both for those who register and for those who look at it?

**Manu Sporny:** This concrete proposal could address a number of challenges we have had with DID method registration.
… There are complaints that we are not being strict enough about who can register. This was by design in the beginning; we wanted a low barrier of entry.
… This has created a problem: people can't tell the difference between DID method registrations.

> *Drummond Reed:* The challenge is QUALITY.

**Manu Sporny:** What are the "good" ones that have way more implementation experience than e.g. someone's weekend project?
… We don't want to put a huge burden on those who register either.
… If we do an attribute-based registration process, e.g. this DID method has a specification, this specification has an implementation, it has a testnet, etc. These are clear yes/no questions.

> *Brent Zundel:* this did method passed the did core test suite?

**Manu Sporny:** If we do that, we can annotate the DID method registry in an objective way.
… We could add tiny JSON files to registrations that are used to render tables.
… This could make the process more manageable and objective.

> *jyasskin:* +1 manu.

**Kyle Den Hartog:** +1 to manu, that's a really good starting point.

> *Ryan Grant:* +1 to attribute-based registration process.

**Kyle Den Hartog:** My frustration is that it doesn't get us the full way there to decide what's a "quality" DID method.
… There is a need for better specifications. Many methods have security considerations that are a single sentence. Implementation guidelines sometimes just point to a single library.
… Rather than us deciding on quality, we lean on standards organizations that have WGs that can look at methods.
… E.g. if a certain method has gone through a standardization process, it achieves a higher status.

**Drummond Reed:** Encourage people to contribute to the Github issue.
… We should have a process that is as objective as possible, but it should also have an objective quality bar. E.g. simply pointing to a specification is not enough; some of those are very lacking.

> *Brent Zundel:* maybe the JSON could also point to a rubric evaluation.

> *Manu Sporny:* +1 to that, brent.

**Drummond Reed:** We wanted to be inclusive in the beginning. I've been an advocate of keeping the current table, but starting a new table that has a baseline bar. You must revise your specification for all DID Core 1.0 requirements, and you can't handwave at Security+Privacy Considerations.

> *Kyle Den Hartog:* +1, that seems like a potential quality metric if we're not going to be able to achieve consensus on the reliance of standards bodies.

> *Kristina Yasuda:* Agree with how Manu framed the problem statement, and +1 to Kyle that we need to do more than the initial proposal. There is a need for an organized, structured process/body of people reviewing what gets accepted as a DID method.

**Drummond Reed:** I don't think it's going to be a large burden, but you should only go into the new table if you are 1.0 compliant.
… Then our attention should be on objective characteristics on which registry maintainers could make objective decisions.
… DID method authors should be free to standardize wherever they want. We should encourage the process of maturing DID methods, so that the market can compete.

**Orie Steele:** I agree with some of what drummond said. Other things make me nervous. In Privacy+Security Considerations, there is sometimes only one sentence. Sometimes that's okay, and sometimes it is not.
… My experience is with JOSE/COSE registries. Merges into them are controlled by a set of individuals who establish consensus. The entries of terms point to a specification, which doesn't have to be at a specific standards organization.
… We're now at a point where we need a larger number of editors, with a higher number of required consents before we accept something.
… The JOSE/COSE registry is very successful; I hope we can be like that.
… The #1 way of improving quality is to add editors, and require all to approve.

> *Kristina Yasuda:* really well-said, Orie.

**Manu Sporny:** I wanted to respond to Kyle. I'm nodding in agreement with a lot. The original proposal is something we can execute on today.

> *Drummond Reed:* I mostly agree with Orie, but I don't think every registry maintainer should be required to approve every listing. Just a threshold.

**Manu Sporny:** With that proposal we will end up with either the same document or a better one that has labels, e.g.
… We don't have to strive for perfection right now.
… The proposal is such that it doesn't matter if we have 1 or 2 tables. We can generate them programmatically based on the data.

> *Drummond Reed:* +1 to generating the table(s) programmatically.

**Manu Sporny:** We have a concrete proposal in front of us that can give us immediate improvements that we can continue to iterate on.

> *Orie Steele:* drummond, we need accountability, otherwise a maintainer can never approve things... and still be listed as an editor... we need the burden to be shared equally.

**Ryan Grant:** Requiring validation from a standards organization is a difficult bar for some decentralized protocols.

> *Daniel Buchner:* +1 to Ryan's comment.

**Ryan Grant:** Some decentralized protocols are based on VDRs that disrupt traditional institutions.
… I'm a strong proponent of manu's objective criteria.

**Eric Siow:** This is a question that hopefully can educate me. Is this issue related to one of the objections (diverging instead of converging)?

> *Ryan Grant:* Eric_Siow, I think it is the essence of one of the objections.

**Eric Siow:** If that's the issue, then if the group can define a way to come up with objective methods, that might be helpful.

> *jyasskin:* +1 that non-standardized methods should be acceptable on the registry, just distinguished from ones that match manu's and kdenhartog's criteria.

> *Orie Steele:* limiting registration is not an objective, imo... letting the market pick methods is.

**Kyle Den Hartog:** Responding to manu, I wholeheartedly agree that editors should be able to handle this in a programmatic way. Managing this is a tragedy-of-the-commons problem. Leaning on a programmatic approach is better.
… A good litmus test of what is "high quality" is "can I produce an interoperable implementation just by reading the spec?". The test suite can help with this. Being able to lean on Rubric evaluations also gets us close to where I want us to get.
… We should reach a high bar, without excluding methods that can't go through a standards body.

> *Orie Steele:* See [this IANA registry for comparison...](https://www.iana.org/assignments/cbor-tags/cbor-tags.xhtml)

**Drummond Reed:** Wanted to respond to Eric_Siow's really good question. It's easy to look at a registry with ≈114 registered methods and see divergence. I want to make it clear that comparing DID methods to URIs/URNs makes sense in some parts (URI schemes, URN namespaces, DID methods), but they are also different.

> *Daniel Buchner:* +1 to Drummond.

**Drummond Reed:** This design was intentional. Every DID method is an attempt to provide a verifiable identifier using any combination of cryptography and VDR. There are many ways of doing that. We wanted to accelerate and standardize the competition. We built an abstraction layer on top of all of them; that's the primary reason for the specification.

> *Ned Smith:* We have a similar challenge working with the sea of cryptographic algorithms. Different algorithms have different purposes, so they are grouped by intended function. Beyond that, specs need to define negotiation of which algorithm to use.

> *Manu Sporny:* +1 to what Drummond is saying.

**Drummond Reed:** We want the market to choose and let the best DID methods rise to the top. This is different from encouraging divergence.

**Eric Siow:** Can you standardize the ones that have some objective measure (e.g. widely implemented and used), while those that are not widely used could be standardized later?

**Drummond Reed:** I wanted to talk about standardization. The existence of a standard (effort) associated with a DID method is another one of those objective criteria. I want to see W3C standardize more DID methods, but some DID methods are also going to happen elsewhere.
… I don't think you should HAVE to standardize a DID method.

> *Joe Andrieu:* +1 to decentralized innovation.

**Drummond Reed:** The marketplace can develop DID methods anywhere they want, but we want an objective process for adding them to the registry. If there is a standard, then we will have a way to point to it.

> *Ryan Grant:* See [relevant DID Rubric issue to discuss standardization (whether or not the DID WG requires anything here)](https://github.com/w3c/did-rubric/issues/63)

**Drummond Reed:** Once we improve the quality of the registry, that will help the market make its decisions.

> *Kyle Den Hartog:* +1 to not requiring them. It's worth stating too that while I believe a standards body can be a way to display quality, it's not the only one. Another example metric that can help evaluate quality is the number of implementations submitted to a test suite.

> *Orie Steele:* See [the charter issue raised here](https://github.com/w3c/did-wg-charter/issues/17)

**Drummond Reed:** There are also many URI schemes.

**Manu Sporny:** We optimized the registry to learn early about DID methods that are being created.
… We can provide signals in the registry that tell you whether or not a DID method has reached a certain level of maturity.

> *jyasskin:* The IETF has a history of putting too high a bar on acceptance to some of their registries, and I believe they mostly regret that. So +1 to manu.

**Manu Sporny:** I want to push back hard against making it harder for people to register DID methods. It should be easy to sort by criteria that matter to people.

> *Daniel Burnett:* +1, we want to ensure that experimental did methods can get registered.

> *Manu Sporny:* not if we make it all optional for registration :)

**Orie Steele:** We can't sort on criteria unless we require people to provide them, which will make it harder for people to register.

> *Manu Sporny:* The only mandatory thing for registration is a spec that meets DID Core... everything else is optional.

> *Drummond Reed:* I mostly want to see the baseline criteria for registration be a v1.0-compliant DID method specification. All other registration attributes should be optional.

**Orie Steele:** The challenge I see is that the registry is attempting to do more than just being a registry. See JOSE/COSE, which is simple. If we add criteria, it will not just be about adding a link to a spec; it will also be about additional tasks for the editors.

> *Philippe le Hégaret:* See ["The Registry Track"](https://www.w3.org/2021/Process-20211102/#registries)

**Orie Steele:** To some degree, the Rubric has begun to capture some of the things we were also envisioning for the registry.

> *Drummond Reed:* +1 to the DID Spec Registries NOT being the place that you go to for advice and guidance on selection of DID methods. We want the market to compete on offering those services.

**Orie Steele:** It might be better to keep it a very boring registry, and refer to the Rubric for a better way to add comparison, sorting, etc.

> *Ryan Grant:* Orie: +1 both to adding a column allowing one to note a standards process underway (or achieved) in the registry, and to speaking to this more in the Rubric.

> *Drummond Reed:* Yes, I like the idea of adding a column for being able to point to one or more published evaluations against the Rubric.

> *Orie Steele:* maybe we can point from the registry to the rubric, instead of expanding the registry requirements, and move that consideration to the rubric.

**Brent Zundel:** I think we got some good data points. We seem to have agreement around a desire for registration to remain simple, to benefit those who are making those registrations happen (the editors).
… But we do need some way of making the registry easier to consume. A number of directions were proposed; I think we will be able to come to consensus.
… Thanks all for coming, we had some great conversations. Next week we will be back to our regular schedule of meetings.
… We invite you to join the DID WG.

> *Ryan Grant:* thanks everyone!

**Brent Zundel:** Thanks to scribes, thanks to all, see you next week.

---
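The transcript's proposal of generating the registry table(s) programmatically from tiny per-method data files could be sketched as follows. This is a minimal illustration in Python, not the group's actual tooling; the field names are the same hypothetical ones used above:

```python
# Hypothetical sketch of "tiny JSON files ... used to render tables":
# generate a markdown table from per-method registration entries.
# All field names are illustrative assumptions, not an agreed schema.

def render_table(entries):
    header = "| Method | Specification | Test report |"
    separator = "| --- | --- | --- |"
    rows = [
        "| {} | {} | {} |".format(
            e.get("name", ""),
            e.get("specification", ""),
            e.get("testSuiteReport", ""),
        )
        for e in sorted(entries, key=lambda e: e.get("name", ""))
    ]
    return "\n".join([header, separator] + rows)

print(render_table([
    {"name": "did:example", "specification": "https://example.org/spec"},
]))
```

Because the table is derived from data, splitting it into one or two tables (or adding a Rubric-evaluation column later) becomes a rendering decision rather than a manual editing task.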