iherman opened this issue 4 years ago
[@kdenhartog] This comment above mine reads like spam that seems unrelated to the discussion. @iherman am I allowed to just delete it (I have the permissions to do this)?
[@iherman] I believe we should consider comment threads the same way as we handle email threads at W3C in this respect. The overall policy for those is to be extremely reluctant to remove anything from the archives (barring very exceptional cases); the same should be true here imho.
Note that the comment referred to by @kdenhartog has in fact been deleted (@kdenhartog was not referring to @rxgrant's https://github.com/w3c/did-spec-registries/issues/83#issuecomment-949772645). I think it likely this was done by GitHub admins, as they have tools for reporting such content (look under the three dots at upper-right of any comment) and/or users (look to the bottom of the left-hand column of any GitHub user profile page), which I had used to report that comment before @kdenhartog added his.
In general, I concur with @iherman that deletion should be extremely rare, but that can only be achieved if repo admins or the like can easily hide and unhide such apparently-noise content to minimize its distraction effects (hiding should still let any reader reveal the content for themselves at any time) ... and if the GitHub tools can be disabled, such that reports like mine don't lead to deletion of content the repo admin just wants to hide.
@kdenhartog @iherman @TallTed I hid the comment referred to in your conversation, and when I did there was an option to unhide it.
Today, I no longer see the comment at all and am not sure why that is. Deleting a comment should create an event in the timeline saying that the comment was deleted and by whom.
@brentzundel -- Might be worth some followup with the GitHub powers-that-be? I'm betting it's their tooling and/or intervention that deleted it. Question is whether that should leave no trace, as now, or should leave similar evidence as would be there if one of us GitHub users (at whatever place in the repo's privilege hierarchy) deleted it. My understanding is that GitHub itself is Git-based, so it should be just another commit in the stack, so should be displayable....
As mentioned in today's WG call, a registry column that announces any standardization process underway seems unobjectionable to me, as long as an answer is not required for a DID method to be listed.
I agree with @OR13's point that requesting the data is an excellent way for Rubric evaluation authors, and end users, to become better informed about the DID Method.
The more I think about this, the more opposed I am to embedding value judgments in the did spec registries... including "v1 conformance" ... since we can't really confirm this, it seems dangerous to say anything about a registered method other than linking to its spec, and possibly the rubric entry for it....
I think we should keep all forms of evaluation (including recommended status or conformance status) to the did rubric.... and keep the did spec registries a pretty boring list of URLs.
v1 conformance
The automated check I had in mind had to do with whether or not there was an entry for the DID Method in the DID Test Suite report. The individual would submit a link as a part of the registration process... so perhaps a better term is "implemented" or "test report exists" or something more objective.
I'd like us to not get too wrapped up in what we call the objective measure just yet (as we can always change that), and rather, focus on what the objective measures are (which in my mind, are "links to things").
For example: link to the specification, link to a code repository that implements the method, link to the DID Method results in the DID Test Suite, link to live resolver data (to demonstrate that a live network exists for the DID Method) ... and so on.
I am getting more comfortable with @msporny's suggestion that a DID method registration consist entirely of a filled-out JSON template of "links to things" with two caveats:
I believe this is how we keep a baseline of quality in the DID method registry (albeit a pretty low baseline).
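As a sketch of the idea above: a registration could be a JSON template of "links to things", and the automated check would only verify that the links are present, with no value judgment about the method. All field names and URLs below are hypothetical illustrations, not the actual DID Spec Registries schema:

```python
import json

# Hypothetical registration entry: a JSON template consisting only of
# "links to things" (field names and URLs are illustrative assumptions,
# not the actual DID Spec Registries schema).
REGISTRATION = json.loads("""
{
  "method": "did:example",
  "specification": "https://example.org/did-example/spec",
  "implementation": "https://example.org/did-example/repo",
  "testSuiteReport": "https://example.org/did-example/test-report",
  "liveResolver": "https://example.org/did-example/resolver"
}
""")

# An automated, objective check: every required link is present and
# looks like an HTTPS URL. Nothing here evaluates the method's quality.
REQUIRED_LINKS = ["specification", "implementation", "testSuiteReport", "liveResolver"]

def is_registrable(entry: dict) -> bool:
    return all(
        isinstance(entry.get(field), str) and entry[field].startswith("https://")
        for field in REQUIRED_LINKS
    )

print(is_registrable(REGISTRATION))  # True
```

This keeps the registry a "pretty boring list of URLs": the check is mechanical, and any evaluation of what those links say is left to the rubric.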
Position change from me incoming:
I've been watching some of the discussions in the did-wg-charter on what a "quality" did method is and the effects of picking and choosing winners via the standardization process. It's become clear to me that while standardization can be a clear way to identify quality methods, it should not be the only one, because it's an inherently biased process. It's also likely that standardization will be used to promote or tarnish the brand of a particular method in the eyes of the majority of people who want to rely on dids but not join us in the mud to debate and critically assess them. Instead, I suspect many people who don't want to deeply evaluate the merits of many did methods will defer to the authority of people they deem experts, and that effectively means looking at the registry to decide which method should be chosen. I consider the tradeoffs here likely to be more harmful in the long term than the short-term problems I face when trying to evaluate whether a did method is something I should advocate implementation support for.
Given the way I'm watching this play out, I'm changing my position and consider it acceptable to go ahead with the limited number of status categories that can be automated for now, until we can find suitable ways to objectively indicate the quality of a method without intentionally promoting or tarnishing a method's brand.
The issue was discussed in a meeting on 2021-10-28
At present, §6 in the document is clearly different from the others: I presume the process described in §3 is not directly relevant for the methods; the table contains a column ("Status") whose meaning is not clear; and there is no further explanation. It is good to have this registry here, and I know it has a different origin from the other sections, but I believe it needs some urgent editorial care...