talltree opened 2 years ago
What happens if the self-assessment says "this registration is fully conformant", but in fact it isn't?
In any case, whether or not we introduce the self-assessment, I think editors still need to do their own assessment to evaluate basic completeness and conformance of the registration, as they have always done in the past. It is clear that with limited resources this can only be done on a "best effort" basis.
> A detailed written self-assessment
This could be combined with the earlier idea for an automated checklist of conformance factors, which might help separate the "do we have the resources?" question from the "is this about namespace collision or about acceptance signalling?" question.
As before, I'm inclined to accept criteria that can't be abused by editors to censor submissions, but to reject uncertain criteria that depend on an editor's whim. Spec submissions could work around the intent of any automated tool that aims to offer conformance clarity, but raising the bar on the effort required to submit a conformant specification would still lower the tension surrounding this issue.
If there is a tool, then the next question is whether it's fair to change the tool's requirements. I'm not sure this tooling would work, but I'm willing to review submissions that propose it.
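To make the automated-checklist idea concrete, here is a minimal sketch of what such a tool might check, assuming a hypothetical registry-entry shape (the field names are invented for illustration, not the registry's actual schema). The only rule below grounded in DID Core is the method-name ABNF, which permits only lowercase letters and digits.

```typescript
// Hypothetical conformance checklist for a DID method registration entry.
// The MethodRegistration shape and its field names are illustrative
// assumptions, NOT the registry's actual schema.

interface MethodRegistration {
  name: string;          // the method name, e.g. "example" in did:example:123
  status: string;        // e.g. "registered"
  specification: string; // URL of the DID method specification
}

interface CheckResult {
  rule: string;
  passed: boolean;
}

// DID Core's method-name ABNF allows only lowercase letters and digits.
const METHOD_NAME_PATTERN = /^[a-z0-9]+$/;

function runChecklist(entry: MethodRegistration): CheckResult[] {
  return [
    {
      rule: "method name matches the method-name ABNF",
      passed: METHOD_NAME_PATTERN.test(entry.name),
    },
    {
      rule: "specification is an HTTPS URL",
      passed: entry.specification.startsWith("https://"),
    },
    {
      rule: "status field is non-empty",
      passed: entry.status.length > 0,
    },
  ];
}

// Example run against a made-up entry:
const results = runChecklist({
  name: "example",
  status: "registered",
  specification: "https://example.org/did-method-example",
});
for (const r of results) {
  console.log(`${r.passed ? "PASS" : "FAIL"}: ${r.rule}`);
}
```

Whatever the real rule set turns out to be, the point is that every check is mechanical, leaving no room for an editor's whim.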
> What happens if the self-assessment says "this registration is fully conformant", but in fact it isn't?
Then the editors deny the registration (with, optionally, comments on the PR about why the self-assessment is incorrect). The registrant can then decide to fix and resubmit, or abandon.
> In any case, whether or not we introduce the self-assessment, I think editors still need to do their own assessment to evaluate basic completeness and conformance of the registration, as they have always done in the past. It is clear that with limited resources this can only be done on a "best effort" basis.
To be clear, the self-assessment is NOT supposed to replace the editors making their judgement call about whether the proposed registration meets the baseline registration requirements. The self-assessment is only there to have the registrant do as much of the work as possible, thus making the editors' job of making that judgement call as easy as possible.
I think this may be better left to a future re-charter. That's not to say it isn't a good topic to discuss, or that the discussion can't be ongoing, but I am reluctant at this point to add any criteria beyond the minimal criteria that exist today for a basic list of methods.
The issue was discussed in a meeting on 2021-12-07
The working group has already decided not to assign value judgments to registrations. Therefore, there should be no additional registration column that could be construed by some as a value judgement. We already intentionally deleted the one such column that used to exist.
The only criteria that should be applied is whether there's an actionable definition provided of the item to be registered that meets the normative requirements of the specification. If so, we should register it. If not, not.
I agree with @mprorock that if we want to start imposing value judgements on registrations, we should do so only after rechartering with an explicit mission and reason to do so.
> The only criteria that should be applied is whether there's an actionable definition provided of the item to be registered that meets the normative requirements of the specification. If so, we should register it. If not, not.
@selfissued What you just defined in that sentence, Mike, is the "baseline registration requirement" being discussed here. Nothing more, nothing less. The issue has been how much of a burden it will/won't be on the editors to decide whether the "item to be registered" (in this case a DID method specification) meets the normative requirements of the DID 1.0 spec (section 8).
Some in the WG consider that decision to be a "value judgement". Others do not. What's your view on that?
I think it's worth pointing out that while the section 8 requirements set a good baseline for the structure of a method specification, they don't delve in any way into the contents necessary to encourage interoperability between the various specs. For example, one method spec theoretically could require the usage of a particular verification method suite such as `JsonWebKey2020`, whereas another method may require the usage of something like `postQuantumCryptoSuite2025`. I think this is acceptable, and I believe most people would agree, even if it means that technically the methods aren't quite interchangeable with each other and therefore fall short in the "interoperable" category.
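To make that divergence concrete, here is a hedged sketch of two DID documents whose verification methods use different suites. `JsonWebKey2020` is a real suite; the identifiers, key material, and `postQuantumCryptoSuite2025` are invented for illustration, and both documents are otherwise simplified (e.g., suite-specific `@context` entries are omitted).

```typescript
// Two simplified, hypothetical DID documents. The did:example identifiers
// and key material are placeholders; postQuantumCryptoSuite2025 is the
// invented suite named above, not a registered one.

const docUsingJsonWebKey2020 = {
  "@context": ["https://www.w3.org/ns/did/v1"],
  id: "did:example:123",
  verificationMethod: [{
    id: "did:example:123#key-1",
    type: "JsonWebKey2020",
    controller: "did:example:123",
    publicKeyJwk: { kty: "OKP", crv: "Ed25519", x: "placeholder" },
  }],
};

const docUsingPostQuantumSuite = {
  "@context": ["https://www.w3.org/ns/did/v1"],
  id: "did:example:456",
  verificationMethod: [{
    id: "did:example:456#key-1",
    type: "postQuantumCryptoSuite2025", // hypothetical suite
    controller: "did:example:456",
    // key material representation would be suite-specific
  }],
};

// A verifier that only implements JsonWebKey2020 can verify proofs from the
// first document but not the second, even though both may be conformant.
```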
There's also the question of whether what's been defined in the method spec is adequate to meet the requirements defined in section 8.2. For example, is this DID Document Registration section adequate to meet the definition of how to implement the DID Create operation? To me it only defines what happens (the key distinction being that how helps you implement the method, while what merely helps you understand it), and it is an example of a method that probably shouldn't have been admitted.
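To illustrate that distinction, here is a purely hypothetical sketch; nothing below comes from any registered method, and the endpoint, request shape, and field names are all invented. The comment shows the kind of "what happens" prose in question, while the code shows the level of "how to implement" detail that section 8.2 arguably calls for.

```typescript
// A "what happens" description gives an implementer little to act on:
//   "Registration: the DID document is anchored to the network."

// A "how to implement" description pins down concrete, actionable steps:
interface CreateRequest {
  operation: "create";
  didDocument: object; // the initial DID document
  signature: string;   // e.g. a detached JWS over the document
}

async function createDid(endpoint: string, req: CreateRequest): Promise<string> {
  // POST the request to the method's (hypothetical) registration endpoint
  // and return the newly created DID from the response.
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`create failed: HTTP ${res.status}`);
  const body = await res.json();
  return body.did;
}
```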
The issue was discussed in a meeting on 2021-12-14
For several weeks now we have danced around the issue of whether the DID Spec Registries editors (hereinafter just "editors") should make any type of judgement about the registration of a new DID method. @brentzundel put an exclamation point on this issue at the end of yesterday's DID WG call when he asked what the downside was to not making any judgement call at all, i.e., why couldn't we just accept any well-formed registration request?
The queue filled immediately with WG members pointing out the downsides of letting the DID Spec Registries fill with "junk" (which I will more charitably refer to as "bad faith registrations").
If indeed we have rough consensus that we want to prevent such degradation in registry quality, it follows that the editors will need to apply some judgement about whether a proposed DID method registration meets the registration criteria or not. Let's call these criteria the baseline registration requirement.
The purpose of this thread is to asynchronously discuss proposals for what the baseline registration requirement should be in the hope that we get far enough to pass a formal proposal on next week's (Dec 6) DID WG call.
I will start discussion off with this proposal:
PROPOSAL: The baseline registration requirements for a DID method registration are:
If the editor's determination is either...
...then the editors MUST deny the registration. The editors MAY suggest resubmission of a revised registration that will meet these requirements provided the editors believe the registrant is acting in good faith.
Note the new requirement that, in addition to the DID method specification itself (and the registration PR), the submission MUST include the detailed written self-assessment. The whole idea here is to raise the bar by making the registrant do the work to gather and explain the evidence of conformance rather than transferring the work to the editors. That kills two birds with one stone: