w3c / did-extensions

Decentralized Identifier Ecosystem Extensions
https://w3c.github.io/did-extensions/

Explanation on the DID Methods in the registries' document #83

Open iherman opened 4 years ago

iherman commented 4 years ago

At present, §6 of the document is clearly different from the others. I presume the process described in §3 is not directly relevant for the methods; the table contains a column ("Status") whose meaning is not clear; and there is no further explanation. It is good to have this registry here, and I know it has a different origin from the other sections, but I believe it needs some urgent editorial care...

OR13 commented 4 years ago

@iherman can you rephrase this as a directive for me or @msporny ?

happy to take a stab at a PR, if I can figure out what to do.

iherman commented 4 years ago

Let me try to be more specific.

I hope this helps.


As an aside, I wonder whether the registration process for DID methods should not be more demanding. I just glanced at some descriptions and, I must admit, I simply do not see what makes them interesting or useful, or why they are there. In some cases the only information I really get is "it is a DID implementation on the XYZ blockchain". This is not very helpful. I believe we should require a 1-2 paragraph description for each method that describes why that DID method is interesting, unique in some way, etc.

OR13 commented 4 years ago

@iherman I have tried to address some of your concerns here. https://github.com/w3c/did-spec-registries/pull/115/files

iherman commented 4 years ago

> @iherman I have tried to address some of your concerns here. https://github.com/w3c/did-spec-registries/pull/115/files

Ack.

brentzundel commented 3 years ago

The PR that addressed this issue was closed, so we still need a PR for this. @peacekeeper will take a look.

msporny commented 3 years ago

This was not resolved; the PR noted above was never merged. Some text might have made it into DID Core to address the issue.

iherman commented 3 years ago

The issue was discussed in a meeting on 2021-08-03

View the transcript

#### 5.1. Explanation on the DID Methods in the registries' document (issue did-spec-registries#83)

_See github issue [did-spec-registries#83](https://github.com/w3c/did-spec-registries/issues/83)._

**Brent Zundel:** explanation on did methods in registries document, raised by ivan

**Ivan Herman:** more than a year ago
… the last comment is from orie saying "i have tried to address it", and I acknowledged it. Seems like this issue should have been closed a long time ago.

**Manu Sporny:** the PR, there was a massive permathread in it and it got closed, never went in

**Ivan Herman:** vaguely remember when I raised it that registration of terms and methods looked different from one another, but don't know what happened since then

**Brent Zundel:** The PR that tried to address the issue was closed rather than merged

**Manu Sporny:** this was orie and markus going back and forth over normative language in did core...
… my expectation is something got into did core and it was potentially resolved

**Markus Sabadello:** I can't check right now but will look later

peacekeeper commented 3 years ago

I think some of this has been resolved (e.g. the CBOR column has been removed, and there is also some language now on how DID methods will get accepted into the table). But some other issues here are probably still open, e.g. about the structure and contents of the table.

A few weeks ago there was an idea that the "Status" field in the table could contain the value "tested" or "implemented", if an implementation of the DID method was submitted to the test suite.

Probably need to discuss this topic again on a WG call with @iherman who raised this issue, to see how much of it still needs to be addressed.

peacekeeper commented 3 years ago

Related issue is https://github.com/w3c/did-spec-registries/issues/174, which also discusses tracking contact information for DID methods and other additions to the registry.

talltree commented 3 years ago

See also the suggestion I just made in #265 .

iherman commented 3 years ago

The issue was discussed in a meeting on 2021-09-14

View the transcript

### 7. Explanation on the DID Methods in the registries' document (issue did-spec-registries#83)

_See github issue [did-spec-registries#83](https://github.com/w3c/did-spec-registries/issues/83)._

**Brent Zundel:** issue has been around a while. DID method section has a status column. 99% says provisional for status.
… what do we want that column to say? Do we want to explain provisional, etc.?

**Drummond Reed:** Added a reference to where I had put another comment. Original suggestion was to create a new table that lists methods where authors have upgraded their methods to match the Recommendation.
… old table stays as is, but provisional changes to "upgraded" for such methods and people should look at new table.
… it will help us call out name squatting
… quality of the did method specs varies. This will leave us with methods that meet our now higher bar in the main table.

**Joe Andrieu:** worried about how you proposed it. Worried about new name squatting.
… we do need consensus on the legitimate values for status and how they're determined.
… when we first added provisional, it was to deal with methods that might not be consistent with current spec draft.
… need to figure out states, what they mean, and how they are assigned.

> *Ted Thibodeau Jr.:* it seems that members of the new list should either be removed from the old (which is thus not static), or included on both with current status shown (and the old is again not static)

**Ivan Herman:** since we want this to become a formal registry, the doc itself has a registration process and that process says nothing about registering a new method. There is just a table, but no registration process. We need a clear policy for how things get into the table.
… we made a decision it would become a registry, but many details remain

> *Drummond Reed:* Agreed, Ted. The proposal I made is that the only change to any methods listed in the Old table is that their status column value is changed if that method becomes listed in the New table.

**Brent Zundel:** we should use provisional (written before there was a spec), v1.0 compliant (submitted after the Recommendation), and deactivated (for no longer in use).

> *Justin Richer:* what if the states are "Pre-1.0", "1.0", and "Deprecated"?
> *Justin Richer:* basically what Brent said
> *Justin Richer:* or burn. Somebody said it.
> *Justin Richer:* but basically calling it "version" instead of "status" might help, too -- but that's a different argument
> *Ted Thibodeau Jr.:* implementations submitted before DID Core 1.0 CR/PR could be listed as such, or de-listed for registry purposes

**Drummond Reed:** if we keep the current table, prefer what justin typed above
… no matter how we do it, need a clear policy on how to get the status changed.

**Brent Zundel:** next step should be a pull request proposing new language

**Joe Andrieu:** whoever the owner is of an entry, they should be able to self-assert which version of spec they claim to be compliant with
… since these will live for years and we need to plan for the future

> *Drummond Reed:* +1 to being able to continuously upgrade the status values for future versions

**Brent Zundel:** any volunteers to write a PR?

> *Ryan Grant:* I'll volunteer

**Brent Zundel:** reminder for Imp. Guide PR review and request for other PRs, we will keep you informed of the progress of the spec
… thanks for remaining professional.

---
talltree commented 3 years ago

@rxgrant and @jricher: at the end of the DID WG call last week, both of you had a specific suggestions for the values of the Status tag in the DID method table and the rules that the Registry editors should follow to assign those values. Could one of you submit a PR?

rxgrant commented 3 years ago

> @rxgrant and @jricher: at the end of the DID WG call last week, both of you had a specific suggestions for the values of the Status tag in the DID method table and the rules that the Registry editors should follow to assign those values.

My read of the end of the conversation was that there was general approval to add a (blank) column to the table of DID Methods linking to each method's updated-for-1.0 spec; that the (generally) "Status: PROVISIONAL" column should be removed; that old links should be labeled as pre-1.0 versions; and that, since DID Method authors should self-certify, the registry should not attempt to declare their status. I will submit a pull request with these changes and briefly describe the change above the table.

rxgrant commented 2 years ago

As part of this work, I've reviewed all the existing DID Method specifications and noticed that several do not resolve to existing web pages. I believe that #83, as currently scoped, does not cover editorial judgement on changing the status of these DID Methods, but I'll point out that we need a process with certain minimum standards.

rxgrant commented 2 years ago

See pull request #341

talltree commented 2 years ago

Per a request from @OR13 in PR #341, and in light of the feedback received in the formal objections to the DID 1.0 spec, for the third time I will put forth the proposal that we split the DID method registry table into two tables:

  1. A new table for all v1.0-compliant methods (listed FIRST).
  2. The existing table for all current provisional registrations (listed SECOND).

Proposed rules for these two tables

  1. All new registrations MUST be v1.0-compliant and MUST go into the new table—the old table is locked.
  2. All existing registrants who submit a new v1.0-compliant version MUST be added to the new table and MUST be removed from the old table.
  3. Both tables SHOULD have the same set of columns:
    1. Method Name
    2. Status
    3. Spec Link
    4. Author Link(s)
    5. Verifiable Data Registry
  4. Status values for the old table:
    1. Provisional
    2. Deprecated
  5. Status values for the new table:
    1. v1.0-compliant
    2. In production
    3. Test suite available
    4. Approved standard
    5. Deprecated

Rationale

Besides giving greater visibility to v1.0-compliant DID method specifications, the two-table approach would enable us to put an explanatory paragraph before each table that should reduce confusion, not increase it.

The para before the first table can explain that these are DID method specifications submitted AFTER the DID 1.0 spec reached PR and that meet all the requirements of a compliant DID method.

The para before the second table can explain that these were all DID method specifications submitted prior to completion of the DID 1.0 spec, and thus are all provisional until they submit a v1.0-compliant DID method specification.

This way it becomes much easier for implementers to "separate the wheat from the chaff".

rxgrant commented 2 years ago

> proposal that we split the DID method registry table into two tables

I'd be happy to implement this in the existing pull request. Any objections?

talltree commented 2 years ago

> I'd be happy to implement this in the existing pull request. Any objections?

@rxgrant Not from me! I suggest we see if there are any objections or modifications on tomorrow's DID WG call. Then let's go for it.

iherman commented 2 years ago

The issue was discussed in a meeting on 2021-10-19

View the transcript

#### 4.1. change registry columns per issue #83 (pr did-spec-registries#341)

> *Orie Steele:* PR reviews: [https://github.com/w3c/did-spec-registries/pull/341](https://github.com/w3c/did-spec-registries/pull/341)

_See github pull request [did-spec-registries#341](https://github.com/w3c/did-spec-registries/pull/341)._

_See github issue [did-spec-registries#83](https://github.com/w3c/did-spec-registries/issues/83)._

**Daniel Burnett:** framing questions: what's necessary to continue the work? can everything else work on github as issues?

**Orie Steele:** i want to thank ryan for an issue-first, PR-second approach that resolves many registry problems
… we haven't always been timely about the registry, so plz plz review those PRs, it helps us with many of our core issues as a WG
… i won't summarize ryan's very broad PR because it covers a lot of ground but review it soon, particularly if you have a did method that might get booted by its being merged!

**Drummond Reed:** i think this PR is urgent vis-a-vis the formal objections!
… I shared a link to an alternative solution opened in another issue

**Manu Sporny:** since we're on that issue (pr 341), my only suggestion is to replace "non-compliant" with "provisional"
… or rather, NOT to replace it -- we will look bad if we overnight switch most of our registry to "non-compliant"

> *Drummond Reed:* +1 to not using "non-compliant". But [https://github.com/w3c/did-spec-registries/issues/83](https://github.com/w3c/did-spec-registries/issues/83) proposes a more comprehensive solution.
> *Ted Thibodeau Jr.:* "experimental"?
> *Ted Thibodeau Jr.:* "beta-compliant"?

**Manu Sporny:** replace "non-compliant" with "provisional" in the PR, i mean

> *Drummond Reed:* +1 to "trolling the DID method spec authors"
> *Drummond Reed:* Comment being discussed: [https://github.com/w3c/did-spec-registries/issues/83#issuecomment-946075510](https://github.com/w3c/did-spec-registries/issues/83#issuecomment-946075510)

**Ryan Grant:** I was trolling, it's true, or put a little fire under them. I would support drummond's solution and I think it addresses manu's objection

> *Orie Steele:* I will happily review a PR drummond, you are welcome to open one.

**Manu Sporny:** maybe we are not thinking enough about ungenerous readings -- we don't want people marked as "noncompliant" for having been compliant and having passed a test suite before breaking changes

> *Michael Prorock:* +1 manu - wording and appearances are very important right now

**Manu Sporny:** and we also don't want to hand a "gotcha" opportunity to those who will comb through our github looking for evidence that we aren't running a proper WG here
… or that we've wasted effort

> *Daniel Burnett:* +1 manu

**Drummond Reed:** I put a link to a sidestepping solution -- a 1.0 compliant table distinct from the existing table that includes all the provisionals as-is
… as long as there is some contextualizing explanation above both

> *Orie Steele:* basically, we need PRs... there are already enough issues....

**Drummond Reed:** I will work with Ryan on doing this in PRs
… if the group supports it

**Ryan Grant:** First of all, Manu thanks for correcting the record on the amount of interop that these specs have already achieved
… I wasn't trolling to be annoying, I was hoping to avoid value judgments or partisanship in the editing of this registry
… just to explain the choice of words, even if i support solutions using a diff word

talltree commented 2 years ago

@rxgrant We didn't get any objections in the DID WG meeting today, but we didn't get any strong reactions in general either. So here's my proposal: if you're willing to update your PR, let me know if you want me to draft text for the intro paragraphs for each of the two tables. Or alternatively, just go ahead and update your PR and I can comment on it. Whichever you prefer.

kdenhartog commented 2 years ago

I agree in principle with Drummond's proposal and I think it gets us most of the way there. Some further refinements I'd suggest:

  1. Change the SHOULD to a MUST. I don't see any reason that we shouldn't include those details.
  2. The new statuses need clear definitions of what they mean:
    2a. What constitutes "production" readiness?
    2b. What constitutes an acceptable test suite?
    2c. How do we define an "approved" standard for status 4? E.g., a spec that contains a single sentence for a security/privacy considerations section isn't worth "approving". A spec that doesn't have normative statements isn't worth "approving". A spec that has a strong dependency on a particular implementation isn't worth "approving". (My opinion here - curious if WG consensus agrees.)

So at a high level I'm a major +1 to this proposal (and have been for a while now - thanks for re-proposing it for the third time @talltree), and I think that with a bit more specificity about the details of points 2a through 2c in a follow-up PR we can make this work. Would others here prefer I open a separate issue to discuss the requirements, or do we want to consider that here, if people agree this is necessary?

rxgrant commented 2 years ago

Here are the methods that IMHO don't have a reasonable spec at a reasonable URL that, at minimum, addresses how to read a DID Document from the VDR:

- some variant of a 404
- didn't bother posting a DID Method spec that describes how to read a DID Document from the VDR
- posted a DID Method specification that takes a form too confusing for the author of this comment to figure out how to retrieve the DID Document

rxgrant commented 2 years ago

Earlier I made a comment about a DID Method with a very short name. But they're building stuff, so the comment wasn't appropriate.

talltree commented 2 years ago

> with a bit of more specifics about the details in a follow up PR to flesh out the details of point 2a through 2c of this registry we can make this work. Would others here prefer I open a separate issue to discuss the requirements or do we want to consider that here if people agree this is necessary?

@kdenhartog I see no reason not to just continue refining this proposal here in this issue and then, if @rxgrant is up for it, he can revise his PR #341 to reflect the outcome.

I also agree that we need to define reasonably complete compliance requirements for each of the status tags in the new table. Following are strawman proposals to flesh out your suggestions.

Note that for all of these, we should specify that "Determination of compliance shall be made by the then-current DID Spec Registries editors."

Requirements for the "v1.0 compliant" status tag

  1. Registrant MUST submit a URL for a publicly available DID method specification hosted in a reasonably stable repository (e.g., standards body, GitHub, dedicated microsite).
  2. The DID method specification MUST comply with the requirements specified in section 8 of the DID 1.0 specification.
  3. The proposed DID method name MUST NOT conflict with the name of any previously registered method.

Requirements for the "In production" status tag

  1. Registrant MUST submit a URL for publicly available documentation, hosted in a reasonably stable repository, of production usage of the DID method.
  2. Such documentation MUST include instructions for how an implementer may engage to use the DID method in production.

Requirements for the "Test suite available" status tag

  1. Registrant MUST submit a URL for publicly available documentation, hosted in a reasonably stable repository, of a test suite for the DID method that is publicly available to any implementer.
  2. Registrant MUST also submit reports signed by two independent implementers documenting the results of their implementations having successfully passed the test suite.

Requirements for the "Approved open standard" status tag

  1. Registrant MUST submit a URL for the publicly available DID method specification that has been formally approved by a non-profit standards development organization.

I'm sure I missed some but at least it's a start.

kdenhartog commented 2 years ago

I broke the conversation down into individual status tags. I've added a few additional requirements to link the statuses together and then added a few specific questions, but it's probably better to break these down into separate PRs, since some are going to be more controversial than others. I've raised some of the edge cases I think may arise in my responses below.

**V1.0 Compliant response**

> 1. Registrant MUST submit a URL for a publicly available DID method specification hosted in a reasonably stable repository (e.g., standards body, GitHub, dedicated microsite).
> 2. The DID method specification MUST comply with the requirements specified in [section 8 of the DID 1.0 specification](https://www.w3.org/TR/did-core/#methods).
> 3. The proposed DID method name MUST NOT conflict with the name of any previously registered method.

Agree with this one as it stands ^.

**In Production response**

> ### Requirements for the "In production" status tag
>
> 1. Registrant MUST submit a URL for publicly available documentation, hosted in a reasonably stable repository, of production usage of the DID method.
> 2. Such documentation MUST include instructions for how an implementer may engage to use the DID method in production.

Agree with these, and have added one improvement that I'm sure you implied as well:

```
### Requirements for the "In production" status tag

1. All requirements from the "V1.0 Compliant" status MUST be met.
2. Registrant MUST submit a URL for publicly available documentation or code, hosted in a reasonably stable repository, of production usage of the DID method.
3. Such documentation MUST include instructions for how an implementer may engage to use the DID method in production.
```

**Test Suite Available response**

> ### Requirements for the "Test suite available" status tag
>
> 1. Registrant MUST submit a URL for publicly available documentation, hosted in a reasonably stable repository, of a test suite for the DID method that is publicly available to any implementer.
> 2. Registrant MUST also submit reports signed by two independent implementers documenting the results of their implementations having successfully passed the test suite.

Agree with this one in principle, and have added a few improvements:

```
### Requirements for the "Test suite available" status tag

1. All requirements from the "In Production" status MUST be met.
2. Registrant MUST submit a URL for publicly available documentation, hosted in a reasonably stable repository, of a test suite for the DID method that is publicly available to any implementer.
3. Registrant MUST also submit reports signed by two independent implementers documenting the results of their implementations having successfully passed the test suite.
4. The test suite is expected to test all of the normative statements made within the specification document.
```

This one definitely needs a bit more fleshing out still. Here are some follow-up questions:

1. Is a DID method that's built as a patented method acceptable?
2. Is it acceptable that both implementations depend on the same library?
3. What qualifies as a good test suite here? I'm hoping most of these can be cleared up by just relying on W3C requirements for this.

**Approved open standard response**

> ### Requirements for the "Approved open standard" status tag
>
> 1. Registrant MUST submit a URL for the publicly available DID method specification that has been formally approved by a non-profit standards development organization.

Slight modification, but looks good:

```
### Requirements for the "Approved open standard" status tag

1. All requirements for the "Test suite available" status MUST be met.
2. Registrant MUST submit a URL for the publicly available DID method specification that has been formally approved by a non-profit standards development organization.
```

Some edge cases worth considering here:

1. Does a standard that is paid for count? I want to say no, but don't really want to stir a political pot between ISO and W3C.
2. Does a standard that doesn't align with W3C's patent policy count? E.g., if the standard was built in a way that requires licensing to use commercially?

talltree commented 2 years ago

@kdenhartog Great stuff. I had no idea you could do accordion-style subsections in a comment. (I can't see the source — offline let me know how you did that.)

In any case, I agree that it is looking best to break into multiple PRs. I would suggest this breakdown:

  1. One PR (maybe the one @rxgrant started #341) to change the layout to three subsections:
    1. A subsection for the new table, with its own intro paragraph.
    2. A subsection for the old table, with its own intro paragraph.
    3. A subsection following the other two for "DID Method Registration Policies"
  2. Under "DID Method Registration Policies", one PR for each set of requirements for a particular status tag:
    1. v1.0 Compliant
    2. In Production
    3. Test suite available
    4. Approved open standard
    5. Deprecated

So that's 6 PRs in total. Whatchathink?

kdenhartog commented 2 years ago

> I had no idea you could do accordion-style subsections in a comment. (I can't see the source — offline let me know how you did that.)

It's a feature that gets added by an extension I use in my browser called refined-github. You can manually achieve it with the following additional text as well (I've escaped it here with the three-tick code blocks, but it works as expected when not inside that).

```html
<details>
<summary>My title of accordion subsection</summary>

My contents of the accordion subsection

</details>
```

msporny commented 2 years ago

I have deep concerns about "in production" and "approved open standard" because I've seen multiple DID Methods use both monikers for alpha software. I won't point fingers, but some of them have participated in our groups over the years.

As someone that has to review PRs for new methods, I don't want to fight people that are claiming that they have a "global standard" or are "in production".

Each of these statements should be backed up by some sort of link to proof.

If you're claiming "registered" you need to point to a spec that meets the minimum requirements for registration.

If you're claiming "fully specified" you need to point to a spec that meets all the DID Method spec requirements in DID Core (really don't like this one, it's a LOT of work for reviewers).

If you're claiming "implemented" you need to point to a source code repository where the implementation exists.

If you're claiming "v1.0 compliant" you need to point to a source code repository where the latest tests exist AND demonstrate no errors from the test suite.

... and so on.

All of this led me to wake up this morning with an idea:

What if we just shifted the DID Method registration process to be more data-driven. That is, we do some extensions to ReSpec that change entries to this format:

```html
<div data-did-method-name="example"
     data-did-method-spec="https://did.example/spec.html"
     data-did-method-implementation="https://did.example/repo.html"
     data-did-method-test-suite="https://did-test-suite.example/reports#example"
>... whatever extra info the DID Method registrant wants to add here</div>
```

... and so on. That way, we can auto-generate the labels (and tables) from data they provide to us.... sort, split, one table, two tables -- the algorithm we use to render can change. We could also externalize this to JSON files, which just creates a level of indirection that may or may not be useful (I'm leaning towards NOT doing that and just having everything in the HTML ReSpec source).

Thoughts?
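For concreteness, here is a rough sketch of what the rendering half of that idea could look like: a function that turns one registry entry into an HTML table row, deriving the status label from which links were actually supplied rather than trusting a self-asserted string. This is not the actual ReSpec extension; the function name, entry fields, and status labels are all hypothetical.

```javascript
// Hypothetical sketch: render one DID method registry entry as a table row.
// In a real ReSpec plugin this would run at page load over elements matched by
// document.querySelectorAll("[data-did-method-name]").
function renderMethodRow(entry) {
  // Derive the status from the evidence provided, not from a claimed label.
  let status = "registered";
  if (entry.implementation) status = "implemented";
  if (entry.testSuite) status = "tested";

  const link = (url, text) => (url ? `<a href="${url}">${text}</a>` : "");
  return (
    `<tr><td>did:${entry.name}</td>` +
    `<td>${status}</td>` +
    `<td>${link(entry.spec, "spec")}</td>` +
    `<td>${link(entry.implementation, "code")}</td>` +
    `<td>${link(entry.testSuite, "report")}</td></tr>`
  );
}

const row = renderMethodRow({
  name: "example",
  spec: "https://did.example/spec.html",
  implementation: "https://did.example/repo.html",
  testSuite: "https://did-test-suite.example/reports#example",
});
console.log(row);
```

The point of the sketch is that sort order, one-table-vs-two-tables, and label vocabulary all become rendering decisions that can change without touching the registered data.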

mprorock commented 2 years ago

What if we just shifted the DID Method registration process to be more data-driven. That is, we do some extensions to ReSpec that change entries to this format:

```html
<div data-did-method-name="example"
     data-did-method-spec="https://did.example/spec.html"
     data-did-method-implementation="https://did.example/repo.html"
     data-did-method-test-suite="https://did-test-suite.example/reports#example"
>... whatever extra info the DID Method registrant wants to add here</div>
```

This could be great. I think an externalized JSON file might actually be best for this where we can define effectively a database with additional metadata for each DID method that could help feed resolvers etc. We could then have a build process on the respec that takes that JSON, validates it, and generates the respec section from that JSON. This could be similar to what we have done with the build process on the Traceability Vocab. That way we accomplish a few things:

  1. Set up the registry more as a true digital registry that can be leveraged programmatically by resolvers, test suites, and other tech
  2. Add a test and build step, so that new PRs with a new or modified DID method entry can be validated for completeness and well-formed data
  3. Retain a single ReSpec doc, from a presentation-to-users standpoint, without having to redirect to an external file

JSON format could look something like:

"methods": [
    {
        "name": "example_1",
        "spec": "https://did.example/spec.html",
        "authors": [
          { "name": "John Doe", "email": "johndoe@example.org" }
        ],
        "network": [ { "name": "Some DLT", "uri": "https://somedlt.example.org" } ],
        "status": "provisional",
        "implementation": "https://did.example/repo.html",
        "test": "https://did-test-suite.example/reports#example",
    },
    ...
]

This obviously could be extended as required to cover additional details and facilitate more capability discovery, etc., off the registry.
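The "test and build step" in point 2 could be sketched roughly as follows. The field names follow the example JSON above; the function name and the specific rules are illustrative, not a worked-out policy.

```javascript
// Hypothetical validation step for one registry entry, to run in CI on each PR.
function validateMethodEntry(entry) {
  const errors = [];
  const required = ["name", "spec", "authors", "status"];
  for (const field of required) {
    if (!(field in entry)) errors.push(`missing required field: ${field}`);
  }
  // DID Core restricts method names to lowercase letters and digits.
  if (entry.name && !/^[a-z0-9]+$/.test(entry.name)) {
    errors.push(`invalid method name: ${entry.name}`);
  }
  // Spec links must at least parse as URLs.
  if (entry.spec) {
    try {
      new URL(entry.spec);
    } catch {
      errors.push("spec is not a valid URL");
    }
  }
  return errors;
}

const errors = validateMethodEntry({
  name: "example",
  spec: "https://did.example/spec.html",
  authors: [{ name: "John Doe", email: "johndoe@example.org" }],
  status: "provisional",
});
console.log(errors); // a well-formed entry yields no errors
```

A check like this only enforces well-formedness; the quality bar discussed elsewhere in this thread still needs human review.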

talltree commented 2 years ago

I am totally in favor of a data-driven approach to DID method registrations.

So how do we proceed with this as one or more PRs?

mprorock commented 2 years ago

> So how do we proceed with this as one or more PRs?

Open on this - possibly one PR to get the JSON format established, and then a second that updates the ReSpec from it as part of the build process on commit. @OR13 any thoughts?

msporny commented 2 years ago

If we're going the JSON file route, please don't dump everything into a single JSON file (we repeatedly have merge conflicts or have to teach people how to rebase when we do that). Rather, each DID Method gets its own JSON file, put 'em all in a subdirectory, please.

OR13 commented 2 years ago

@mprorock yep, I would build a directory of json files, and a dynamic index built from parsing it.

OR13 commented 2 years ago

Like we did here https://github.com/decentralized-identity/JWS-Test-Suite/blob/main/evaluate.js#L78

rxgrant commented 2 years ago

I can implement either the div elements or a list of (over one hundred) subdirectories. However, I am worried about obfuscating the build process and thus requiring that people learn ReSpec build intricacies in order to keep this running.

I think we'd need excellent documentation on either process. Who's willing to write that up? Which one is simpler, yet will still result in non-conflicting merges?

rxgrant commented 2 years ago

> Approved open standard

I don't know what this means or how to write my code in order to pass this test.

msporny commented 2 years ago

I can implement either the div elements or a list of (over one hundred) subdirectories.

I meant 112 JSON documents in a single subdirectory labeled "didMethods" or something like that. :)

There is no build process w/ ReSpec, but someone will have to extend ReSpec to pull all 112 files in at page load time and translate them to HTML (which is what ReSpec does in real time). Exceedingly bad examples of how to do that here:

https://github.com/w3c-ccg/vc-api/blob/main/common.js#L404-L422

and invoked here:

https://github.com/w3c-ccg/vc-api/blob/main/index.html#L70

with target markup here:

https://github.com/w3c-ccg/vc-api/blob/main/index.html#L343-L344

That is almost certainly a hacky way to do it, but ya gotta start somewhere, right?! :)

I agree that we shouldn't need an external build process to do this (or we've failed).

talltree commented 2 years ago

I can't help with the coding, but I'm assuming that if we go this way (which, again, I favor), we will still need to publish, in the DID Methods section of the document, a description of the registration process and the requirements that have to be met, yes?

If so, I'm willing to help work on that. But it sounds like we need a reset on what the registration properties are and what is required for each property.

kdenhartog commented 2 years ago

I hate to be the voice of dissent on details that affect the job of did method reviewers, especially when there's shared enthusiasm for a data-driven approach. Bear with me, because I'm airing some controversial opinions here, but I think they need to be said.

Right now we've got a lot of dog**** methods that are accepted here because there's little measure of quality that's being set. My hope in setting some ground rules is to thread a fine line between IANA processes I've encountered which feel like a wizard's ritual that only the blessed can perform and the open floodgates approach that we have today.

The fact of the matter is that expert review takes time and includes implicit bias, but neither what we have today nor what's being proposed with an automated approach is working either, because we're left with a lot of low-quality, half-baked stuff that assumes tons of tribal knowledge about the inner workings of each method in order to implement it.

So, while I'm absolutely empathetic to the reality that any form of expert review flies against the ethos of decentralization, and that much of this work tends to require a lot of human effort, I view this as a necessary tradeoff to create a valuable ecosystem built on DIDs. In fact, I see it as an opportunity for us to raise the bar on what quality means for people authoring DID Methods.

Can we please consider the long-term viability here by being transparent about what we think good did methods look like and placing at least some bar of quality on what's necessary to register a did method? After all, a did doesn't suddenly become non-compliant just because it's not blessed by the registry. It's just a did that no one knows how to interact with, which is effectively the same as a did method that's published but that nobody understands how to implement interoperably.

rxgrant commented 2 years ago

My hope in setting some ground rules is to thread a fine line between IANA processes I've encountered which feel like a wizard's ritual that only the blessed can perform and the open floodgates approach that we have today.

Continuous integration and test suites could prevent the politics while retaining the quality. I know how to do that for implementation libraries, but not for the specifications themselves.

msporny commented 2 years ago

@kdenhartog wrote:

Right now we've got a lot of dog**** methods that are accepted here because there's little measure of quality that's being set.

:laughing: ... :thinking:.oO(Rename the registry to "Dog**** DID Method Registry"?)

I sympathize with your viewpoint @kdenhartog, and I think much of what you wrote is valid.

I also agree with @rxgrant -- the more we can automate, the better off we'll be. I have ideas on how we could do that, but it's all work that people have to do (write DID Method spec parsers that check for DID Core Method requirements -- that's a 2-4 month project in and of itself).

All that said, the issues remain:

  1. We don't want to put a time burden on the people that are volunteering their time to manage this registry.
  2. We don't want to put a policing burden on the people that are volunteering to manage this registry. They will become the target of attacks and process escalations when people disagree that their DID Method doesn't fit the criteria.
  3. We don't want to discourage people from using the registry.

There is an analogy here that I think might help us, and that is the "5-star Linked Data" approach. In essence, it suggested a 5-star deployment scheme for Linked Data. The scheme is cumulative: each additional star presumes the data meets the criteria of the previous step(s). Before it, people had heated debates about what is and isn't Linked Data, and those debates often excluded new communities. So, instead of drawing a line in the sand, what was proposed was a gradual entry into the ecosystem. I think we have the same sort of thing here. For example:

  1. You publish a provisional spec.
  2. You implement it.
  3. You demonstrate that your implementations' output is conformant to DID Core v1.0.
  4. You stand up a test net.
  5. You provide a resolver for others to use.
  6. You go into "production".
  7. You provide multiple implementations and perhaps fully define your specification.
  8. You have a test suite demonstrating multiple implementations interop'ing.
  9. You take it through a global standardization process with consensus and expert review.

We want people registering at the provisional spec phase... and then what comes next might not happen in the order I mentioned above... but, IMHO, we do want to expose that in DID Spec Registries and perhaps use it as sorting/bucketing criteria.

When you're trying to build an open and inclusive community, it helps to have a gradual onboarding process that's inclusive instead of setting up fences to keep people out.

Food for thought...

talltree commented 2 years ago

@msporny I find your "5-star Linked Data" approach to be very compelling for all the reasons you mentioned. I do believe it can address @kdenhartog's concerns about the quality of the entries by making it relatively objective how each additional star is achieved. (If someone is truly trying to game the system, that should be pretty easy for the editors to detect.)

Can you say a little more about how you'd recommend structuring the five stars? And what specifically we'd need to do to put that approach into place for the registry?

msporny commented 2 years ago

Can you say a little more about how you'd recommend structuring the five stars?

I have no firm ideas there other than "people seem to go through a basic progression to get to 'five stars'"? Maybe... I don't know if they do... the list I provided above kinda falls apart toward the end wrt. linear progression. So we might skip the stars thing? Don't know, haven't thought about it enough yet.

And what specifically we'd need to do to put that approach into place for the registry?

I think the JSON files per DID method with some variation of the contents mprorock and I suggested above gives us that general structure.

kdenhartog commented 2 years ago

I think in general where you're coming from is a safe bet for the maintainers of this registry over time, and I get not wanting to turn this into an overtly political process that raises more headaches than it's worth. Additionally, I'm fully supportive of the idea of making this as automated as possible, with very strict and transparent rules. There's a balance here that needs to be considered, and at the very least getting the automated infrastructure in place is a good first step.

I'm hesitant to say that a big tent approach like what's done for the MIME types registry is going to end up being what we need here when the bare minimum for interoperability of DIDs and DID Documents is far more involved. I think this is where the idea of having the standard developed through a standards org is going to be an important factor here because that's the step where rigor can be applied without placing the burden on the editors here.

So what if we stick with a machine-readable approach for the initial phases, which allows for early registration and a good open-tent approach, but also allow ourselves to lean on standards bodies with good processes in place to define what an "approved open standard" means? For example, we can say that in order for a standard to be considered approved, it needs to be approved by a predetermined list of SDOs that we believe have the practices in place to evaluate the method, so as to elevate those methods that do achieve that higher bar to "approved open standard" status.

rxgrant commented 2 years ago

@kdenhartog

I'm hesitant to say that a big tent approach like what's done for the MIME types registry is going to end up being what we need here when the bare minimum for interoperability of DIDs and DID Documents is far more involved. I think this is where the idea of having the standard developed through a standards org is going to be an important factor here because that's the step where rigor can be applied without placing the burden on the editors here.

Based on the uncertainty regarding which conflicting TAG/EWP items excuse formal objections in this standards org, I am certain that no standards org requirement for any star/badge/level is appropriate when dealing with decentralized protocols that disrupt traditional institutions. (Proof-of-work has become a powerful shibboleth.) The value-stack merge-conflict implications of DID Methods are too great for Internet engineers to wield their votes objectively.

also allow ourselves to lean on standards bodies with good processes in place to define what an "approved open standard" means.

No. For the reasons given above.

I further believe that if you did force this requirement, it would move the fight to creating standards organizations that do whatever it takes to get approved by any criteria listed here, but either disallow any criteria in their voting that could be a shibboleth, or carefully prevent infiltration by individuals who respond in an oppositional way to the shibboleth. All you would cause is delay and cost as people hack the process to obsolete the political aspects. It would be better to let marketplace fit sort the technologies.

kdenhartog commented 2 years ago

Edit: the spam message has been removed now - I didn't intend for this to be a removal of @rxgrant message which is informative and on topic.

This comment above mine reads like spam that seems unrelated to the discussion. @iherman am I allowed to just delete it (I have the permissions to do this)?

kdenhartog commented 2 years ago

@kdenhartog

I'm hesitant to say that a big tent approach like what's done for the MIME types registry is going to end up being what we need here when the bare minimum for interoperability of DIDs and DID Documents is far more involved. I think this is where the idea of having the standard developed through a standards org is going to be an important factor here because that's the step where rigor can be applied without placing the burden on the editors here.

Based on the uncertainty regarding which conflicting TAG/EWP items excuse formal objections in this standards org, I am certain that no standards org requirement for any star/badge/level is appropriate when dealing with decentralized protocols that disrupt traditional institutions. (Proof-of-work has become a powerful shibboleth.) The value-stack merge-conflict implications of DID Methods are too great for Internet engineers to wield their votes objectively.

This seems a bit of a strong allergic reaction to the current problems we're facing. While this may be true in an SDO like W3C, I can't say that we'd encounter the same issue in IETF, or in DIF (which is far more friendly to the work being done by us in this space) if we wanted to consider it an SDO (I don't believe that opinion is shared by all within the community). My point is that as long as we're transparent about the SDOs we believe are acceptable, to prevent rug-pulling on a controversial did method, I think we can circumvent the concerns you raise while still maintaining the high level of rigor that's expected from a well-baked standard.

also allow ourselves to lean on standards bodies with good processes in place to define what an "approved open standard" means.

No. For the reasons given above.

I further believe that if you did force this requirement, it would move the fight to creating standards organizations that do whatever it takes to get approved by any criteria listed here, but either disallow any criteria in their voting that could be a shibboleth, or carefully prevent infiltration by individuals who respond in an oppositional way to the shibboleth. All you would cause is delay and cost as people hack the process to obsolete the political aspects. It would be better to let marketplace fit sort the technologies.

I'm a bit less concerned about this. While I expect there to be some political maneuvering to occur I don't think it will be long standing and I generally believe that the issues that get raised during these conversations should be considered legitimate and useful to the development of the technology. If this did become a legitimate concern that hurts the legitimacy of any particular did method I think it would then be worth evaluating the effects that our process has set and considering modifying them to mitigate these concerns.

The issue I take with the "let the marketplace decide" philosophy is that, for the most part, it hasn't been effective over the years that the marketplace has been working with DIDs. Instead, what I've more commonly seen is that the did methods that get chosen are not chosen based on their technical merits, but rather on their marketing, and the gaps get filled via tribal knowledge. Take for example did:sov, a method that has been around for a long time. It's been very successful in garnering adoption by way of promoting a particular implementation (indy-sdk), which gets reused by the majority of implementations that are producing, consuming, or resolving DID Documents from an Indy ledger. There's been legitimate and useful effort to build libraries which help to circumvent this as well as other concerns, but for the most part, if you want to use did:sov you're left with a few libraries to achieve this, since a fair amount of tribal knowledge is necessary in order to implement this method.

That community has made great strides to place a greater emphasis on a standard rather than a particular implementation by starting work on did:indy which goes leaps and bounds beyond the current state of where things were a few years ago. That's useful in the legitimacy of the method and it shouldn't be understated that it's been useful, but I don't believe it was necessary for the marketplace to select that method since there was a good enough implementation available to make it work.

So why is this a concern? I'm raising it because building on one or a few existing implementations will get methods over the adoption barrier, but I don't believe the end state of what makes a good method should be just adoption. I believe that in order to build a robust method, a well-documented specification is necessary, so that new implementers can also work with the method.

In a bit more dystopian what-if scenario, I could see the day when a wildly successful method, deployed overnight by a large corporation in order to achieve that success, could be abused to lock in licensing fees for did resolvers, for example. To play this scenario out a bit: did:example is deployed to a billion users overnight, the users aren't even aware they're using DIDs, and this method now becomes the most used method. Then, since this method is built on a single implementation and deployed by a single corporation, every implementer in the ecosystem realizes that in order to resolve the did document they are expected to use a library authored by the corporation, which has patented the method and expects any developer who wishes to use the library to agree to its license and pay royalty fees for doing so.

Now I'd hope that there would be pushback from many people choosing not to support that method, but inevitably some will, and this whole concern could have been avoided by us choosing to say that good methods require a standard, not just adoption. Scenarios like that are the reason why I'm advocating for a standards-based approach to this problem rather than a market-based approach. With a market-based approach, I think we're likely to end up making much of the work here irrelevant, even though it's well-designed, robust technology, because the market sided with the method that was well marketed, not the one that solved the legitimate concerns of users.

talltree commented 2 years ago

@kdenhartog Isn't a "standards-based approach" a subset of a "market-based approach"? In other words, nowadays most standards only happen if there's enough market demand to see them all the way through the process.

From a practical standpoint, don't we have to treat a market-based approach as the baseline—because with DID 1.0 as an open standard, there's nothing we can do to prevent it.

So IMHO the only goal of the DID method registry is to surface as much helpful information as we can about DID methods that choose to be registered and which meet our baseline registration criteria.

iherman commented 2 years ago

This comment above mine reads like spam that seems unrelated to the discussion. @iherman am I allowed to just delete it (I have the permissions to do this)?

I believe we should treat comment threads the same way we handle email threads at W3C in this respect. The overall policy for those is to be extremely reluctant to remove anything from the archives (barring very exceptional cases); the same should be true here imho.

iherman commented 2 years ago

I meant 112 JSON documents in a single subdirectory labeled "didMethods" or something like that. :)

There is no build process w/ ReSpec, but someone will have to extend ReSpec to pull all 112 files in at page load time and translate that to HTML (which is what ReSpec does in realtime). Exceedingly bad examples on how to do that here:

I have done something similar in the EPUB testing repository: https://github.com/w3c/epub-tests/. The EPUB tests, as well as the implementation reports, are submitted in JSON. I have created a TypeScript process that gathers all the information and generates a bunch of HTML tables which are then imported by a respec skeleton. I then defined a github action to run that script whenever there is a change. It is doable.

(B.t.w., I actually run respec from the action script too, because respec processing, which involves lots of large tables, may be a bit slow when done at run time. But that is a detail.)
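The pipeline iherman describes could be wired up with a workflow along these lines. This is a hypothetical sketch, not the actual epub-tests workflow; the script name, paths, trigger, and commit step are all made up for illustration.

```yaml
# Hypothetical GitHub Action: regenerate the registry HTML whenever a
# per-method JSON file changes. Names and paths are illustrative only.
name: Rebuild registry
on:
  push:
    branches: [main]
    paths: ["didMethods/**.json"]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      # Gather the JSON files and emit the HTML fragments the respec
      # skeleton imports (and optionally run respec itself here too).
      - run: node generate-tables.js
      # Commit the regenerated output back to the repo (auth details omitted).
      - run: |
          git config user.name "registry-bot"
          git add -A
          git commit -m "Regenerate registry tables" || echo "nothing to commit"
          git push
```

Running respec in the action, as iherman notes, also avoids the run-time cost of processing large tables in the reader's browser.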