Closed: cstoeckert closed this issue 2 years ago
Can we rephrase this to be more concise and actionable?
I suggest leaving out phrases like "Progress has been made towards standardization of requirements to become part of the OBOF" that are implicit.
It is still possible for a submitter to agree to everything and for the ontology to pass the Dashboard without the ontology actually being logically consistent, scientifically accurate, or following the principles as intended
Surely a logically inconsistent ontology will always be detected by the dashboard?
"We don’t want to admit ontologies" -- what does this mean? Consider ISO language SHOULD/MUST/etc
use of imported terms or creation of new ones is problematic (e.g., inappropriately placed imported classes, created/applied object properties that don’t make sense).
"problematic" is very subjective. We have specific tickets for the examples given (inappropriately place imported classes / axiom injection), we should fast track these into concrete guidance
What does it mean for an object property to not make sense?
Such logical inconsistencies will prevent interoperability with other OBOF ontologies and cause confusion if others try to use them.
The examples given are not logical inconsistencies - this term has a very precise meaning for OWL and ontologies and we shouldn't confuse our users by mixing these.
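For reference, the standard OWL usage (generic definitions, not about any particular submission): an ontology $O$ is *inconsistent* iff it has no model, i.e. $O \models \top \sqsubseteq \bot$, while a class $C$ is *unsatisfiable* in $O$ (and $O$ is then *incoherent*) iff $O \models C \sqsubseteq \bot$. Misplaced imports or oddly applied object properties are modelling errors and need not produce either.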
For this and the other criteria there should be clear examples and counter-examples
I'm also skeptical of any criteria that can't be evaluated programmatically. There's only so much effort that people are willing to put into doing and then writing up a subjective review of an ontology, which is again subject to their own experience. Additionally, we should be thinking about better ways of pushing the burden of ontology evaluation onto the submitters, in a form that can be evaluated in a structured way. For example, in https://github.com/OBOFoundry/OBOFoundry.github.io/issues/1819#issuecomment-1084581942, the ontology submitter probably didn't think very much about whether there was a more appropriate ontology for their efforts, but there's no record of this either way.
Some thoughts and draft text for consideration. I tried to toss in some of the points that have stuck with me over the past couple of years, but treat these as notes rather than firm positions. I think we should soon move the draft text to a GDoc or similar to make the review more fluid/easier to handle, porting it back here when it starts to stabilise.
I'm pre-supposing that we will have some review criteria and we won't be blindly inclusive.
For this and the other criteria there should be clear examples and counter-examples
We can add examples and counter-examples when the criteria / guidance is settled unless we need them sooner to clarify things internally.
Surely a logically inconsistent ontology will always be detected by the dashboard?
The dashboard wasn't flagging any reasoning errors for DISDRIV https://github.com/OBOFoundry/OBOFoundry.github.io/issues/1508, as no axioms were there for the inconsistencies to be detected.
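To make that concrete with a hypothetical example (not DISDRIV's actual axioms): if an ontology asserts $C \sqsubseteq A$ when $C$ conceptually belongs under $B$, a reasoner only reports $C$ as unsatisfiable when the axioms

$$C \sqsubseteq B \quad \text{and} \quad A \sqcap B \sqsubseteq \bot$$

are also present (the latter typically via a DisjointClasses axiom). Without them, both the reasoner and the dashboard see a satisfiable, if wrong, hierarchy.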
"problematic" is very subjective. We have specific tickets for the examples given (inappropriately place imported classes / axiom injection), we should fast track these into concrete guidance
A list of these would be helpful. I agree that these should be translated from experience into guidance, but we should all help compile the list.
The OBO Community has, historically, distinguished between "Library" and the more rigorously reviewed "Foundry" ontologies.
The OBO Library served as a collection of experimental, fledgling, highly specialized (i.e. designed for use by a very specific community or in a specific project), or similar resources which were of interest to the community, but not designed or usable as generic reference ontologies for the wider community. Ontologies that were admitted into the Library still, however, attempted to align to the OBO Principles, especially in avoiding thematic/content overlap with existing ontologies in both the OBO Library and Foundry (in favor of reuse).
Foundry ontologies were those that had been manually reviewed by (typically senior) members of the OBO Community for more strict compliance to the OBO Principles and suitability as generic references usable across many projects and/or communities. More emphasis was placed on reusing content from other OBO ontologies and accommodating the requests of their user base.
The distinction between Library and Foundry ontologies was and is multifaceted, contributing to the difficulty of consistently maintaining this distinction. Thus, reviews and the decisions to admit an ontology into the Library or Foundry were done on a case by case basis, often in ways and/or with reasoning that were difficult to accurately document or communicate to the OBO Community at large. To improve consistency and clarity, the OBO Operations Committee has been discussing how to both document and refactor its review processes to be more transparent, reproducible, inclusive, and accurate.
In this document, we will develop a consensus on both the categorisation of ontologies in the OBO Community and the standard operating procedure we will use to evaluate proposed additions to the OBO space.
NB: "Ontological artifact" is used as we may and do have resources in OBO that are not ontologies in the strict sense (often for good reason), such as the NCBI Taxonomy and the NCI Thesaurus.
The typology of ontological artifacts noted below is an "unpacking" of the key elements that defined resources in the "Foundry" and "Library" categories. One or more of the types listed below should be applied to new and existing artifacts (e.g. as tags) to make their scope and intent clear. Reviews can then be more focused on evaluating what the artifact claims to be capable of and intended for, and users more aware of the nature of the resource they are using.
Artifact by semantic expressivity (assuming all OBO resources should follow the genus-differentia model: subclasses always inherit and never lose the attributes of their superclasses. This is key to ensuring cross-resource interoperability and importability; see the sketch after this list).
[Add thesauri/taxonomies built on ontological principles?]
Artifact by mission
[ADD AS NEEDED]
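As a sketch of the genus-differentia pattern referred to under "Artifact by semantic expressivity" above (the class and property names are made up purely for illustration): a defined class is its genus restricted by a differentia,

$$\text{MotileCell} \equiv \text{Cell} \sqcap \exists \text{hasQuality}.\text{Motility}$$

which entails $\text{MotileCell} \sqsubseteq \text{Cell}$, so every necessary condition asserted for $\text{Cell}$ also applies to $\text{MotileCell}$: the subclass can add attributes but never drop them.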
Following successful admission, artifacts may be subject to re-review on request from their authors and/or users. The same criteria expressed above will be used. Re-review is recommended should the artifact go through any major changes or if its scope changes. This is especially true if the scope changes such that the artifact has domain overlap with another artifact in the OBO collection.
see also: #1140
To extend the above, I want to reiterate my position on the matter:
I have been advocating lighter human reviews and a lower bar for admission, and much higher standards, with ongoing checks, for ontologies already in the Foundry to be considered reference ontologies. Here is what I would like concretely:
COB compliance means two simple things:
Non-overlapping term scopes means two simple things:
Coherency means that an ontology must have no unsatisfiable classes when merged with all of its dependent OBO ontologies. A "dependent OBO ontology" is defined as "any ontology in OBO from which the depending ontology uses one or more terms".
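For concreteness, here is a minimal sketch of what such a check could look like, assuming owlready2 with its bundled HermiT reasoner and local copies of the files (file names and the dependency list are placeholders):

```python
# Sketch of the proposed coherency check: merge the candidate ontology with the
# OBO ontologies it uses terms from, run a reasoner, and fail if the merged
# ontology is inconsistent or any class is unsatisfiable (inferred equivalent
# to owl:Nothing).
from owlready2 import (
    get_ontology,
    default_world,
    sync_reasoner,
    OwlReadyInconsistentOntologyError,
)

# Candidate plus its "dependent OBO ontologies" in the sense defined above.
paths = [
    "file:///data/candidate.owl",
    "file:///data/obi.owl",   # placeholder dependency
    "file:///data/pato.owl",  # placeholder dependency
]
for path in paths:
    get_ontology(path).load()

try:
    # Reasons over everything loaded into the default world (HermiT by default).
    sync_reasoner()
except OwlReadyInconsistentOntologyError:
    raise SystemExit("FAIL: merged ontology is inconsistent (has no model)")

# owlready2 calls unsatisfiable classes "inconsistent_classes".
unsatisfiable = list(default_world.inconsistent_classes())
if unsatisfiable:
    print(f"FAIL: {len(unsatisfiable)} unsatisfiable classes after merging:")
    for cls in unsatisfiable:
        print("  -", cls.iri)
else:
    print("OK: coherent (no unsatisfiable classes)")
```

The same thing could presumably be wired up with ROBOT (merge, then reason) in each ontology's CI; the essential logic is just merge, reason, and fail on anything equivalent to owl:Nothing.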
Some details need to be hashed out, like ontologies with already overlapping scope (DO, Mondo, NCIT, OMIT), but this is immaterial - it's just the principle.
Where are we at with this? Do we want to discuss this during the governance-related call (whenever we manage to schedule that)?
We agreed during the call that this ticket can be closed. Review criteria are now codified, but we still need to discuss "Foundry status". Someone will make a new ticket for that when there's a coherent proposal.
This issue is to provide a draft of the criteria that ontologies need to meet in order to be added to the OBO Foundry (OBOF). The topic has been discussed many times during operations committee calls, and a request to start documenting the criteria was made at the April 19, 2022 call.
Historical context: The OBOF has had “reviewed” and library ontologies. The former are typically established reference ontologies and involved manual review for meeting OBOF principles. Library ontologies were admitted if they committed to following OBOF principles and were not obviously in conflict with existing OBOF ontologies. The reviews and decisions to admit were done on a case-by-case basis. There have been cases of not admitting ontologies that were simply repeats of existing resources or not scientifically based. In the past couple of years there has been a movement away from this approach, motivated by a desire to get away from the reviewed-versus-library status and by the introduction of the Dashboard to programmatically check whether principles are met.
Current review practice: Progress has been made towards standardization of requirements to become part of the OBOF. An issue needs to be created at https://github.com/OBOFoundry/OBOFoundry.github.io/issues and a detailed template (https://github.com/OBOFoundry/OBOFoundry.github.io/issues/new?assignees=&labels=new+ontology&template=new-ontology.yml&title=Request+for+new+ontology+%5BNAME%5D) needs to be satisfactorily filled out. The template includes a pre-registration checklist that essentially requires the submitter to agree to OBOF principles in order to check all the boxes. The process includes passing the provisional Dashboard (https://obofoundry.org/obo-nor.github.io/dashboard/index.html). It is still possible for a submitter to agree to everything and for the ontology to pass the Dashboard without the ontology actually being logically consistent, scientifically accurate, or following the principles as intended. A review of some kind is made as part of the OBOF operations committee's discussion of admittance.
OBOF operations committee discussion points to be applied in future reviews:
We don’t want to admit ontologies whose use of imported terms or creation of new ones is problematic (e.g., inappropriately placed imported classes, created/applied object properties that don’t make sense). Such logical inconsistencies will prevent interoperability with other OBOF ontologies and cause confusion if others try to use them.
We don’t want to admit ontologies that claim to cover a domain but whose contributions are problematic (e.g., they don’t really cover the area claimed, or they make inaccurate assertions). Such ontologies will be scientifically untrustworthy for others to use and will make it hard for other ontologies to provide better coverage of the domain.
We don’t expect new ontologies to be perfect, but we do expect them to be responsive to obvious or widespread problems. If the submitter makes good-faith efforts to respond to identified problems, then the ontology should be admitted.
A report from a reviewer could use these points to indicate whether there are widespread or glaring logical problems and/or serious coverage and accuracy problems. If there are such problems, then the ontology would need to demonstrate a good-faith effort through visible changes to the ontology before admittance.