Open matentzn opened 4 years ago
Rather more than I hoped. Can you summarise what these are (a readable diff?) and which ontologies were failing because of them?
I can't come up with a comprehensive list of ontologies (I only ever saw things break when many ontologies were merged together), but I removed 8 axioms in total:
!! EquivalentClasses(ObjectSomeValuesFrom(<has part> ObjectSomeValuesFrom(<bearer of> <mass>)) ObjectSomeValuesFrom(<bearer of> <mass>) )
EquivalentClasses(ObjectSomeValuesFrom(<bearer of> <mineralised>) ObjectSomeValuesFrom(<RO_0002473> <CHEBI_46662>) )
EquivalentClasses(ObjectSomeValuesFrom(<bearer of> <multi organismal process quality>) ObjectMinCardinality(2 <RO_0000057> <CARO_0001010>) )
EquivalentClasses(ObjectSomeValuesFrom(<bearer of> <single organismal process quality>) ObjectExactCardinality(1 <RO_0000057> <CARO_0001010>) )
!! SubClassOf(ObjectSomeValuesFrom(<bearer of> <process quality>) <BFO_0000003>)
!! SubClassOf(ObjectSomeValuesFrom(<bearer of> <physical object quality>) <BFO_0000040>)
SubClassOf(ObjectSomeValuesFrom(<bearer of> <cellular quality>) <CL_0000000>)
SubClassOf(ObjectSomeValuesFrom(<bearer of> <PCO_0000003>) <PCO_0000001>)
We can introduce them back, but to be honest I didn't discriminate much; I am so far behind on my uPheno schedule that I simply removed all GCIs or EquivalentClasses axioms in PATO without a genus: basically anything that would infer a type if a PATO quality is present. The only ones I know to have caused problems are the ones I flagged above with the double !!. The PCO axiom I removed because it does not seem to have anything to do with PATO, and the two terms keep coming up everywhere as ugly IRIs that are never populated by terms. Let me know how you want to play this!
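For what it's worth, the "no genus" criterion can be checked mechanically. A minimal sketch in Python (a toy scan over functional-syntax strings, not the OWL API; it assumes labels with spaces only occur inside constructors):

```python
import re

NAMED = re.compile(r"<[^<>]+>")  # a bare IRI/label with no constructor around it

def split_operands(body: str) -> list:
    """Split the top-level operands of an axiom body on spaces,
    tracking parenthesis depth so nested expressions stay intact."""
    operands, depth, start = [], 0, 0
    for i, ch in enumerate(body):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == " " and depth == 0:
            operands.append(body[start:i])
            start = i + 1
    operands.append(body[start:])
    return [o for o in operands if o]

def is_genus_free(axiom: str) -> bool:
    """True for an EquivalentClasses axiom with no named class among its
    operands, or for a SubClassOf GCI (anonymous subclass expression)."""
    m = re.match(r"(EquivalentClasses|SubClassOf)\((.*)\)\s*$", axiom.strip())
    if not m:
        return False
    kind, operands = m.group(1), split_operands(m.group(2))
    if kind == "SubClassOf":
        return NAMED.fullmatch(operands[0]) is None
    return not any(NAMED.fullmatch(o) for o in operands)
```

With this, the flagged `SubClassOf(ObjectSomeValuesFrom(<bearer of> <process quality>) <BFO_0000003>)` comes back genus-free, while an ordinary named subclass axiom does not.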
You know I don't dispute the correctness of these other axioms; I just think that, because people are people, such strong axiomatisations will cause more problems than they solve in deployed ontologies. However, I think it would be very valuable to put them in a separate ontology used purely for validating and debugging ontologies. What do you think? I will add them back if you insist, of course!
> You know I don't dispute the correctness of these other axioms; I just think that because people are people
I first started adding GCIs to PATO precisely because 'people are people' and sometimes use very inconsistent axiomatisation (e.g. sometimes using PATO qualities and sometimes using part relations), leading to highly incomplete ontologies: the CL nuclear count classifications were so bad they made CL look ridiculous to any competent biologist looking at the relevant grouping terms.
Other GCIs are there for constraint purposes: cellular qualities inhere in cells, physical qualities in physical entities, etc. The constraint is triggered in combination with a disjointness axiom. I think it's worth reviewing whether we care about each of these constraints and, if we do, whether they are better enforced with a QC warning rather than via inconsistency.
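To sketch concretely how such a constraint fires (the `:labels` here are illustrative shorthand rather than real IRIs, and the `#` annotations are not part of functional syntax):

```
SubClassOf(ObjectSomeValuesFrom(:bearer_of :cellular_quality) :cell)      # the GCI
DisjointClasses(:cell :multicellular_anatomical_structure)                # the trigger

SubClassOf(:organism :multicellular_anatomical_structure)                 # downstream ontology
SubClassOf(:organism ObjectSomeValuesFrom(:bearer_of :cellular_quality))  # the misuse
```

A reasoner infers `:organism SubClassOf :cell`, which together with the disjointness makes `:organism` unsatisfiable; without the disjointness axiom, the GCI alone would silently classify `:organism` as a cell instead of raising an error.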
I also think that we need to be able to place these in the context of more traditional design pattern documentation.
It might be sufficient to have informal documentation of a pattern with a PURL (or DOI), used to tag all axioms using it, e.g.
-- START ---
Constraint: Only cells bear cellular properties.
GCI: bearer_of some cellular quality subClassOf cell
cell disjointWith multicellular anatomical structure
cell disjointWith acellular anatomical structure
...
Example of why this constraint is useful:
nuclear count qualities of cells would make no sense to a biologist if applied to non-cells; e.g. we would only apply anucleate or binucleate to a cell, not to any other entity that happens to have no nuclei or two nuclei respectively.
Related axioms bridge between nuclear count qualities of cells and part relationships, e.g.
cell and bearer_of some anucleate EquivalentTo cell and not (has_part some nucleus)
---END---
I think this would be awesome! This would be almost like a unit test; if it could be tied to some nice reporting framework, like @balhoff's markdown serialisations of explanations, it could very nicely complement shape constraint languages like ShEx and SHACL with a more powerful semantic test. I would see it like this: you document these kinds of tests (exactly like your example) in a YAML file library somewhere, and then build a ROBOT-based framework that checks them one by one by 1) first injecting the axioms of the constraint, 2) then outputting any unsat explanations that contain any of the axioms making up the constraint, and 3) highlighting likely culprit axioms.
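One possible shape for such a YAML entry, to make the idea concrete; every field name here is hypothetical, this is not an existing ROBOT or OBO format:

```yaml
# hypothetical schema; field names and :labels invented for illustration
id: only-cells-bear-cellular-qualities
constraint: "Only cells bear cellular qualities."
inject:                       # step 1: axioms added before reasoning
  - "SubClassOf(ObjectSomeValuesFrom(:bearer_of :cellular_quality) :cell)"
  - "DisjointClasses(:cell :multicellular_anatomical_structure)"
on_unsatisfiable:             # steps 2 and 3: reporting
  explanations: markdown      # e.g. @balhoff-style explanation serialisation
  highlight: axioms-shared-with-inject
```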
I could see this kind of thing being developed alongside the OBOCORE initiative. OBOCORE compliance could then be tested in two ways: 1) every class in my ontology inherits from an OBOCORE class, and 2) all the 'unit tests' pass.
I had to remove some GCIs from PATO that are biologically correct but, because of the widespread misuse of PATO qualities, not feasible to keep at the moment (they were causing too many downstream unsatisfiabilities).
We should consider re-introducing them in some way as a means of debugging downstream ontologies that use PATO. @dosumis