Closed: lewismc closed this issue 4 years ago.
More information on FaCT++ can be found at http://cisc.gmu.edu/education/High_school_internship/kevin/pdf/fact++.pdf
Perhaps I'm not quite understanding the question but I suppose you are not talking about necessarily materializing the inferred information in the triple store, right? Or is this about incorporating the FaCT++ reasoner itself as part of the inferencing capabilities on the triple store? Or a combination of all the above? (Here's a ref that may be relevant: https://franz.com/agraph/support/documentation/current/materializer.html)
Hi @carueda I've re-worded some things above, please let me know if this clarifies.
My opinion is that if a reasoner can create these inferences and they look correct (not inferring some impossibility) then we should let the reasoner do so. That's the whole purpose of reasoners!
I do wonder which reasoners are no longer being supported. ELK is heavily used by the OBO crowd because it is lightweight and handles major errors, but they use something else when an ontology is released. I forget which reasoner that is, though it is one of the ones mentioned here.
@rduerr thanks
...then we should let the reasoner do so.
You mean we should incorporate the inferences? I agree with you.
I do wonder which reasoners are no longer being supported?
I was surprised to find so many reasoner issues (NOT reasoner inferences) when using the Protege master branch to run the reasoning experiments over the SWEET master branch. This is, in large part, due to my lack of knowledge regarding which version of OWL, which description logic (if any), and which extensions SWEET uses.
The reasoners I tried were
org.mindswap.pellet.exceptions.InternalReasonerException: Object Property hasUpperBound is used with a hasValue restriction where the value is a literal: "0"^^integer
The above got me thinking that producing a confusion matrix may be a good way for us to effectively capture the spectrum of reasoning one can do over SWEET. I think this would also be a suitable aid to the publication we are working on!
Very interesting exercise.
I do not think I would by default include the inferred triples in the core SWEET, but I'd need to look at the list. The reason is twofold: (1) the power of the ontological model is not enhanced for the average user, since many of the inferences will either be trivially understood or not particularly important, and anyone who cares about using the more 'interesting' triples will already be running a reasoner on the ontology (perhaps a different reasoner that may infer different and more useful, differently useful, or even contradictory triples); (2) maintaining the resulting triples would be a bear: remove a statement, and which triples do you remove?
Even if you keep all the inferred triples separate (recommended, even though it reduces any value of having them in their 'source' files), you have to re-run the reasoner every time. In this context it might be an 'interesting' side product that people would inspect and use for learning, but that's what I'd call it and label it as: a side product, not directly a part of SWEET.
I would make an exception for triples that are important in their own right; in other words, triples we would want in the SWEET ontology natively even if the statements that produced them were removed, because they are "greater truths". These might form their own cross-ontology set of general statements about the SWEET domain and its interactions.
I recommend sending the reasoner question/observations to the Protege support list. They have many smart ontology maintainers on that list who will have knowledgeable opinions about the reasoners.
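The maintenance concern above ("remove a statement, and which triples do you remove?") can be sketched in a few lines of Python. This is a toy illustration with hypothetical class names, not actual SWEET terms: once inferred triples are materialized into the source files, removing one asserted statement silently leaves behind "facts" that no longer follow.

```python
# Toy illustration of the maintenance problem with materialized
# inferences (hypothetical class names, not actual SWEET axioms).

def closure(edges):
    """Naive fixpoint: transitive closure of SubClassOf edges."""
    closed = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closed):
            for (c, d) in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

asserted = {("Glacier", "IceBody"), ("IceBody", "WaterBody")}
materialized = closure(asserted)  # includes ("Glacier", "WaterBody")

# An editor later removes one asserted axiom...
asserted.discard(("IceBody", "WaterBody"))

# ...and now some materialized triples are stale: they no longer follow
# from the remaining assertions, but nothing in the files flags them.
stale = materialized - closure(asserted)
```

Deciding which materialized triples to retract after an edit is exactly this truth-maintenance problem; keeping inferences out of the source files (or in a clearly labeled side product) sidesteps it.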
Good comments and suggestions everyone.
Here's one more comment, with the caveat that I haven't done that much of semantic work lately. Please correct me if any of this is inaccurate or confusing.
In general:
"then we should let the reasoner do so" - yes. In general, we would just let the reasoner do its job as part of processing requests or queries against the semantic info, be that on a particular ontology or set of ontologies (using a tool like Protege), or as part of a SPARQL endpoint against a complete triple store.
But there are good reasons to materialize some of the inferred information from the reasoner(s):
Why not try running robot reason and compare outputs?
I definitely think it is always a good idea to run this as a check, at least.
I suspect there will be none/few as there are few/no logical defs?
You could also set up a pipeline that leverages envo axioms
@jgraybeal I’ll do exactly that.
@cmungall what is robot reason and compare?
Excellent points @carueda
Robot is an OBO Tool (ROBOT, get it?) for working on ontologies via the command line. It's pretty powerfool and cool. http://robot.obolibrary.org/
Powerfool, love it!
I also have done some blog posts https://douroucouli.wordpress.com/category/software/robot/
The paper on the ROBOT site is also a great start.
oh golly…
I ran the FaCT++ reasoner on the 163 branch and the results can be seen at https://drive.google.com/file/d/1RJ4gUpKGBshiSF-HRBmDI7JCJvo3T3ue/view?usp=sharing As expected they are numerous and widespread. Any comments on this folks?
It's not clear from that file which axioms are new and which are asserted.
You could try using the -a option on robot reason to annotate inferred axioms
Or -n to make a new ontology from new inferences should also work
I agree @cmungall, it is unclear. Here are the results of robot:
robot reason --create-new-ontology true --reasoner ELK --input sweetAll.ttl --annotate-inferred-axioms true --output robotInferred.ttl
2019-11-08 16:45:40,372 ERROR org.semanticweb.owlapi.rdf.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error1 for type Class
2019-11-08 16:45:40,382 ERROR org.semanticweb.owlapi.rdf.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error2 for type Class
2019-11-08 16:45:40,382 ERROR org.semanticweb.owlapi.rdf.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error3 for type Class
2019-11-08 16:45:40,382 ERROR org.semanticweb.owlapi.rdf.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error4 for type Class
2019-11-08 16:45:40,382 ERROR org.semanticweb.owlapi.rdf.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error5 for type Class
2019-11-08 16:45:40,382 ERROR org.semanticweb.owlapi.rdf.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error6 for type Class
2019-11-08 16:45:40,383 ERROR org.semanticweb.owlapi.rdf.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error7 for type Class
2019-11-08 16:45:58,251 ERROR org.semanticweb.owlapi.rdf.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error8 for type Class
2019-11-08 16:45:58,251 ERROR org.semanticweb.owlapi.rdf.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error9 for type Class
2019-11-08 16:45:58,252 ERROR org.semanticweb.owlapi.rdf.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error10 for type Class
2019-11-08 16:45:58,252 ERROR org.semanticweb.owlapi.rdf.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error11 for type Class
2019-11-08 16:46:23,735 ERROR org.obolibrary.robot.ReasonOperation - Reference violations found: 320 - reasoning may be incomplete
2019-11-08 16:46:23,738 ERROR org.obolibrary.robot.ReasonOperation - Reference violation: InvalidReferenceViolation [axiom=SubClassOf(<http://sweetontology.net/stateTimeCycle/Interglacial> ObjectSomeValuesFrom(<http://sweetontology.net/relaMath/minimumOf> <http://sweetontology.net/phenCycle/IceAgeCycle>)), referencedObject=<http://sweetontology.net/relaMath/minimumOf>, category=DANGLING]
2019-11-08 16:46:23,739 ERROR org.obolibrary.robot.ReasonOperation - Reference violation: InvalidReferenceViolation [axiom=SubClassOf(<http://sweetontology.net/stateDataProcessing/Validated> ObjectSomeValuesFrom(<http://sweetontology.net/relaProvenance/hadProcess> <http://sweetontology.net/reprDataServiceValidation/Validation>)), referencedObject=<http://sweetontology.net/relaProvenance/hadProcess>, category=DANGLING]
2019-11-08 16:46:23,739 ERROR org.obolibrary.robot.ReasonOperation - Reference violation: InvalidReferenceViolation [axiom=SubClassOf(<http://sweetontology.net/reprMathOperation/Integral> <http://org.semanticweb.owlapi/error#Error10>), referencedObject=<http://org.semanticweb.owlapi/error#Error10>, category=DANGLING]
2019-11-08 16:46:23,739 ERROR org.obolibrary.robot.ReasonOperation - Reference violation: InvalidReferenceViolation [axiom=SubClassOf(<http://sweetontology.net/realmAtmo/Mesosphere> ObjectHasValue(<http://sweetontology.net/relaPhysical/dTdh> <http://sweetontology.net/propTemperatureGradient/NegativeSlope>)), referencedObject=<http://sweetontology.net/relaPhysical/dTdh>, category=DANGLING]
2019-11-08 16:46:23,739 ERROR org.obolibrary.robot.ReasonOperation - Reference violation: InvalidReferenceViolation [axiom=SubClassOf(<http://sweetontology.net/phenStar/StellarPhenomena> <http://sweetontology.net/phen/Phenomena>), referencedObject=<http://sweetontology.net/phen/Phenomena>, category=DANGLING]
2019-11-08 16:46:23,739 ERROR org.obolibrary.robot.ReasonOperation - Reference violation: InvalidReferenceViolation [axiom=SubClassOf(<http://sweetontology.net/humanCommerce/ManagementSystem> <http://sweetontology.net/human/HumanActivity>), referencedObject=<http://sweetontology.net/human/HumanActivity>, category=DANGLING]
2019-11-08 16:46:23,739 ERROR org.obolibrary.robot.ReasonOperation - Reference violation: InvalidReferenceViolation [axiom=Declaration(ObjectProperty(<http://sweetontology.net/relaPhysical/dTdh>)), referencedObject=<http://sweetontology.net/relaPhysical/dTdh>, category=DANGLING]
2019-11-08 16:46:23,739 ERROR org.obolibrary.robot.ReasonOperation - Reference violation: InvalidReferenceViolation [axiom=SubClassOf(<http://sweetontology.net/human/HumanCapital> <http://sweetontology.net/human/HumanActivity>), referencedObject=<http://sweetontology.net/human/HumanActivity>, category=DANGLING]
2019-11-08 16:46:23,739 ERROR org.obolibrary.robot.ReasonOperation - Reference violation: InvalidReferenceViolation [axiom=SubClassOf(<http://sweetontology.net/reprSciProvenance/Metadata> <http://sweetontology.net/repr/Representation>), referencedObject=<http://sweetontology.net/repr/Representation>, category=DANGLING]
2019-11-08 16:46:23,739 ERROR org.obolibrary.robot.ReasonOperation - Reference violation: InvalidReferenceViolation [axiom=SubClassOf(<http://sweetontology.net/reprMathOperation/Derivative> ObjectAllValuesFrom(<http://sweetontology.net/relaMath/derivativeWithRespectTo> <http://sweetontology.net/reprMath/Variable>)), referencedObject=<http://sweetontology.net/relaMath/derivativeWithRespectTo>, category=DANGLING]
2019-11-08 16:46:23,739 ERROR org.obolibrary.robot.ReasonOperation - Reference violation: InvalidReferenceViolation [axiom=SubClassOf(<http://sweetontology.net/matrMineral/Pyroxine> <http://sweetontology.net/matrMineral/Mineral>), referencedObject=<http://sweetontology.net/matrMineral/Pyroxine>, category=DEPRECATED]
... and the resulting new ontology.
Hmm, it's actually not super-intuitive how to show only new inferred axioms. I have requested this https://github.com/ontodev/robot/issues/588
This is how I'm doing it for now:
robot reason --create-new-ontology-with-annotations true --exclude-tautologies structural -i sweetAll.ttl -r elk -o sweetAll.ttl
Then scanning the file for any axiom annotated with is_inferred
In fact there are no new inferences.
This is not unexpected, as sweet doesn't include logical definitions
But at least there are no unsatisfiable classes
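For anyone repeating the "scan the file for is_inferred" step, it can be rough-sketched without a full RDF toolkit. The snippet below is a hypothetical helper: the regexes assume the exact annotation-block layout ROBOT emitted in this thread, and a real RDF parser (e.g. rdflib) would be far more robust.

```python
import re

# Hypothetical helper to pull is_inferred axiom annotations out of the
# Turtle that ROBOT writes. Assumes the annotation-block layout shown
# in this thread; use a proper RDF parser for anything serious.
ANNOTATION_BLOCK = re.compile(
    r"\[\s*rdf:type owl:Axiom\s*;(?P<body>.*?)\]\s*\.",
    re.DOTALL,
)

def inferred_axioms(turtle_text):
    """Return (annotatedSource, annotatedTarget) pairs for axiom
    annotations carrying oboInOwl#is_inferred = "true"."""
    pairs = []
    for match in ANNOTATION_BLOCK.finditer(turtle_text):
        body = match.group("body")
        if "is_inferred" not in body or '"true"' not in body:
            continue
        src = re.search(r"owl:annotatedSource\s+<([^>]+)>", body)
        tgt = re.search(r"owl:annotatedTarget\s+<?([^>\s;]+)>?", body)
        if src and tgt:
            pairs.append((src.group(1), tgt.group(1)))
    return pairs
```

Run over the reasoner's output file, an empty result list would match the "no new inferences" finding above.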
This sounds positive; however, I somehow find it hard to believe that nothing can be inferred. It just doesn't seem right. We've not changed the SWEET semantics that much since bringing it over to ESIP, and I honestly don't think much work went into it at JPL after Raskin passed away... so I am struggling to grasp that there are no new inferences at all. Put another way, it would mean that, as they currently stand, the SWEET semantics are flawless and that Raskin got it all right the first time around.
I don't have that much confidence... I just don't buy it. I need to study this more.
Just to be clear, there are no new direct entailments. So for example, while there are many cases of "A SubClassOf B, B SubClassOf C => A SubClassOf C" these are not reported.
No new inferences is not surprising for an ontology with the "shape" of SWEET, in either earlier versions or the current version. This is because SWEET is quite SKOS-like: even though it uses OWL, it doesn't utilize many of OWL's features.
Some situations where we might expect new direct SubClassOf inferences:
The fact that no incoherencies were revealed is also not surprising, as there is not much in the way of constraint-type axioms like disjointness.
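The direct-versus-indirect distinction can be made concrete with a toy Python sketch (class names mirror the boundary-layer example later in this thread, but the chain is hypothetical, and this is an illustration, not how ELK is implemented). With only asserted SubClassOf chains, every extra entailment follows by chaining, i.e. it is indirect, so a tool reporting only new direct subsumptions reports nothing.

```python
# Toy sketch: with only asserted SubClassOf chains, all extra
# entailments are indirect, so no *new direct* subsumptions appear.
asserted = {
    ("WellMixedLayer", "AtmosphericBoundaryLayer"),
    ("AtmosphericBoundaryLayer", "PlanetaryBoundaryLayer"),
    ("PlanetaryBoundaryLayer", "BoundaryLayer"),
}

def closure(edges):
    """Naive fixpoint: transitive closure of SubClassOf edges."""
    closed = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closed):
            for (c, d) in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

# Everything beyond the asserted edges is entailed but indirect,
# e.g. ("WellMixedLayer", "BoundaryLayer") follows by chaining.
indirect = closure(asserted) - asserted
```

New direct inferences would only show up with richer axioms, e.g. logical definitions combined with existential restrictions, which SWEET largely lacks.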
So for example, while there are many cases of "A SubClassOf B, B SubClassOf C => A SubClassOf C" these are not reported.
I was seeing lots of these in the inferences generated by FaCT++, for example
### http://sweetontology.net/realmAtmoBoundaryLayer/WellMixedLayer
<http://sweetontology.net/realmAtmoBoundaryLayer/WellMixedLayer> rdfs:subClassOf <http://sweetontology.net/realmAtmoBoundaryLayer/AtmosphericBoundaryLayer> ,
<http://sweetontology.net/realmAtmoBoundaryLayer/BoundaryLayer> ,
<http://sweetontology.net/realmAtmoBoundaryLayer/PlanetaryBoundaryLayer> .
[ rdf:type owl:Axiom ;
owl:annotatedSource <http://sweetontology.net/realmAtmoBoundaryLayer/WellMixedLayer> ;
owl:annotatedProperty rdfs:subClassOf ;
owl:annotatedTarget <http://sweetontology.net/realmAtmoBoundaryLayer/AtmosphericBoundaryLayer> ;
<http://www.geneontology.org/formats/oboInOwl#is_inferred> "true"^^xsd:string
] .
[ rdf:type owl:Axiom ;
owl:annotatedSource <http://sweetontology.net/realmAtmoBoundaryLayer/WellMixedLayer> ;
owl:annotatedProperty rdfs:subClassOf ;
owl:annotatedTarget <http://sweetontology.net/realmAtmoBoundaryLayer/BoundaryLayer> ;
<http://www.geneontology.org/formats/oboInOwl#is_inferred> "true"^^xsd:string
] .
[ rdf:type owl:Axiom ;
owl:annotatedSource <http://sweetontology.net/realmAtmoBoundaryLayer/WellMixedLayer> ;
owl:annotatedProperty rdfs:subClassOf ;
owl:annotatedTarget <http://sweetontology.net/realmAtmoBoundaryLayer/PlanetaryBoundaryLayer> ;
<http://www.geneontology.org/formats/oboInOwl#is_inferred> "true"^^xsd:string
] .
In the example above, we already knew that soreaabl:WellMixedLayer rdfs:subClassOf soreaabl:AtmosphericBoundaryLayer, but not immediately that it was therefore also rdfs:subClassOf soreaabl:BoundaryLayer and soreaabl:PlanetaryBoundaryLayer. Do you have an opinion on whether we should aim to add these explicit relationships?
Additionally, this has surfaced an ugly problem from the past. Did you notice the following classes, which are generated by the OWLAPI? This is shown in the tool output. We even have a FAQ for this.
### http://org.semanticweb.owlapi/error#Error1
<http://org.semanticweb.owlapi/error#Error1> rdfs:subClassOf owl:Thing .
[ rdf:type owl:Axiom ;
owl:annotatedSource <http://org.semanticweb.owlapi/error#Error1> ;
owl:annotatedProperty rdfs:subClassOf ;
owl:annotatedTarget owl:Thing ;
<http://www.geneontology.org/formats/oboInOwl#is_inferred> "true"^^xsd:string
] .
I would not materialize indirect inferred subClassOf axioms in any of the main release files. It may be fine to release a file sweet-saturated-inferences.ttl or similar to make it easier for people to do fast lookups over subclass closures but not particularly necessary
Additionally, this has surfaced an ugly problem from the past.
Yes, I saw that. I thought it was a known issue, but it seems #84 was closed - prematurely?
... but not particularly necessary
OK. I think working on this ticket has been worthwhile... we've identified that many SubClassOf inferences are made and that we are making the decision not to explicitly capture them for the reasons given above. Do you have anything else to add right now @cmungall or can we close this issue off? Thanks for your inputs BTW, extremely helpful.
You can close, but some other tickets that may be spawned:
DONE - https://github.com/ESIPFed/sweet/wiki/SWEET-Release-HOWTO#run-robot-reasoning
DONE
Yes, we can track this through other existing tickets.
I ran the most recent FaCT++ reasoner over the entire suite today, and as expected this produces many inferences which are not currently encoded in the ontology semantics.
For those interested in why I ran this particular reasoner, the answer is simple: it was the only one which executed without error. All the others (Pellet, HermiT, ELK) presented error statements which I do not particularly wish to fix, as from preliminary investigation it looks like some of these reasoners are no longer being actively maintained.
The question at hand is: do we want to incorporate reasoner inferences (e.g. new inferred axioms) into the SWEET files?
Any thoughts?