dosumis opened 1 year ago
For this to work, we need reasoning to happen before the SPARQL runs. This is not currently the case: the SPARQL runs on the triplestore.
It would work if we ran the SPARQL constructs using ROBOT after this stage: https://github.com/VirtualFlyBrain/vfb-pipeline-dumps/blob/vfb_pipeline/dumps.Makefile#L59. I think this means we need to either move all constructs to work with derived files + ROBOT, or have just some of the SPARQL run post-reasoning. These will need a different naming scheme (right now, everything that starts with construct_ runs on the triplestore).
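Roughly, a post-reasoning construct goal could look like the minimal sketch below. All names here (the postreason_construct_ prefix, reasoned.owl, the sparql/postreason/ directory) are placeholders rather than the current pipeline's, but the point is that ROBOT query runs the CONSTRUCT over the reasoned dump instead of on the triplestore:

```Makefile
# Minimal sketch only: run a CONSTRUCT query with ROBOT over the reasoned dump.
# All file and target names are placeholders; the real goals need a naming scheme
# distinct from the construct_ goals that run on the triplestore.
postreason_construct_%.ttl: reasoned.owl sparql/postreason/%.rq
	robot query --input $< --query sparql/postreason/$*.rq $@
```

ROBOT infers the output format from the file extension, so writing the target as .ttl should give Turtle output that can be merged downstream.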
Also need to check whether the product of construct_all.owl ends up in both Neo4j & SOLR (it looks like it goes into SOLR via the obographs.json goal).
CC @hkir-dev
Neo4j2owl is processing these labels as well: https://github.com/VirtualFlyBrain/neo4j2owl/blob/migrate_neo4j_hk/src/main/java/ebi/spot/neo4j2owl/importer/N2OOntologyLoader.java#L381
I can start implementing the ROBOT-based processing of the construct files. I don't think it will cause any performance issues.
PS: We should be careful about memory usage, since ROBOT query uses Jena (AFAIK) and ROBOT reason uses the OWL API. If we combine these functions in a single process, we risk using 2x the memory. But it won't be a problem if we run them as separate processes.
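Concretely, if reason and the post-reasoning constructs stay as separate robot invocations in the Makefile, each one is its own JVM, and the robot wrapper script lets us cap each heap via ROBOT_JAVA_ARGS. A minimal sketch, with placeholder file names, heap sizes and reasoner choice:

```Makefile
# Sketch: reasoning (OWL API) runs in its own JVM with its own heap cap...
reasoned.owl: pre_reasoning_merged.owl
	ROBOT_JAVA_ARGS=-Xmx40g robot reason --reasoner ELK --input $< --output $@

# ...and the post-reasoning CONSTRUCTs (Jena) run in a second, independent JVM,
# so the two heaps are never resident at the same time.
postreason_construct_%.ttl: reasoned.owl sparql/postreason/%.rq
	ROBOT_JAVA_ARGS=-Xmx40g robot query --input $< --query sparql/postreason/$*.rq $@
```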
The new step should run after reasoning but before the merge.
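That ordering can be expressed through the Make dependencies, roughly as below (the target name and the POSTREASON_CONSTRUCTS variable are placeholders):

```Makefile
# Sketch of the ordering only: the merge goal depends on the reasoned ontology plus the
# post-reasoning construct outputs, so Make runs reason, then the constructs, then merge.
final_merged.owl: reasoned.owl $(POSTREASON_CONSTRUCTS)
	robot merge $(patsubst %,--input %,$^) --output $@
```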
Requires pipeline modification specified here: https://github.com/VirtualFlyBrain/vfb-pipeline-dumps/issues/48
Use case - driving query display