Occasionally during GOlr loads on Jenkins, there are periods where the upstream ontologies time out or otherwise fail under somewhat mysterious circumstances. Manually probing after such failures almost always indicates that things are fine; we are unsure of the exact reason, or whether there is a single cause behind these various failures.
To better protect ourselves, and to gain some speed (always nice), we will look at creating an ontology catalog on the fly that we can check and use. Ideally, it looks something like:
```
## TODO: Purge previous catalog generation runs
## TODO: Generate list of ontology files
## Using the above step, loop through (or similar):
owltools AN_ONTOLOGYFILE.owl --slurp-import-closure -d mirror -c mycat.xml
## TODO: Then run check and load commands, using the specified catalog, something like:
owltools --catalog-xml mycat.xml THE_REST_OF_THE_COMMANDS_1
owltools --catalog-xml mycat.xml THE_REST_OF_THE_COMMANDS_2
```
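The outline above could be sketched as a small script. This is only a hypothetical shape, not the actual Jenkins job: the function names, the `*.owl` glob, and the `OWLTOOLS` variable (which defaults to a dry-run `echo` so the sketch is safe to run without owltools installed) are all assumptions, and the `THE_REST_OF_THE_COMMANDS_*` placeholders are kept from the outline.

```shell
#!/bin/sh
# Hypothetical sketch of the on-the-fly catalog build.
# OWLTOOLS defaults to a dry-run echo; point it at the real binary to run.
OWLTOOLS="${OWLTOOLS:-echo owltools}"

build_catalog() {
    # Purge previous catalog generation runs.
    rm -rf mirror mycat.xml
    # Loop over the ontology files (assumed to be *.owl in the working
    # directory), slurping each import closure into a local mirror
    # directory and recording the mappings in mycat.xml.
    for f in *.owl; do
        [ -e "$f" ] || continue
        $OWLTOOLS "$f" --slurp-import-closure -d mirror -c mycat.xml
    done
}

load_with_catalog() {
    # Run the check and load commands against the local catalog,
    # so the load never has to reach upstream servers.
    $OWLTOOLS --catalog-xml mycat.xml THE_REST_OF_THE_COMMANDS_1
    $OWLTOOLS --catalog-xml mycat.xml THE_REST_OF_THE_COMMANDS_2
}

build_catalog
load_with_catalog
```

The point of the two-phase split is that any network failure happens in `build_catalog`, where it can be detected and retried cheaply, while the actual check/load phase reads only from the local mirror via the catalog.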
As we are still loading on Jenkins, this should be rolled out to the 2.4.x and master branches, and then triggered in the Jenkins jobs.
Tagging @cmungall
We've noticed this periodically, but the immediate urgency here is from https://github.com/geneontology/noctua/issues/511