kltm closed this issue 2 years ago.
release runs often take under an hour; master runs often take more than an hour, and there is an exceptional 2.5hr one there. I'm not sure what the target frequency would be; "hourly" seems like maybe a bit much. A few times a day?
If we do this with our current Jenkins pipeline setup, it would be fairly straightforward to create a new standing pipeline; or, as we haven't been using it for anything else, we could repurpose go-ontology-dev: https://github.com/geneontology/pipeline#go-ontology-dev
Trial on go-ontology-dev.
@balhoff @cmungall @dustine32 As an experiment to see how useful this actually is (I'm having some doubts now), I've set up an ontology build to run every 3 hours, with you three receiving an email when the status changes (success->fail or fail->success); I'm also set up to get continuing-fail notices. I may look into having a failure of this pipeline block all of the release pipelines--that may be more useful.
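For clarity, the notification policy described above (mail the editors only on a state flip, with continuing-fail notices going to the maintainer) can be sketched as a small decision function. This is a hypothetical illustration of the logic, not the actual Jenkins configuration; the notice strings and recipient split are assumptions.

```python
def notifications(prev_ok: bool, curr_ok: bool) -> list[str]:
    """Return which notices a build outcome should trigger, given the
    previous and current build status (True = success)."""
    notices = []
    if prev_ok and not curr_ok:
        # Build just broke: alert the editors.
        notices.append("state-change: success -> fail (mail editors)")
    elif not prev_ok and curr_ok:
        # Build just recovered: alert the editors.
        notices.append("state-change: fail -> success (mail editors)")
    elif not prev_ok and not curr_ok:
        # Still broken: only the maintainer keeps getting pinged.
        notices.append("continuing-fail (mail maintainer only)")
    # Success following success is silent.
    return notices
```

For example, `notifications(True, True)` returns an empty list, so a healthy pipeline stays quiet between the 3-hour runs.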
This has been ticking over for almost a week. Closing for now.
Reopen for additional features: https://github.com/geneontology/go-ontology/issues/23431#issuecomment-1143747040
Hm. Maybe an S3 bucket for the latest failed report? The two-hour gap window is a little awkward with the three-hour cycle.
Should be okay now. Check bucket on next failure. Clearing.
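One minimal way to realize the "latest failed report" bucket idea is to write each failure report to both a timestamped archive key and a fixed "latest" key, so the most recent report is always at a stable location regardless of the gap between runs. The prefix and key layout below are hypothetical, not the actual bucket configuration.

```python
from datetime import datetime

def report_keys(prefix: str, when: datetime) -> tuple[str, str]:
    """Return (archive_key, latest_key) for a failure report.

    The archive key is timestamped so history accumulates; the latest
    key is fixed so a stable URL always serves the newest report.
    """
    stamp = when.strftime("%Y-%m-%dT%H%M%SZ")
    return (
        f"{prefix}/failures/{stamp}-report.txt",
        f"{prefix}/failures/latest-report.txt",
    )
```

The upload step in the pipeline would then copy the same report object to both keys on every failure, overwriting only the latest one.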
Seems to work as expected now.
Reopening to try to add "deeper" analysis to catch errors that might not be apparent in the build or from examining the editors' file. E.g. https://github.com/geneontology/go-ontology/issues/23468 .
Added better targeting of messages depending on failure type and recovery type.
Added clarity on where the reports are for recent failures.
Explore adding "continuous" ontology building to catch errors that cannot be filtered with GitHub Actions during normal ontology editing.
For a recent example of this, a change in UBERON gets picked up live by the GO build and things go pear-shaped: https://github.com/geneontology/go-ontology/issues/23367 , which then in turn blocks normal pipeline operation and testing https://github.com/geneontology/pipeline/issues/286 .
Basically, since the full build currently cannot be run in GHA and its success is not completely dependent on our own ontology development, we would run a regular, higher-frequency ontology-only build that would email key people like @balhoff and @cmungall when something goes wrong, giving us an earlier heads-up about issues.