wenzeslaus opened 2 months ago
What was useful about these Travis builds, apart from them running for a long time on an older OS, was that they were closer to vanilla OSes, meaning they were less influenced by the contents of the GitHub runner images.
So in that vein, a Docker build would probably fill the same spot, as we choose what we put inside and start from a distro's image.
What remains is a clang build. We do have a clang build for macOS, though.
We could maybe fill that void not with a simple build, but by adding a clang/llvm analysis tool build, similar to our GCC standards check. I don't know if clang-tidy needs a full build, or manages to avoid the same steps as compiling, but it might be an option. Does a build without any tests have value (like can we know that our program works if the build completes, but the build was silently unsuccessful)?
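For what it's worth, clang-tidy generally needs the per-file compiler flags, which usually means capturing a compilation database. A rough sketch of such a job, assuming `bear` can wrap our makefile-based build (the job name and configure flags are placeholders):

```yaml
# Hypothetical job; assumes bear can wrap the makefile build to
# produce compile_commands.json for clang-tidy to consume.
clang-tidy:
  runs-on: ubuntu-22.04
  steps:
    - uses: actions/checkout@v4
    - name: Install toolchain
      run: sudo apt-get update && sudo apt-get install -y clang clang-tidy bear
    - name: Configure
      run: ./configure  # plus whatever flags the regular Ubuntu build uses
    - name: Build and record compile commands
      run: bear -- make -j$(nproc)
    - name: Analyze
      run: run-clang-tidy -p . -quiet
```

Note that the `bear -- make` step still compiles everything once, so this would not be cheaper than a plain build, just more informative.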
...less influenced by the contents of the GitHub runner images.
Good point. Thanks.
So in that vein, a Docker build would probably fill the same spot, as we choose what we put inside and start from a distro's image.
We do build Docker images anyway, but not for PRs. Should we use one or more of the Dockerfiles for this?
We could maybe fill that void...by adding a clang/llvm analysis tool build...
A Clang analysis would indeed be nice, but I'm a little afraid it would need a lot of configuration.
Does a build without any tests have value (like can we know that our program works if the build completes, but the build was silently unsuccessful)?
I think knowing we build with a certain compiler or language standard has value. We don't run the full tests, under the assumption that the software works because the tests pass elsewhere. That's not ideal, but a tradeoff to consume less CI time. Is that what you are thinking about?
Part of my questioning about whether "just" compiling might not be enough was that, currently, the makefile build tries to get really far without failing, while in a CI context we would like it to fail right at the part where the failure is. Another part is that it is not 100% static C/C++: we have a set of shared libraries that are loaded in Python. The C/C++ modules are invoked during the build at least to generate some usage info, so there is some smoke test there (i.e., if there is a segfault on start, we catch it). But do we have an equivalent to know that our Python code is able to load the compiled libraries, again in a smoke test fashion?
If we'd want one, a test of 1 or 2 minutes max might be appropriate.
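As a sketch of what such a short smoke test could be, assuming a standard grass startup (the exact module and Python call are only illustrative):

```yaml
# Illustrative smoke-test steps to run after the build completes.
- name: Smoke test a compiled module
  run: grass --tmp-location XY --exec g.version -e
- name: Smoke test Python loading the compiled libraries
  run: grass --tmp-location XY --exec python3 -c "import grass.script as gs; print(gs.version())"
```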
We do build Docker images anyway, but not for PRs. Should we use one or more of the Dockerfiles for this?
We'll need to at least build Docker images on changes to Dockerfiles at some point too.
I think we should really consider testing the Dockerfiles on every PR to help us avoid situations like #4343. Docker images, just like the Windows and macOS builds, are one of our distribution channels. Maybe one job with the images built in sequence? Or anything else to avoid this need to go back and fix stuff. It would test the images and perhaps help cover some of the cases covered by Travis.
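A minimal sketch of that job, building without pushing so it is safe on PRs (the Dockerfile paths below are placeholders for wherever the real ones live):

```yaml
# Hypothetical PR job; adjust the paths to the actual Dockerfiles.
docker-build:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Build images in sequence, without pushing
      run: |
        for dockerfile in docker/ubuntu/Dockerfile docker/alpine/Dockerfile; do
          docker build -f "$dockerfile" -t grass-pr-test .
        done
```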
I think we should really consider testing the Dockerfiles on every PR to help us avoid situations like #4343. Docker images, just like the Windows and macOS builds, are one of our distribution channels. Maybe one job with the images built in sequence? Or anything else to avoid this need to go back and fix stuff. It would test the images and perhaps help cover some of the cases covered by Travis.
#4343 and #4346 are the result of the fact that none of the runners (but Docker) are configured without `--with-blas` and `--with-lapack`.
There may be less "expensive" solutions than running Docker on every PR.
With and without: that's sort of the idea behind the minimal config variation in the Ubuntu builds.
I've always wanted to have a real "full" and a real "min" configuration on Ubuntu. It makes sense to distinguish them more.
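That could be a small matrix over configure flag sets; the flags below are illustrative, not the actual "full" and "min" lists:

```yaml
# Illustrative matrix; the real "full" set would enable all optional
# dependencies, and "minimal" would omit them.
strategy:
  matrix:
    include:
      - name: full
        configure_flags: "--with-blas --with-lapack"
      - name: minimal
        configure_flags: ""
steps:
  - name: Configure
    run: ./configure ${{ matrix.configure_flags }}
```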
Just a thought: would it be (reasonably) possible to trigger a docker run in a PR via something like `[docker ci]` (in the spirit of `[skip ci]`)?
I suppose something like that is possible. A label or a comment would allow triggering it without actually making commits.
For the normal case of running some checks or steps based on more advanced file filters than what we can do at the top of the workflow, see the advanced checks, where pre-commit is checked only when the pre-commit config changes.
Also note that, from what I remember, the docker workflow isn't ready to run on PRs: it might try to push images to the registry. That will need to be looked at before adding the PR trigger.
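For reference, a label-based gate could look like the following; the label name is just a placeholder:

```yaml
# Hypothetical: run the docker job only when the PR carries a given label.
docker:
  if: contains(github.event.pull_request.labels.*.name, 'CI: docker')
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    # ... the build steps, with any push to the registry disabled ...
```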
As OSGeo will be stopping the Travis subscription, we need to take action and either remove the Travis builds completely or extend the GitHub Actions builds to include the system configurations from Travis. I think the Ubuntu version (only?) is what is unique there.
Travis still (Dec 2023) seems to have free plans for some open-source projects, but only for non-commercial open-source projects (Nov 2020), and it does not seem worth it to me to try to navigate that.