Closed timothystewart6 closed 9 months ago
I think this is just a side effect of how slow the VMs are in GitHub.
@timothystewart6 I assume you are talking about the "Test" workflow?
I did just take a look and found that out of the last 25 runs of the workflow, 12 runs failed.

So, let's look at the problems:
`requirements.txt` conflicts: These are bound to re-occur in dependabot's merge requests, as with a "flat" `requirements.txt` file we do not differentiate between direct dependencies (stuff that we use directly) and indirect dependencies (stuff that we do not use directly, but that our dependencies or other indirect dependencies use). So if any package in our dependency tree has an upper version limit on any of its dependencies and dependabot updates it, we will get these failures. The solution to this is separating our direct dependencies from the "whole, flat" list of dependencies, e.g. using `pip-compile` (which is also supported by dependabot).

`molecule.yml` schema in molecule 4.0.2: In the default and ipv6 scenarios' `molecule.yml`, I have been using YAML anchors so that common properties of the nodes (like OS, hardware resources) do not have to be repeated. The "anchored" nodes live in a key that `ansible-lint` does not read out. Since molecule 4.0.2 has changed the schema validation implementation, these additional nodes are now no longer accepted. We can either:
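For context, the anchor pattern being discussed looks roughly like this (the key name, box, and resource values below are made up for illustration, not copied from the repository's actual `molecule.yml`):

```yaml
# Hypothetical molecule.yml excerpt. The extra top-level key carries the
# anchored defaults; molecule 4.0.2's stricter schema validation now
# rejects such additional keys.
_node_defaults: &node_defaults
  box: generic/ubuntu2204
  memory: 2048
  cpus: 2

platforms:
  - name: control1
    <<: *node_defaults   # merge the shared properties into this node
  - name: node1
    <<: *node_defaults
```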
@sleiner Thank you for the analysis! I am going to spin up some self-hosted runners to see if that eases the failures due to performance.
RE: `requirements.txt`: good to know! I will see if I can straighten this out. I am a fan of freezing the requirements; however, it might be causing too much churn having dependabot look at our child dependencies.
RE: `molecule.yml` schema: good to know. It's too bad we can't use anchors, but I am OK with repeating, since it's something that shouldn't change too often.
> RE: `requirements.txt`: good to know! I will see if I can straighten this out. I am a fan of freezing the requirements; however, it might be causing too much churn having dependabot look at our child dependencies.
If we were not freezing dependencies right now, CI would be broken because of the schema changes in molecule 4.0.2, so I agree that this is sensible for this project. Just to make sure that we are on the same page: With `pip-compile`, you would still freeze the dependencies, but separate the frozen environment (`requirements.txt`) from your "known requirements" (`requirements.in`: only direct dependencies, usually only lower bounds on version numbers, except when you're sure a newer version would break something). Through that separation, you get multiple advantages, like:

- Updates to indirect dependencies no longer conflicting with `requirements.txt` (likewise, if we do not need some indirect dependency anymore, it would currently not be removed from the list, but kept on being updated).
- No merge conflicts in `requirements.txt` (as seen in the original post): `requirements.in` contains the actual constraints. When freezing requirements (thus creating `requirements.txt`), everything else can be changed as needed (usually to the latest available and compatible version).

Also, if this was a "real Python project", one should probably use `pyproject.toml` instead of `requirements.txt`, as well as a dependency manager that comes closer to the state of the art (in my experience, pdm is currently in the lead; other contestants are poetry and pipenv). But for the scope of this project (we just need to install some Python tools), the really basic `requirements.txt` approach with `pip-compile` seems well-suited to me.
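A minimal sketch of that split (the package names and version bounds below are just examples, not the project's real dependency list):

```shell
# requirements.in: only direct dependencies, usually with loose lower bounds.
cat > requirements.in <<'EOF'
ansible-core>=2.13
molecule>=4.0
EOF

# pip-compile (from the pip-tools package) would then resolve the full
# dependency tree and write a fully pinned requirements.txt:
#   pip-compile --output-file=requirements.txt requirements.in
# and later, to refresh all pins to the newest compatible versions:
#   pip-compile --upgrade
echo "wrote $(wc -l < requirements.in) direct dependencies to requirements.in"
```

Dependabot then only needs to watch `requirements.in` for direct dependencies, while re-running `pip-compile` keeps the frozen `requirements.txt` consistent.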
I think these can be addressed in separate PRs. Thank you again for your insight!
I decided not to pursue self-hosted runners for now, at least not in Kubernetes. https://github.com/techno-tim/k3s-ansible/pull/136
Expected Behavior
CI should not fail as often
Current Behavior
CI seems to fail quite a bit, and I think this is just a side effect of how slow the VMs are in GitHub.
I see a few possibilities here: