Open tillahoffmann opened 1 year ago
I think `pylint` is the only requirement I would personally consider locking. If something like `mypy` updates and detects new type errors, we probably actually do want to know about those (to my knowledge this has basically never happened in our repo). Similarly for things like `numpy`.
That seems fair.
I've been locking dependencies and then periodically updating the lock file. If the build for the commit that updates the lock file fails, I know that a dependency is the underlying cause (rather than some other non-deterministic culprit), e.g., https://github.com/testcontainers/testcontainers-python/pull/274. But that may be overkill, and locking just `pylint` gets the job done.
Do you know if we could have that periodic update be automated (e.g. by something like dependabot)?
If we could have something which runs on the first of the month and updates that lock file it would be feasible to lock all the test dependencies, I think.
Yes, that should be possible with https://github.com/marketplace/actions/create-pull-request.
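A minimal sketch of such a scheduled workflow, assuming `pip-tools` is used to regenerate the lock file and the `peter-evans/create-pull-request` action linked above (the workflow file name, branch name, and lock file name are illustrative):

```yaml
# .github/workflows/update-lock.yml (hypothetical file name)
name: Update lock file
on:
  schedule:
    - cron: "0 0 1 * *"  # first of every month
  workflow_dispatch: {}  # also allow manual runs
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
      - name: Regenerate lock file
        run: |
          pip install pip-tools
          pip-compile --upgrade requirements-test.txt --output-file requirements-test.lock
      - name: Open pull request with changes
        uses: peter-evans/create-pull-request@v6
        with:
          commit-message: Update test dependency lock file
          title: Update test dependency lock file
          branch: update-lock-file
```

If the CI run on the resulting pull request fails, the failure is immediately attributable to a dependency update rather than to a code change.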
The CI pipeline currently installs requirements from `requirements.txt`, `requirements-test.txt`, and `requirements-optional.txt`. These files list semantic requirements, e.g., `numpy`, but don't "lock" the specific version, e.g., `numpy==x.y.z`. During testing, this can lead to unexpected failures caused by changes in underlying packages rather than changes related to the code in this repository.

For example, this run succeeded with `pylint==2.15.10`, but this run failed with `pylint==2.16.2`. This was due to the change https://github.com/PyCQA/pylint/issues/7690 released with pylint 2.16.0.

Keeping the requirements semantic for the actual package, but locking the requirements for CI for reproducible builds, has worked well for me in the past. I'm happy to send a PR if this is of interest but wanted to check with you before making any changes.
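To illustrate the distinction, the setup I have in mind keeps two files per requirements set: the semantic file stays as the source of truth, and a generated lock file is what CI installs (file names, the choice of `pip-compile` as the locking tool, and the pinned versions below are all illustrative):

```
# requirements-test.txt -- semantic, edited by hand
pylint

# requirements-test.lock -- pinned, generated (e.g. by pip-tools' pip-compile)
# and installed in CI with `pip install -r requirements-test.lock`
pylint==2.15.10      # versions shown here are examples only
astroid==2.13.3      # transitive dependencies get pinned as well
```

With this split, `pip install -e .` for users still resolves against the loose semantic requirements, while CI builds are reproducible until the lock file is deliberately regenerated.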