nreinicke closed this issue 2 years ago
Pull request open: https://github.com/NREL/mappymatch/pull/137
So this is interesting. I have not been able to get the dependencies to install on the test runner, even with Conda, which is why I was going with the Docker approach.
On a separate note, I don't think this is how we want to run the CI. The install step is pulling Mappymatch from pip to get the dependencies needed to run the tests. But if we change the dependencies, then we will have to either not run the tests or change the GitHub action. I think we want it to install from a requirements.txt instead.
if we change the dependencies, then we will have to either not run the tests or change the GitHub action.
Ah, right, I suppose we could just revise the install to be pip install . so that it installs from source rather than pulling from PyPI.
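If we go that route, the install step in the workflow might look something like this (a minimal sketch; the step name and surrounding workflow are assumed, not taken from the actual action file):

```yaml
# Sketch of a CI install step that builds from the checked-out source
# rather than pulling mappymatch from PyPI, so dependency changes on
# the branch under test are actually exercised.
- name: Install package
  run: pip install .
```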
@machallboyd Just a heads up: the requirements.txt should not lock versions, since that will cause conflicts across the different Python versions. It should look similar to our environment.yml. You can specify minimum versions if it helps the install go better.
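For illustration, an unpinned requirements.txt with only lower bounds might look like this (package names and versions here are hypothetical, not mappymatch's actual dependency list):

```text
# Hypothetical requirements.txt: lower bounds only, no exact pins,
# so one file can satisfy the Python 3.8, 3.9, and 3.10 builds.
geopandas>=0.10
networkx>=2.6
```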
@Rowlando13 For something like this, wouldn't we want to explicitly pin the latest version of the dependencies supported by Python 3.8? It seems like the generally accepted practice is to explicitly pin versions in requirements.txt, as opposed to the open-ended setup.py.
You could pin the dependencies, but it's likely the 3.9 and 3.10 builds would fail due to incompatible packages. I am assuming we still want 3.9 and 3.10 in our CI, and that you were not planning on making three requirements.txt files. Also, the default way to configure the GitHub task runners installs the latest 3.8.x, which could also make the build fail unexpectedly. I assume there is a way to lock the Python version, but I don't know what it is. In any case, we would have to manually roll the requirements.txt periodically to make sure we are running the tests on the latest versions.
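On locking the interpreter: actions/setup-python does accept an exact patch version, so something like the following sketch would stop the runner from silently tracking the latest 3.8.x (the specific patch number here is just an example):

```yaml
# Sketch: pinning an exact CPython patch release in the workflow,
# instead of a floating "3.8" that resolves to the newest 3.8.x.
- uses: actions/setup-python@v4
  with:
    python-version: "3.8.16"  # hypothetical pinned patch version
```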
Generally, you want to pin dependencies for applications to minimize errors, but you don't want to for packages so that they can be widely installed. I think this falls in somewhat of a gray area since it is for our tests. But I think not pinning them will save us a lot of repetitive work. Thoughts, @nreinicke?
Seems like it's working ok so far. What's the expected point of failure? https://github.com/NREL/mappymatch/pull/139
You could pin the dependencies, but it's likely the 3.9 and 3.10 builds would fail due to incompatible packages. I am assuming we still want 3.9 and 3.10 in our CI, and that you were not planning on making three requirements.txt files. Also, the default way to configure the GitHub task runners installs the latest 3.8.x, which could also make the build fail unexpectedly. I assume there is a way to lock the Python version, but I don't know what it is. In any case, we would have to manually roll the requirements.txt periodically to make sure we are running the tests on the latest versions.
I'd argue that we want a stable test environment with explicit PRs for dependency updates in that environment.
For instance, say I check out a branch, make some changes, and submit a PR. Unbeknownst to me, a dependency has silently updated and is being installed to the test environment, and it breaks mappymatch. Certainly this is valuable to know, but because it's my PR that's failing, I spend all my time trying to track down the bug in my own code, rather than realizing it's actually the test environment that's failed.
Alternatively, we could set it up so that running pip-compile updates the dependencies in the requirements.txt, and check that in as a separate PR occasionally. If that breaks, we know a dependency is breaking mappymatch, and not some of our code. (This is basically where it is now, but I want to build in support for the dev requirements instead of installing them at the CI level.)
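As a sketch of that pip-tools flow (assuming a requirements.in file listing the loose, unpinned dependencies, which is pip-tools' convention rather than anything already in the repo):

```shell
# One-time: install pip-tools, then compile loose specs into exact pins.
pip install pip-tools
pip-compile requirements.in --output-file requirements.txt

# In the occasional "update dependencies" PR: refresh every pin.
pip-compile --upgrade requirements.in --output-file requirements.txt
```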
As @Rowlando13 says, however, this is more manual work. We could get the best of all worlds by supporting dependabot and having it generate dependency update PRs for us.
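For reference, enabling that would just be a small config file; a minimal sketch (the schedule interval is illustrative):

```yaml
# Sketch of .github/dependabot.yml: dependabot watches the pip
# manifests (requirements.txt / setup.py) and opens update PRs.
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
```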
Generally, you want to pin dependencies for applications to minimize errors, but you don't want to for packages so that they can be widely installed. I think this falls in somewhat of a gray area since it is for our tests. But I think not pinning them will save us a lot of repetitive work. Thoughts, @nreinicke?
This is all good discussion. Adding my own two cents:
I would definitely prefer to not pin any of the dependencies (unless we're pinning lower bounds or excluding a known broken package).
I spend all my time trying to track down the bug in my own code, rather than realizing it's actually the test environment that's failed.
This is a good point, but I'm struggling to think of a scenario where this wouldn't be obvious. If our tests are passing locally but failing in our CI runner, then we're likely dealing with a dependency issue.
this is more manual work. We could get the best of all worlds by supporting dependabot and having it generate dependency update PRs for us.
I'm not totally familiar with dependabot, but this could be a good solution in the long term for dealing with dependencies. I'm hesitant to sign us up for committing to manually updating a set of pinned dependencies.
Also, I thought this SO answer was a good summary of pinning vs not pinning dependencies
For that SO answer, I think that mappymatch's dependencies are, for these purposes, already unpinned. pip install mappymatch (or pip install . for potential test environment setup purposes) uses setup.py, where the dependencies are unpinned. A requirements.txt file is more useful for defining a stable environment to run in (such as for a production website) or a stable test environment.
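One common way to get a stable dev setup without a separate requirements step is a setuptools extra; a hypothetical sketch (the extra's name and the tool list are assumptions, not what the linked PR actually does):

```python
# Hypothetical setup.py fragment: runtime deps stay unpinned, while a
# "tests" extra lets CI install everything with `pip install ".[tests]"`.
from setuptools import setup, find_packages

setup(
    name="mappymatch",
    packages=find_packages(),
    install_requires=["geopandas>=0.10"],  # illustrative runtime dep
    extras_require={
        "tests": ["pytest>=7.0", "mypy", "black"],  # illustrative tools
    },
)
```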
I have a proposal for using requirements files to simplify our contributor and test setup processes, but I think it may be faster to just open a PR in order to demonstrate, if you'll give me a bit here...
That took longer than I thought but the PR is here: https://github.com/NREL/mappymatch/pull/140
Right now we run linting, formatting, and type checking as part of our CI checks. We should add a step to run the test suite and make sure it passes before we merge any PR. We looked into this briefly and ran into some hurdles with respect to getting the environment set up. Continue looking into this and get the tests set up in the workflow.