mschoettle closed this 2 months ago
@jheld @samamorgan I wanted to capture warnings as errors in `pytest` so that deprecation notices are exposed as early as possible. I noticed in our project, where I tried to switch to the new alpha release, that even though `index_together` is fixed in the model (via #258), the old migration still adds it, causing the `RemovedInDjango51` warning. I think the only way would be to change the old migration such that new users only get the new behaviour, while existing users who already ran this migration on their project get the updates via the migration added in #258. Thoughts?
@samamorgan I noticed that even when the tests fail, the CI job does not fail, due to piping the output to `tee`. I fixed it by explicitly setting the shell to `bash`, which sets `set -eo pipefail`. See: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#exit-codes-and-error-action-preference
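The fix described above can be sketched in workflow YAML. The step name and command are illustrative, but `shell: bash` is the operative line: per the linked docs, GitHub then invokes `bash --noprofile --norc -eo pipefail {0}`, so a failing `pytest` exit code is no longer masked by `tee`:

```yaml
# Illustrative workflow step; job and step names may differ from the real workflow.
steps:
  - name: Run tests
    shell: bash  # runs as `bash --noprofile --norc -eo pipefail {0}`
    run: pytest | tee pytest-output.txt  # pipefail propagates pytest's exit code
```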
Side question: What's the purpose of running `ruff` as part of `pytest`? Would this not be captured by the `pre-commit` job already? As an aside, I would favour a separate `ruff` job instead of mixing it with `pytest`.
On the migrations, I think that just means it's time to squash the migrations. That should get rid of the warning.
Pre-commit should always run as part of CI. A user can easily turn off pre-commit and commit code that does not follow linting rules. CI is the last defense against that. I don't think it should run separately, as that would increase test time since the test environment would have to be regenerated. I'm pretty sensitive to that sort of thing since increased CI time = increased cost in a private repo.
> On the migrations, I think that just means it's time to squash the migrations. That should get rid of the warning.

How does `migrate` behave when an existing project has applied the previous migrations (that were then squashed)?
> Pre-commit should always run as part of CI. A user can easily turn off pre-commit and commit code that does not follow linting rules. CI is the last defense against that.

I agree with you that `pre-commit` should run as part of CI for the reason you stated.
> I don't think it should run separately, as that would increase test time since the test environment would have to be regenerated. I'm pretty sensitive to that sort of thing since increased CI time = increased cost in a private repo.
I am assuming this is in relation to my question about `ruff` and `pytest`? I understand your point. To me these are two separate concerns: one is linting, the other is tests. Since `ruff` and `ruff-format` are run as part of the `pre-commit` job, it is redundant to run them again via `pytest`.
> How does `migrate` behave when an existing project has applied the previous migrations (that were then squashed)?
https://docs.djangoproject.com/en/5.0/topics/migrations/#migration-squashing
What I believe will happen: for those who have already installed this package and run migrations, the new migration will co-exist and do nothing. For those who are installing this for the first time, the squashed migration will be chosen. I think the same PR can both squash the migrations and delete the old ones. Any complexity this represents on behalf of external users of this module will have to be dealt with by those users. This is a normal, expected operation for Django. We may see users come in submitting issues who have gotten themselves into some sort of pickle, and we can just guide them appropriately if that happens, though I suspect even that is unlikely. I trust Django's migration process.
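For context on the mechanism described above: Django's `squashmigrations` command produces a migration whose `replaces` attribute lists the old migrations, which is what makes both upgrade paths work. A hypothetical sketch (app and migration names are made up, not from this repo):

```python
# Hypothetical squashed migration; names are illustrative only.
from django.db import migrations

class Migration(migrations.Migration):
    # Databases that already applied the listed migrations record the squashed
    # migration as applied and do nothing; fresh databases run only this one.
    replaces = [
        ("myapp", "0001_initial"),
        ("myapp", "0002_remove_index_together"),
    ]

    initial = True
    operations = [
        # consolidated CreateModel/AlterField operations go here
    ]
```

Once all installations are known to have moved past the old migrations, the replaced files can be deleted and the `replaces` list removed, which is the "delete the old ones" step mentioned above.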
> I am assuming this is in relation to my question about `ruff` and `pytest`? I understand your point. To me these are two separate concerns. One is linting, the other is tests. Since `ruff` and `ruff-format` are run as part of the `pre-commit` job it is redundant to run it again via `pytest`.
Ahh, I see what you mean. I added `pytest-ruff`, which I totally forgot about. I think the appropriate action is to just remove `pytest-ruff` completely, since `pre-commit` makes that redundant. That way, any expansion of `pre-commit` will be completely covered, and we won't have to add any more cruft to the test suite.
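For reference, the `ruff` and `ruff-format` hooks discussed here typically come from the official `ruff-pre-commit` repository. A minimal `.pre-commit-config.yaml` sketch (the `rev` value is illustrative and should be pinned to a real release):

```yaml
# Sketch of the relevant section of .pre-commit-config.yaml; rev is illustrative.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.0
    hooks:
      - id: ruff          # linting
      - id: ruff-format   # formatting
```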
> I wanted to capture warnings as errors in `pytest` so that deprecation notices are exposed as early as possible.
I think this may be a mistake. Tests are supposed to catch errors. A deprecation warning isn't an error, just a warning of some sort of potential future consequence. Deprecations should be handled on a case-by-case basis, and we don't want tests failing for reasons that aren't actually failures. This reduces the value of tests:

- Tests should illustrate that the code changes made do not cause breaking changes
- Test failures should never become "noise": if a failure happens, it needs to be actionable inside the PR that the test was run in, not require a follow-up PR to address
All that is to say, a PR should only be merged if tests pass. That should be a hard line. I would hate to see a situation where a PR is accepted with failing tests.
Thanks!
Removing `pytest-ruff`: #284
Squashing migrations: #285
> I think this may be a mistake. Tests are supposed to catch errors. A deprecation warning isn't an error, just a warning of some sort of potential future consequence. Deprecations should be handled on a case-by-case basis, and we don't want tests failing for reasons that aren't actually failures. This reduces the value of tests:
>
> - Tests should illustrate that the code changes made do not cause breaking changes
> - Test failures should never become "noise": if a failure happens, it needs to be actionable inside the PR that the test was run in, not require a follow-up PR to address
You raise some good points. The problem that treating warnings as errors solves is that otherwise these warnings are never visible anywhere and will therefore be missed. I found out about a lot of the warnings from this package because we treat them as errors in our project, which subsequently led me to open the corresponding PRs here to address them.
I'm not aware of another way to make them visible without causing the problems that you outline. In any case, these errors should only appear when we add support for a new version of Python and/or Django, which would be in its own PR.
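To illustrate the mechanism under discussion: promoting warnings to errors in pytest does the same thing the stdlib `warnings` filter does, turning a `DeprecationWarning` into an exception that fails the test which triggered it. A minimal sketch (the function name is made up for illustration):

```python
import warnings

def triggers_deprecated_api() -> None:
    # Stand-in for a call into a dependency that emits a deprecation notice.
    warnings.warn("old API, use the replacement instead", DeprecationWarning)

# With the "error" filter active, the warning surfaces as an exception
# instead of being silently swallowed.
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        triggers_deprecated_api()
    except DeprecationWarning as exc:
        print(f"caught as error: {exc}")
```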
@mschoettle I think I've been swayed. One scenario where this would be pretty valuable: Say a contributor adds a third-party package that itself raises Django deprecation warnings (my professional project currently has several packages in this condition). We'd immediately catch that as an error, causing tests to fail, and that gives us clear evidence to reject a contribution.
With #285 merged, I brought this PR up to date, and the tests now pass (no more deprecation warnings).
@jheld Please review
Can you rebase first?
@jheld Done. I also had to update the poetry lock file (it was out of date). I left that as a separate commit.
Make `pytest` treat warnings as errors to catch problems, such as Django deprecation warnings, as early as possible. This should help in keeping up with compatibility for future Django versions.
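The change described in this PR typically amounts to one pytest configuration entry; a sketch in `pyproject.toml` (pytest accepts the equivalent setting in `pytest.ini` or `setup.cfg` as well, and the commented ignore pattern is a made-up example of the case-by-case handling discussed above):

```toml
# pyproject.toml: promote every warning to a test failure.
[tool.pytest.ini_options]
filterwarnings = [
    "error",
    # Specific warnings can still be allowed case by case, e.g.:
    # "ignore::DeprecationWarning:some_third_party_package.*",
]
```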