jeancochrane / pytest-flask-sqlalchemy

A pytest plugin for preserving test isolation in Flask-SQLAlchemy using database transactions.

Feature Request: support conditionally-skipping tests #30

Closed jayaddison closed 3 years ago

jayaddison commented 4 years ago

Hello and thank you for developing and maintaining this library! The ability to safely write unit tests that perform sqlalchemy operations within Flask applications is a breath of fresh air :)

This feature request is for a use case in which database connectivity isn't guaranteed to be available at test runtime, depending on the environment.

For situations like that, it'd be convenient to mark test cases as conditional on database connectivity, so that those tests are skipped (rather than failed) when connectivity isn't available.

I've made some attempts to handle this situation for one codebase, and have some findings and suggestions based on that, but have a sense that the approach may be non-ideal and would benefit from discussion and feedback.

One challenge is that the plugin's _transaction fixture currently attempts to create a database connection and transaction without handling errors (ref). That makes sense when connectivity is expected to be available, and provides concise, fail-fast behaviour that's useful in most cases.

Another challenge is determining how test authors would opt in to connectivity-conditional tests; unittest provides a skipUnless function that could offer a readable, comprehensible expression of 'skip this test unless a transaction object is available'. In practice, it seemed tricky to write a skip-condition function that would run before the injection of test fixtures and that could also be migrated into the plugin in future.
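To illustrate the timing issue (the database_available helper here is hypothetical), a skipUnless condition is evaluated when the test module is collected, before any fixtures exist:

```python
import unittest

def database_available():
    # Hypothetical helper: it would have to probe connectivity itself,
    # because no fixture (and no _transaction) exists at collection time.
    return False

# The condition runs at decoration time, before pytest injects fixtures --
# which is what makes this approach tricky to combine with the plugin.
@unittest.skipUnless(database_available(), "no database connection available")
def test_recursive_query(db_session):
    db_session.execute("SELECT 1")
```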

Alternatively, pytest supports custom markers that could similarly be used to decorate tests with preconditions.
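From the test author's side, a marker-based opt-in could read roughly like this (using the requires_sqlalchemy_connection marker name proposed later in this thread):

```python
import pytest

# The marker declares the precondition; enforcement lives elsewhere (in a
# fixture or hook), so it can run after fixtures have been injected.
@pytest.mark.requires_sqlalchemy_connection
def test_recursive_query(db_session):
    db_session.execute("SELECT 1")
```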

This feature request is accompanied by a pull request attempting to solve the problem via one proposed approach -- adding a custom pytest marker with autouse enabled as part of the project-under-test codebase -- along with some naive error handling in the pytest-flask-sqlalchemy codebase. It would be possible to migrate the custom marker into the plugin codebase at a later date if this seems like a reasonable approach.
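A rough sketch of that autouse idea (simplified; not the exact code from the linked pull request, and it assumes the naive error handling leaves the plugin's fixture empty rather than raising):

```python
import pytest

@pytest.fixture(autouse=True)
def _skip_unless_connected(request):
    # Only marked tests opt in to skipping; all other tests are untouched.
    if request.node.get_closest_marker("requires_sqlalchemy_connection") is None:
        return
    # With the naive error handling described above, a failed connection
    # would leave the plugin's _transaction fixture empty instead of raising.
    if not request.getfixturevalue("_transaction"):
        pytest.skip("no database connection available")
```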

jeancochrane commented 4 years ago

Thanks for this writeup and the initial work! Can you give an example of a case where database connectivity wouldn't be guaranteed at test time? I'm having trouble getting an idea of when this feature would be useful. My instinct is that a unit test should either have database access or not, and indeterminacy would indicate a test that hasn't fully isolated the functionality it seeks to test.

The custom marker implemented in https://github.com/openculinary/backend/pull/20 seems like the right API if this is indeed a feature we want to support in core. I'm not quite sold on the exception handling in https://github.com/jeancochrane/pytest-flask-sqlalchemy/pull/31, however, because I think for the majority of use cases it's important to surface errors in the connection setup rather than suppressing them. Once we decide on the usefulness of this feature I'd be happy to think through some alternatives.

jayaddison commented 4 years ago

Thanks @jeancochrane - the use case is based on my developer workflow, which is evolving (and likely non-ideal; may contain assumptions/processes that could be improved).

At the moment there are generally a few different phases each branch/feature goes through:

  1. Local development without any network services enabled (postgres, elasticsearch, ...), but unit tests can run at any time - this is hopefully the bulk of the development loop for most tasks
  2. Local service-enabled development -- essentially a superset of the previous step
  3. Staging/integration testing -- as above, via continuous integration, and tests must pass before deploy
  4. Production

In short, allowing skippable tests like this allows the same test suite to be used locally and during continuous integration. If the developer would like to be completely sure that the skippable tests pass prior to CI, then they can spin up the local service-enabled environment.

For a while I considered a sort-of implicit assertion that the test suite shouldn't use network services; but there's one particularly involved/fragile area around PostgreSQL recursive querying that I think really deserves unit testing (and hence my discovery of pytest-flask-sqlalchemy :))

I suppose it's possible that it'd make sense to split out the service-dependent tests into a separate suite and run that separately. My hesitation there is simply a wish to maintain exactly the same developer experience (make tests) across both local environments.

jayaddison commented 4 years ago

Good point and understood regarding the exception handling in #31 - it didn't feel like a particularly robust approach to simply return empty results. Glad to consider possibilities there if we do agree this is useful (relevant/custom exceptions, perhaps, if they can be caught by the decorators).

jeancochrane commented 4 years ago

> In short, allowing skippable tests like this allows the same test suite to be used locally and during continuous integration. If the developer would like to be completely sure that the skippable tests pass prior to CI, then they can spin up the local service-enabled environment.

Gotcha, this is helpful! My next question would be: What motivates the goal of supporting local development without network services enabled? It seems to me that introducing conditionally-skippable tests in the environments outlined above would run the risk of non-deterministic runs in the service-enabled environments -- in particular I would worry about cases where network services were unavailable due to e.g. unexpected service outages, causing tests to pass where they shouldn't because the tests were erroneously skipped. (Maybe this particular situation could be mitigated with a config value or environment variable that enables/disables the conditional skipping.) More generally, the local dev environment without network services seems so unnecessarily different from the production environment that I'm wondering what value it brings to counteract the increased uncertainty in tests.

> Glad to consider possibilities there if we do agree this is useful (relevant/custom exceptions, perhaps, if they can be caught by the decorators)

Just left a thought on #31 to this end!

jayaddison commented 4 years ago

> What motivates the goal of supporting local development without network services enabled? It seems to me that introducing conditionally-skippable tests in the environments outlined above would run the risk of non-deterministic runs in the service-enabled environments

It's primarily related to environment setup and resource consumption (the fewer dependencies and processes in the environment, the more responsive, affordable and energy-efficient the developer hardware can be), and secondarily related to development-cycle iteration time (given the tendency of tests that interact with network services to be slower in many cases -- not guaranteed, of course).

That's a good point about the risk of mistakenly allowing a build to pass in the presence of infrastructure failures. It seems the problem is that in some environments (i.e. continuous integration / pre-production checks) we'd like the 'skippable' tests to be considered a 'fail' if network problems occur.

Of the options you suggested, I think environment variables sound preferable since they're generally straightforward to configure for most CI systems, and don't require any file-related operations and/or commits.

If so, re-using the FLASK_ENV variable could be one option, since we can assume the presence of flask; the defined states for that variable are an unset value or the value development (the latter could imply the skippable mode). Or perhaps it's not great to bind a library-specific behaviour to an environment variable defined by another library, in which case a distinctly-named variable might be preferable.
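As a sketch of the distinctly-named variable idea (the variable name here is hypothetical):

```python
import os
import pytest

def handle_connection_failure(request, exc):
    # Hypothetical variable name: left unset in CI, so connection failures
    # remain hard errors there even for marked tests.
    allow_skip = os.environ.get("PYTEST_SQLALCHEMY_ALLOW_SKIP") == "1"
    marked = request.node.get_closest_marker("requires_sqlalchemy_connection")
    if allow_skip and marked:
        pytest.skip(f"database unavailable: {exc}")
    raise exc
```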

jayaddison commented 3 years ago

After more consideration here: re-using FLASK_ENV seems like a not-great idea; that'd bind behaviour in pytest-flask-sqlalchemy to an unrelated concept in Flask.

There's a decorator implemented in this feature branch (in a downstream project) that demonstrates how a configuration-based approach could work alongside connection-time exception handling in #31.

I think that feels more consistent at the moment because pytest-flask-sqlalchemy already uses pytest configuration for setup.
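For example (the option name is hypothetical), a toggle could be registered the same way the plugin registers its existing ini options such as mocked-sessions:

```python
def pytest_addoption(parser):
    # Hypothetical ini option, declared alongside the plugin's existing
    # pytest configuration (e.g. mocked-sessions in setup.cfg).
    parser.addini("allow-connection-skip",
                  "Treat connection failures as skips for marked tests",
                  default="false")

def allow_skip(config):
    # Would be consulted by the connection-time exception handling.
    return config.getini("allow-connection-skip").lower() == "true"
```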

jayaddison commented 3 years ago

Hold up: good news. I think there's an easier way to handle all of this. Experimenting with some changes to the approach at the moment.

In short: I think it's possible to move the marker conditional check into the _transaction fixture itself. That way there is no need for an autouse fixture, and no need for a configuration setting to enable connection exception handling.

jayaddison commented 3 years ago

Hopefully https://github.com/jeancochrane/pytest-flask-sqlalchemy/pull/42 provides a much simpler, more effective way to implement this feature.

In short, the check for the custom marker moves into the existing _transaction fixture. Any test which includes that fixture -- directly, or indirectly via the other pytest-flask-sqlalchemy fixtures -- may encounter a database connection exception.

If an exception does occur at connection-time, then and only then will the fixture check for the presence of the requires_sqlalchemy_connection marker.

No autouse fixture is injected by the plugin, and no environment variables or pytest setup.cfg changes are required in the client application. All the client application has to do is add the custom marker to any relevant test cases.
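A simplified sketch of that control flow (not the verbatim fixture from #42; the error handling and yielded values are abbreviated):

```python
import pytest
from sqlalchemy.exc import OperationalError

@pytest.fixture
def _transaction(request, _engine):
    try:
        connection = _engine.connect()
    except OperationalError as exc:
        # The marker is consulted only after a connection attempt fails,
        # so unmarked tests keep their fail-fast behaviour.
        if request.node.get_closest_marker("requires_sqlalchemy_connection"):
            pytest.skip(f"database connection unavailable: {exc}")
        raise
    transaction = connection.begin()
    yield connection, transaction
    transaction.rollback()
    connection.close()
```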

jayaddison commented 3 years ago

As noted in #42, we can consider this functionality out-of-scope for pytest-flask-sqlalchemy.