Open josiah-lunit opened 8 months ago
For what it's worth, I regularly use the databases that remain after a given test failure to debug the state of the database at the time of the failure.
What's your use case for using a docker database that's not managed by PMR itself?
For what it's worth, there is a `pmr` utility CLI included with the library which lets you run `pmr postgres` to spin up a database out of band from the tests, to reduce initial test startup latency, if that's why.
And for that reason I know it should work with a preexisting database, so it makes me think you might have something else going on in the source db. Can you include the traceback of the failure?
Debugging the reason for the request above aside, I'm not against an opt-in feature to delete the databases after the test completes. I don't think that ought to be a problem.
Alternatively or additionally, we could add a `pmr postgres --prune-databases` (or the like) helper which basically lists the databases and deletes the ones matching `pytest_mock_resource_db_{x}`.
Just want to get a sense of whether any particular approach would be best for your use case, given any of the above context.
Hey Dan, thanks for getting back to me. We use an existing docker setup to bootstrap the application for interactive testing and development, and to run tests during CI (the reason for that is long-winded, but basically to avoid a docker-in-docker scenario). In CI this isn't an issue since the images are fresh every time.
Personally, during development I lean on databases managed by PMR itself (and by extension the tests I write for it), but occasionally I will be testing interactively, or another member of our team who is already running the application will want to run tests against it. As such this kind of littering isn't a major problem in any way, but it is annoying if you have done, say, 50 test runs against your long-running database, since you will end up with 50 templates and 50 databases (which then makes debugging a little challenging, since you need to figure out which template and which database to look at).
For the moment, I've added code to our testing suite that cleans up after PMR, deleting all the `pytest_mock_resource_db_{x}` and `pmr_template_pg_{hash}` databases and templates, but it would be nice if I could just toggle this on, and then toggle it off when I want to debug a failing test case. Doing this is also a little non-trivial during testing, since you need to be careful about when you attempt to drop the databases and whether they might still be in use during the lifecycle of the testing framework. Having PMR do it would mean not having to worry about when the right time to drop the old databases is.
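For reference, the cleanup we bolted on looks roughly like the sketch below (assuming SQLAlchemy; the function name and the LIKE patterns are just what we derived from the naming described in this thread, not anything PMR exposes):

```python
import sqlalchemy


def drop_leftover_pmr_databases(engine):
    """Drop leftover PMR-created databases and template databases.

    The name patterns are assumptions based on the naming seen on our
    instance, not part of PMR's public API.
    """
    with engine.connect() as conn:
        names = list(
            conn.execute(
                sqlalchemy.text(
                    "SELECT datname FROM pg_database "
                    "WHERE datname LIKE 'pytest_mock_resource_db_%' "
                    "OR datname LIKE 'pmr_template_pg_%'"
                )
            ).scalars()
        )
        for name in names:
            # DROP DATABASE cannot run inside a transaction block, so the
            # engine must be created with isolation_level="AUTOCOMMIT".
            # If the template databases were flagged with datistemplate,
            # they would need ALTER DATABASE ... IS_TEMPLATE FALSE first.
            conn.execute(sqlalchemy.text(f'DROP DATABASE IF EXISTS "{name}"'))


# Example usage against the long-running instance (URL is illustrative):
# engine = sqlalchemy.create_engine(
#     "postgresql://user:password@localhost:5432/postgres",
#     isolation_level="AUTOCOMMIT",
# )
# drop_leftover_pmr_databases(engine)
```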
Finally, there is an issue where if you decide to change the configuration of PMR, it will fail with database exists errors if you disable templating after it has been run with it enabled, which feels like it shouldn't happen.
I don't seem to have any problem flip-flopping between `template_database=True`/`False` when using `pmr postgres` as the mechanism for spinning up the database, which is really just a simple `docker run postgres:<whatever tag>` for the most part.
I think we should be able to reliably clean up `pytest_mock_resource_db_{x}` databases.
Less so template databases; we would need some way to know when to delete the template. I could almost imagine creating one additional (dynamically created) session-scoped fixture per fixture that handled creating the template and cleaning up after it...
FWIW, that's essentially what I need to do in order to manually clean up after it. I create a session-scoped fixture attached to the root database, then, once the other engines have been disposed of (this is known in my case since I attach the root database fixture as a dependency of the database engine fixture that holds the database open), call the cleanup function on the engine, which deletes all the databases and templates.
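In sketch form it looks like the following; the fixture names are my own, `pg` stands in for the fixture returned by `create_postgres_fixture`, and `drop_leftover_pmr_databases` is the helper sketched earlier in this thread:

```python
import pytest
import sqlalchemy


@pytest.fixture(scope="session")
def pmr_root_engine():
    # Root-level connection to the long-running postgres instance; the URL
    # is illustrative and comes from our own configuration in practice.
    engine = sqlalchemy.create_engine(
        "postgresql://user:password@localhost:5432/postgres",
        isolation_level="AUTOCOMMIT",
    )
    yield engine
    # This teardown runs only after every fixture that depends on
    # pmr_root_engine has been torn down, so the leftover databases should
    # no longer be in use and can be dropped.
    drop_leftover_pmr_databases(engine)
    engine.dispose()


@pytest.fixture
def pg_engine(pmr_root_engine, pg):
    # Depending on pmr_root_engine guarantees the cleanup above only happens
    # after this engine is finished with; `pg` is the PMR-provided fixture.
    yield pg
```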
Still working through the various CI failures, but I think there's a workable solution here somewhere.
I can't repro the CI test failures locally (and in fact the CI is running the tests twice, so it's succeeding on the first go in CI even...), so that's unfortunate.
Any chance you'd be interested in installing the candidate branch and testing to see if it fully solves your issue in the interim?
Describe the bug
After running tests using `create_postgres_fixture`, the created databases and templates are not removed upon completion. This leads to the buildup of old entries in an existing database at best, and at worst causes errors if you try to change the configuration (due to existing old databases). I think after the test concludes (before the engine is finally disposed) it should clean up the templates and mock databases it created. At the very least, this behavior should be toggleable.

Environment

To Reproduce
Steps to reproduce the behavior:
1. Run tests using `create_postgres_fixture` against an existing running docker postgres database. Do this multiple times.
2. Connect with `psql` and list the `pytest_mock_resource_db_` databases.

Expected behavior
After running tests using an existing docker container, there should not be any remaining databases. Moreover, changing `template_database` back to True should not still fail.

Actual Behavior
Every test run creates new databases and they persist. Changing `template_database` back to True after running a test with it at False causes a "database already exists" error.

Additional context
This problem only exists if you use an existing docker database instead of one generated by pytest-mock-resources.
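For concreteness, a minimal sketch of the kind of test module involved (it assumes PMR is configured via its config hooks to point at the existing docker postgres, that the fixture yields a SQLAlchemy engine as in the sync default, and that `template_database` is the argument being toggled on `create_postgres_fixture`):

```python
# test_leftovers.py -- reproduction sketch, not the exact suite.
import sqlalchemy
from pytest_mock_resources import create_postgres_fixture

# Toggling template_database between runs is what triggers the
# "database already exists" error described above.
pg = create_postgres_fixture(template_database=True)


def test_select_one(pg):
    # Each run leaves a new pytest_mock_resource_db_{x} database (and a
    # pmr_template_pg_{hash} template) behind on the shared instance.
    with pg.connect() as conn:
        assert conn.execute(sqlalchemy.text("SELECT 1")).scalar() == 1
```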