josh-drapeau opened 5 years ago
Hi @not-josh !
Thanks for contributing :-)
I'm a bit surprised by this change. According to the Pytest fixtures documentation, it doesn't seem like exceptions are propagated like this. See Fixture finalization / executing teardown code.
Pytest won't run a fixture's teardown if the initial setup fails for any reason. A try/finally block will still pass the exception upward while ensuring its teardown code is still run.
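As a minimal sketch of why the try/finally pattern works (using a plain generator rather than a real pytest fixture, since pytest drives yield fixtures the same way -- resuming them with next() and injecting failures with throw()):

```python
# Sketch: a yield-style fixture is just a generator. When a failure is thrown
# in at the yield point, the finally block still runs before the exception
# propagates -- teardown happens AND the error reaches the caller.
log = []

def fixture_gen():
    log.append("setup")
    try:
        yield "resource"
    finally:
        log.append("teardown")  # runs even when the test fails

gen = fixture_gen()
value = next(gen)  # run setup, pause at the yield
try:
    gen.throw(RuntimeError("test failed"))  # simulate a failing test
except RuntimeError:
    log.append("exception propagated")

print(log)  # ['setup', 'teardown', 'exception propagated']
```

The teardown ran first, and the exception still propagated upward -- exactly the behavior described above.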
Alternatively (and likely cleaner), this can also be done using the addfinalizer function of the pytest fixture request. In the example below, I've added request as a parameter, moved the teardown code into its own function, and registered that function with the request using addfinalizer. What do you think?
Edit: I've updated my changes to reflect the code snippet below.
@pytest.fixture(scope='session')
def docker_services(
    request, docker_compose_file, docker_allow_fallback, docker_compose_project_name
):
    """Ensure all Docker-based services are up and running."""

    def _cleanup():
        # Clean up.
        docker_compose.execute('down -v')

    docker_compose = DockerComposeExecutor(
        docker_compose_file, docker_compose_project_name
    )

    # If we are allowed to run without Docker, check its presence.
    if docker_allow_fallback is True:
        try:
            execute('docker ps')
        except Exception:
            # Run against localhost.
            yield Services(docker_compose, docker_allow_fallback=True)
            return

    # If a failure happens beyond this point, run cleanup.
    request.addfinalizer(_cleanup)

    # Spawn containers.
    docker_compose.execute('up --build -d')

    # Let test(s) run.
    yield Services(docker_compose)
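The key property of addfinalizer here can be sketched without Docker or pytest at all: a finalizer registered before a failing setup step still gets run at teardown. The snippet below mimics pytest's behavior with a plain finalizer list (pytest calls registered finalizers in reverse order during teardown); the event names are just stand-ins for the compose commands:

```python
# Sketch: a finalizer registered BEFORE a failing setup step still runs,
# which is what makes request.addfinalizer safer than code after the yield.
events = []
finalizers = []

def fixture_with_finalizer():
    events.append("up --build -d")                        # containers started
    finalizers.append(lambda: events.append("down -v"))   # cleanup registered early
    raise RuntimeError("boom")                            # setup fails after registration
    yield  # never reached

gen = fixture_with_finalizer()
try:
    next(gen)
except RuntimeError:
    # pytest would now run all registered finalizers, newest first.
    for fin in reversed(finalizers):
        fin()

print(events)  # ['up --build -d', 'down -v'] -- cleanup ran despite the failure
```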
Hello @butla!
Raising an exception during or after docker_compose.execute('up --build -d') and before our yield will result in your Docker containers staying up and running without the use of a finalizer. The containers will only clean themselves up if a defined finalizer tells them to.
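The failure mode described above can be shown with a plain generator (a stand-in for the yield fixture -- the event names are illustrative, not the real compose calls): if an exception fires after setup but before the yield, the code after the yield simply never runs.

```python
# Sketch: with teardown written AFTER the yield (no finalizer), a failure
# between setup and yield means the generator never reaches its cleanup.
events = []

def fragile_fixture():
    events.append("up --build -d")   # containers started
    raise RuntimeError("boom")       # failure before the yield
    yield                            # never reached
    events.append("down -v")         # never runs: containers leak

gen = fragile_fixture()
try:
    next(gen)
except RuntimeError:
    pass

print(events)  # ['up --build -d'] -- no 'down -v', containers left running
```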
Part of this is because docker-compose is being run with the -d (detached) flag. Normally a SIGINT will result in a cleanup; however, when running detached you are expected to clean things up manually using down.
As for up running alongside down... the way subprocess.check_output behaves ensures that up will not be run in parallel with down. Once our Python process runs into a KeyboardInterrupt (or any other error), check_output kills the background process it's waiting on using SIGKILL, ensuring the process dies immediately before whatever exception occurred is raised.
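This kill-on-exception behavior of check_output can be demonstrated directly. In this sketch (not the pytest-docker code), a timeout stands in for a KeyboardInterrupt as the exception that interrupts the wait; the child is killed rather than left to finish its 30-second sleep:

```python
import subprocess
import sys
import time

# Sketch: when an exception interrupts check_output's wait, the child process
# is killed before the exception propagates, so nothing is left running.
start = time.monotonic()
interrupted = False
try:
    subprocess.check_output(
        [sys.executable, "-c", "import time; time.sleep(30)"],
        timeout=1,  # interrupts the wait; check_output kills the child first
    )
except subprocess.TimeoutExpired:
    interrupted = True
elapsed = time.monotonic() - start

# We returned after roughly 1 second, not 30: the child did not outlive the wait.
print(interrupted, elapsed < 30)
```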
Running docker-compose down is also pretty safe -- even if our up command failed halfway through! down will only try to remove things created by the Compose file and does not raise any errors if some (or all) containers aren't running. It'll complain, but still no errors!
Also, I've fixed the PEP 8 errors in the tests, however they're still broken due to unrelated changes. :(
@not-josh I see the breakages. Got a solution for that but it raises another question - I'll link the issue/PR.
As for this PR in itself - I'll try to come up with code to illustrate the problem I described, maybe I'll have the time in the next two days.
Issue, leading to the code fix and some thoughts - #35
Is this still valid, please?
Trying to kill pytest with SIGINT (CTRL+C) causes Docker, or its network bridges, to stay alive. I discovered this when one of the surviving network bridges killed my internet connection entirely. At other times, my computer's performance drops because of leftover Docker containers still running. Life is hard for the impatient!
While this will not keep someone mashing CTRL+C safe, I've found this small change reduces the possibility of leftover waste without a noticeable performance cost.