Closed: gabrieldemarmiesse closed this 9 months ago
Before I take a proper look, did you take a look at https://github.com/gabrieldemarmiesse/python-on-whales/pull/522? I think this would be a nice precursor PR, with the focus on moving to using the `ctr_client` fixture without any distraction of parameterisation.
> Before I take a proper look, did you take a look at https://github.com/gabrieldemarmiesse/python-on-whales/pull/522? I think this would be a nice precursor PR, with the focus on moving to using the `ctr_client` fixture without any distraction of parameterisation.
Indeed I did take a look at it, that's where I took most of this code from :)
Overall this approach seems fine, I'm not particularly convinced it's better than what I worked on (and the case of tests only supported by one runtime needs thinking about), but at a certain point we should just settle on something - better to have something available that works!
I think we should try to have

```python
pytestmark = pytest.mark.parametrize("ctr_client", [docker_client, podman_client], indirect=True)
```

in as many files as possible. Not having this means there are some weird tests present that don't work with all clients. I'm even open to dividing them into multiple files so that it's easy to track. What do you think?
> I think we should try to have `pytestmark = pytest.mark.parametrize("ctr_client", [docker_client, podman_client], indirect=True)` in as many files as possible. Not having this means there are some weird tests present that don't work with all clients. I'm even open to dividing them into multiple files so that it's easy to track. What do you think?
I'm in favour of marking parameterisation at a per-file level, it feels like one step towards my preference for the default of parameterising and removing some of the verbosity/explicitness. In fact this seems like a nice compromise given it will often be whole subcommands that are only supported by one of docker/podman (buildx, swarm, compose, pod, ...).
As I was stating before, one of my concerns is that someone might add a new test file and miss that they should be parameterising the new tests. However, I think if they were to try using `ctr_client` it would complain about `request.param` not being available if there was no parameterisation?
> Not having this means there are some weird tests present that don't work with all clients.
Could you elaborate on this?
I know a bunch of tests currently don't work with podman for reasons such as a difference in convention in image naming, as well as some subcommands not being supported by podman. At some point I could look into getting more tests working with podman, e.g. the image tests could probably be made more generic (mark as xfail for now).
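One way to mark the podman-only failures while keeping the shared `ctr_client` parameterisation would be `pytest.param(..., marks=...)`, so that only the podman case is xfailed. A sketch (client objects are string stand-ins, and the reason text is illustrative):

```python
import pytest

# Stand-ins for the real client objects.
docker_client = "docker"
podman_client = "podman"

# Per-file override: docker keeps passing, podman is expected to fail
# until the image-naming differences are handled.
image_clients = pytest.mark.parametrize(
    "ctr_client",
    [
        docker_client,
        pytest.param(
            podman_client,
            marks=pytest.mark.xfail(reason="image naming convention differs"),
        ),
    ],
    indirect=True,
)
```

Once the podman behaviour is fixed, the `pytest.param` wrapper is simply dropped and the file goes back to the plain two-element list.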
> I think if they were to try using `ctr_client` it would complain about `request.param` not being available if there was no parameterisation?
Indeed I don't see how it would work.
> Could you elaborate on this?
If one single test can't handle all clients, then it means that we can't use `pytestmark`, thus making it easier to spot missing features.
@LewisGaul I added the xfail as requested. I believe we can merge this PR unless you have other recommendations. When the whole of `test_image.py` has been parametrized, I expect that it should be possible to use `pytestmark`. And later, it should be possible to add `if self.client_config.client_type == "podman": ...` in the right places in the codebase to fix the xfails for podman. Better support for podman coming right up :)
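As a rough illustration of what such a `client_type` branch might look like (the `ClientConfig` class and `full_image_name` function here are simplified stand-ins, and the registry-prefix behaviour is an assumption about how podman's short-name resolution could be papered over, not the actual fix):

```python
class ClientConfig:
    """Simplified stand-in for python-on-whales' client configuration."""

    def __init__(self, client_type: str):
        self.client_type = client_type  # "docker" or "podman"


def full_image_name(client_config: ClientConfig, name: str) -> str:
    # Hypothetical client_type branch: podman resolves short image names
    # differently, so qualify them explicitly; docker is left as-is.
    if client_config.client_type == "podman" and "/" not in name:
        return f"docker.io/library/{name}"
    return name
```

The point is that the runtime-specific quirk lives in one place in the codebase, and the tests themselves stay runtime-agnostic.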
My pleasure!
To select all podman tests: `pytest -m podman`; same for docker. To change the podman executable: `pytest --podman-exe=podman4.2`; same for docker. If podman is not present on the system, all podman tests are SKIPPED. Same with docker. There are no global variables here except the params, but I think it's reasonable as they're only constant strings.
@LewisGaul I know we won't agree on the fact that there are no default parameters in the unit tests, but can you still take a look? Most of the code in this PR comes from your previous PRs on the subject.
The result looks like this: