alexandrem opened this issue 5 years ago
In the context of parallelizing my tests more in the future, I'd like the ability for kubetest to work on different remote Kubernetes clusters during the same pytest run.

At the moment, the `kube` fixture relies on the global kubeconfig discovered by the kubernetes client module. What I would like is the ability to instantiate an `api_client` somewhere and pass it down to the `kube` fixture somehow, so that the fixture would use that client to, for instance, list nodes (roughly as sketched below).
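A rough sketch of the difference, using standard kubernetes Python client calls (`load_kube_config`, `new_client_from_config`); the kubeconfig path is just for illustration, and kubetest's actual internals may differ:

```python
from kubernetes import client, config

# Today: the globally discovered kubeconfig (~/.kube/config or $KUBECONFIG)
# backs every API call.
config.load_kube_config()
client.CoreV1Api().list_node()

# What I'd like: a client built from a specific kubeconfig, passed down so
# that node listing (and everything else) goes through it.
api_client = config.new_client_from_config(config_file='./path/to/other/kubeconfig')
client.CoreV1Api(api_client=api_client).list_node()
```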
That definitely requires more thinking about how to instantiate many clients and keep them available for fixtures.

Right now I have a parent fixture that generates different Kubernetes clients and uses that in a child fixture for the rest of my tests. If I could pass the kubeconfig path to the kubetest client, then I could return that and use a different fixture name later instead of the default `kube`. Or perhaps there's a better way to accomplish that and keep the standardized `kube` name.
Thanks for opening this feature request -- I think this sounds like a pretty sweet idea. Admittedly, I haven't used different API clients in this manner before, so I'll need to read up on it a bit more. Hopefully without sounding too naive about how this works, my initial thought is that this could be done using pytest markers, e.g. something like:
```python
import pytest

def test_case_1(kube):
    """Tests something using the default global kubeconfig."""
    ...

@pytest.mark.kubetest_config('./path/to/other/kubeconfig')
def test_case_2(kube):
    """Tests something using a different kubeconfig."""
    ...
```
Under the hood, what I'm envisioning is that on test setup, this would register the config with the test case metadata here: https://github.com/vapor-ware/kubetest/blob/ac4104e5a1ec4e5549981fe19fa523f1b5dda8e8/kubetest/plugin.py#L233-L244
The general idea is that the test manager would create an ApiClient for each unique value passed to the marker in a test suite and cache it. Any test case with that marker would be configured to use the corresponding ApiClient transparently, so the test could just use the `kube` fixture as it would before.
That's my initial thought, at least. I'm sure there are other ways it could be done. If that proposed usage sounds reasonable to you, I can start working towards its implementation.
Static markers will not do it for my use case, since I'm provisioning different clusters at runtime in `pytest_generate_tests` and passing the cluster attributes to a `cluster` fixture, roughly as sketched below.
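Something along these lines; the cluster names and attributes are invented for illustration:

```python
import pytest

def pytest_generate_tests(metafunc):
    # Describe/provision clusters at collection time (illustrative attributes).
    if 'cluster' in metafunc.fixturenames:
        clusters = [
            {'name': 'cluster-a', 'kubeconfig': './kubeconfig-a'},
            {'name': 'cluster-b', 'kubeconfig': './kubeconfig-b'},
        ]
        metafunc.parametrize(
            'cluster',
            clusters,
            ids=[c['name'] for c in clusters],
            indirect=True,  # route each entry through the cluster fixture
        )

@pytest.fixture(scope='session')
def cluster(request):
    # Provision (or connect to) the cluster described by request.param.
    return request.param
```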
I could probably hack something with `pytest_collection_modifyitems`, though? Rough idea below.
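Very roughly, something like this, reusing the hypothetical `kubetest_config` marker proposed above; `item.callspec` only exists on parametrized items:

```python
import pytest

def pytest_collection_modifyitems(config, items):
    # Tag each parametrized test with the kubeconfig of its cluster, so a
    # marker-aware kubetest could pick the right client (hypothetical wiring).
    for item in items:
        callspec = getattr(item, 'callspec', None)
        if callspec is None:
            continue
        cluster = callspec.params.get('cluster')
        if cluster:
            item.add_marker(pytest.mark.kubetest_config(cluster['kubeconfig']))
```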
Does each of your test cases run in its own runtime-provisioned cluster? Hacking something together with `pytest_collection_modifyitems` seems like it could work, but I don't think I understand the use case well enough to be particularly insightful about what the best implementation could be.
@edaniszewski The test cases receive a `cluster` object which is session-scoped, so I have one runtime-provisioned cluster per suite of dynamically parametrized tests.
After thinking about this a little more, I suppose one thing that could work is to allow a client to be set on the TestClient object (returned via the kubetest fixture), so tests are free to use their own client. This would put the onus on the user to update the test client, but it seems like the simplest way to implement this, from what I can see right now. For example:
```python
import pytest

@pytest.fixture(scope='session')
def custom_api_client():
    # Built elsewhere, e.g. with kubernetes.config.new_client_from_config.
    return generated_api_client

def test_something(kube, custom_api_client):
    # Manually set the custom API client at the start of the test.
    kube.api_client = custom_api_client

    # Continue to use as you would otherwise.
    kube.load_deployment(...)
```
thoughts?
I think this can work just fine.
Check out https://github.com/vapor-ware/kubetest/pull/144 -- I believe that should implement this feature in the most basic way. Let me know what you think!