dhouhamaa opened this issue 1 year ago
I can confirm this error as well. Even the example from this page (https://kuttl.dev/docs/kuttl-test-harness.html#run-the-tests) results in an error in a live cluster:
```
logger.go:42: 09:43:07 | example-test | Deleting namespace: kuttl-test-grown-chow
case.go:114: timed out waiting for the condition
=== CONT kuttl
harness.go:405: run tests finished
harness.go:513: cleaning up
harness.go:570: removing temp folder: ""
--- FAIL: kuttl (541.82s)
--- FAIL: kuttl/harness (0.00s)
--- FAIL: kuttl/harness/example-test (527.91s)
FAIL
```
I have the same problem, but observed that the test namespace generated by kuttl takes some time to delete. In my case this is due to finalizers being run to clean up the resources deployed during my tests. When I remove the resources with a TestStep using the `delete` property and let it sleep long enough for the finalizers to finish before kuttl deletes the namespace, my tests succeed.
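For anyone hitting the same thing, here is a minimal sketch of such a cleanup step, assuming kuttl processes the `delete` list before the step's `commands`; the Deployment reference and the sleep duration are just placeholders for whatever your finalizers need:

```yaml
apiVersion: kuttl.dev/v1beta1
kind: TestStep
delete:
# hypothetical example: list whatever resources your earlier steps created
- apiVersion: apps/v1
  kind: Deployment
  name: my-app
commands:
# crude buffer so the finalizers can finish before kuttl deletes the namespace
- command: sleep 60
```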
I suppose the `Deleting namespace: kuttl-test-dashing-buffalo` step performed by kuttl fails because it hits a narrow timeout and then logs `case.go:114: timed out waiting for the condition`. I am not a Go developer, but https://github.com/kudobuilder/kuttl/blob/7e783766c9b15837934f8f98137140cf87929f2c/pkg/test/case.go#L115C53-L115C53 seems to match what I think happens here. Adding `--skip-delete` does indeed let the test pass, but then I have to clean up myself.
I would suggest that a configurable timeout for namespace deletion could help here, perhaps as an optional property on the kuttl TestSuite and as a command-line flag.
According to this line, the timeout value from the TestSuite configuration is also used for the cleanup process. You can have a look at the docs here.
I can confirm that this works: I had the exact same issue, and creating a `kuttl-tests.yaml` and specifying the `timeout` value solved it. Note: don't forget to reference your `kuttl-tests.yaml` file with the flag `--config kuttl-tests.yaml` in your command.
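For anyone else landing here, a minimal `kuttl-tests.yaml` along these lines should do it; the test directory and the 600-second timeout are placeholders to adapt:

```yaml
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
testDirs:
# hypothetical location of your test cases
- ./tests/e2e/
# generous per-step timeout in seconds; per the comment above, this value
# is also applied to the namespace cleanup
timeout: 600
```

Then run it with `kubectl kuttl test --config kuttl-tests.yaml`.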
What happened: I use kuttl to validate the creation of certain k8s custom resources. The first 3 test cases run in the host cluster and succeed. In the last TestStep I generate another kubeconfig for a vcluster, where I want to run the last TestAssert using `--kubeconfig`. Running the tests, I see all of them succeed, and for each of them I see "test step completed", but the end result of kuttl is:

```
case.go:114: timed out waiting for the condition
=== CONT kuttl
harness.go:405: run tests finished
harness.go:513: cleaning up
harness.go:570: removing temp folder: ""
--- FAIL: kuttl (544.85s)
--- FAIL: kuttl/harness (0.00s)
--- FAIL: kuttl/harness/create (537.77s)
FAIL
```
What you expected to happen: As long as all conditions are met during the asserts and all tests complete without failing, I expect the end result of the test to be "Pass".

How to reproduce it (as minimally and precisely as possible): Create another kubeconfig during one of the test steps and run the assert using `--kubeconfig` (see the sketch below).

Anything else we need to know?: The result becomes "Pass" if I omit the last step, which uses another kubeconfig.
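To make the reproduction concrete, the failing step looks roughly like this sketch; the kubeconfig path and the `kubectl` check are hypothetical stand-ins for my actual assert against the vcluster:

```yaml
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# ./vcluster.kubeconfig is a placeholder for the kubeconfig generated
# by an earlier step; the get is a stand-in for the real assertion
- command: kubectl --kubeconfig ./vcluster.kubeconfig get pods
```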
Environment:

- Kubernetes version (use `kubectl version`):
- KUTTL version (use `kubectl kuttl version`):
- OS (use `uname -a`):