asekretenko opened this issue 3 years ago
Apparently this flake is rather old, and dates back to some point before the 0.18.0 release.
The earliest flake of this kind that I could extract from Circle CI: https://app.circleci.com/pipelines/github/kudobuilder/kudo/5544/workflows/8743fa9f-aaee-46dd-9b8c-d489a2bcc672/jobs/16762
This doesn't mean there are no older flakes of this kind; viewing them manually is just becoming more and more difficult due to Circle CI limitations.
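As a side note, one way around clicking through the UI is to pull recent failed builds via the CircleCI REST API. A hedged sketch (the v1.1 endpoint shape and the `filter`/`limit` parameters are documented CircleCI API features, but the exact fields worth printing here are my assumption):

```sh
# List recent failed builds for kudobuilder/kudo via the CircleCI v1.1 API
# (no auth token needed for a public project). 'limit' caps a page at 100
# results; use 'offset' to page back further in history.
curl -s "https://circleci.com/api/v1.1/project/github/kudobuilder/kudo?limit=100&filter=failed" \
  | jq -r '.[] | "\(.build_num) \(.workflows.job_name) \(.stop_time)"'
```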
**What happened**: Numerous failures in CI, accompanied by errors like the ones in these jobs:

- https://app.circleci.com/pipelines/github/kudobuilder/kudo/5632/workflows/19c70868-710c-4fda-8d2e-34e04f9cfd8b/jobs/17186
- https://app.circleci.com/pipelines/github/kudobuilder/kudo/5610/workflows/8f2031a0-9b99-4c5b-813b-2afbd775210c/jobs/17084
- https://app.circleci.com/pipelines/github/kudobuilder/kudo/5625/workflows/47210bae-fe4a-4118-96de-cf8f6d93c11b/jobs/17151
The scheduler log looks like this (note: this is the COMPLETE log; nothing happens after acquiring the lease):
**What you expected to happen**: The scheduler to always authenticate correctly, set up watches, and schedule pods; the tests not to fail as a result.
**How to reproduce it (as minimally and precisely as possible)**: No idea yet. I would give a good breakfast to know that.
**Anything else we need to know?**: I would not be surprised if this is in fact a more general kuttl bug (but then why don't other test suites suffer? or maybe they also do?) or an even more general cluster bootstrapping bug.
There is another flake in the upgrade tests (https://github.com/kudobuilder/kudo/issues/1736); most likely they are not related.
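Since the log stops right after the lease is acquired, a quick triage step on an affected cluster might be to check whether the scheduler is still renewing its lock. This is a sketch, assuming the default Lease-based leader election in v1.19 and a kubeadm/kind-style control plane (the `component=kube-scheduler` pod label is an assumption about how the test harness deploys the scheduler):

```sh
# Inspect the kube-scheduler leader-election Lease (coordination.k8s.io/v1).
# holderIdentity and renewTime show who holds the lock and when it was last
# renewed; a stale renewTime would match "nothing happens after acquiring lease".
kubectl -n kube-system get lease kube-scheduler -o yaml

# Pull the scheduler's own log at the moment the suite flakes, to see whether
# watch setup ever started after the lease was acquired.
kubectl -n kube-system logs -l component=kube-scheduler --tail=100
```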
**Environment**:
- Kubernetes version (use `kubectl version`): v1.19.1
- Kudo version (use `kubectl kudo version`): current master (b5c78dd44ebe9f8122e45c232b528090a6a7ce78)
- Operator:
- operatorVersion:
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others: