@stefanprodan Please let me know if you find anything problematic or if anything important is missing here! 🙏
I will have to follow up and investigate:
ci / e2e-testing (13_* 14_*) (pull_request) — Failing after 6m
This failure seems to be consistent, and might have been caused by one of these upgrades.
In 83132d364e62da3d2946ec7d6b695d2b5fa46751 I reverted the upgrade to weaveworks/common, which was apparently a lucky guess – I think I've seen this failure before in a prior attempt to upgrade everything, so maybe just good memory... and it seems this fixes the e2e-testing (13_* 14_*) failures from the first attempt at passing this PR.
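(For anyone who wants to reproduce the revert locally, it amounts to pinning the module back to the last known-good version and re-tidying – just a sketch; the placeholder below stands in for whatever version the revert commit actually pins:)

```sh
# Pin weaveworks/common back to the previous working version
# ("<previous-version>" is a placeholder; see the revert commit for the real one)
go get github.com/weaveworks/common@<previous-version>

# Re-sync go.mod/go.sum with the pinned version
go mod tidy
```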
I've rebased and signed off in ef1cfe7c32; going to run the tests again (and hopefully we'll see that e2e 13 and 14 are consistently passing now, instead of consistently failing).
This looks to be passing (now really ready for review).
Fix #3587
These are only the upgrades which did not seem to break anything when I was testing at kingdonb/flux – I accepted basically every suggestion from Renovate bot, except for client-go, which is blocked from upgrading until no supported K8s release remains in service that can still work with non-v1 CRDs. Flux must support v1.21 and earlier for as long as they are in use, to remain backwards compatible (a quick way to check what a given cluster serves is sketched below).

NB: it is still possible that I have broken something; it is difficult to tell flaky tests from actual failing tests. I will follow up on any failing tests, but I also need help from users to know if there are failures that weren't covered by any tests. (I'm sure you'll tell me!)
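On the client-go / CRD point above: apiextensions.k8s.io/v1beta1 CRDs were only removed in Kubernetes 1.22, so any cluster on v1.21 or earlier may still be serving them. A quick sketch of how to check what a given cluster serves (standard kubectl, nothing Flux-specific):

```sh
# List the apiextensions API versions the cluster serves;
# seeing "apiextensions.k8s.io/v1beta1" means non-v1 CRDs are still supported there
kubectl api-versions | grep apiextensions.k8s.io

# Confirm the cluster version (v1.21 and earlier predate the v1beta1 CRD removal)
kubectl version --short
```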
I am working on a local test scaffold so that I can see whether the tests pass or fail when run against a local instance. Although I have learned it's possible to get an SSH session to debug test failures when they happen at CircleCI, I am not sure that will help; whether the issue is transient or not, I should be able to reproduce it locally just as well.
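For reference, my rough plan for the local scaffold is just a kind cluster plus running the numbered e2e files directly – a sketch only; I'm assuming the failing jobs map to bats files named 13_*/14_* under test/e2e, which I haven't double-checked:

```sh
# Create a throwaway local cluster to run the e2e suite against
kind create cluster --name flux-e2e

# Run only the suspect test files (assumed names, matching the CI job's 13_*/14_* label)
bats test/e2e/13_*.bats test/e2e/14_*.bats

# Tear the cluster down afterwards
kind delete cluster --name flux-e2e
```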