Closed nawarnoori closed 2 years ago
Having the same issue. How do you downgrade to an older version?
@qglover @nawarnoori hello! Have you tried recommendations from #1619? Thanks!
I've got the same issue. Following the things in #1619, it looks like it's the library upgrade that caused the issues. Rolling k9s back to 0.25.18 works for me.
Edit: Are we likely to see an update that fixes EKS support? Seems like a bad idea to expect every EKS user to either downgrade or work around this issue.
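For anyone else rolling back: a minimal sketch of fetching the 0.25.18 binary from the project's GitHub releases page. The asset naming (`k9s_<OS>_<arch>.tar.gz`) is assumed from the release page; adjust OS/ARCH for your platform.

```shell
# Sketch: pinning k9s to the last known-good release (v0.25.18),
# assuming release assets are named k9s_<OS>_<arch>.tar.gz.
K9S_VERSION="v0.25.18"
OS="Linux"      # or Darwin on macOS
ARCH="x86_64"   # or arm64
URL="https://github.com/derailed/k9s/releases/download/${K9S_VERSION}/k9s_${OS}_${ARCH}.tar.gz"
# Print the download command rather than running it here:
echo "curl -sL ${URL} | tar -xz k9s"
```

Remember to also pin or exclude k9s in your package manager so it isn't silently upgraded again.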
Grab it from here for your OS.
Okay, that seems to have done it, thank you for the suggestion.
But I am not entirely sure of the implications of upgrading the AWS CLI and also updating my k8s config, so I echo what techdragon says about supporting users for whom this worked before.
Do feel free to close if the official advice is to upgrade aws for newer versions of k9s, though a diagnostic would be helpful here. Perhaps even revert it, since going from 0.25.18 to 0.25.21 suggests a patch rather than a breaking change.
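For context on the AWS CLI / kubeconfig changes mentioned above: #1619 points at the exec-plugin credential API. A sketch of the relevant kubeconfig stanza, assuming the fix is the `apiVersion` bump from `v1alpha1` to `v1beta1` (the user and cluster names here are placeholders; newer AWS CLI versions write this stanza via `aws eks update-kubeconfig`):

```yaml
users:
- name: prod    # placeholder user entry
  user:
    exec:
      # older AWS CLI versions wrote client.authentication.k8s.io/v1alpha1,
      # which newer client-go (and hence newer k9s) no longer accepts
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args: ["eks", "get-token", "--cluster-name", "prod"]
```

If editing by hand feels risky, regenerating the entry with `aws eks update-kubeconfig --name <cluster>` after upgrading the AWS CLI should produce the same result.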
Describe the bug
Unable to connect to my production cluster using the latest version of k9s (0.25.21). We're using EKS; version says:

To Reproduce
Steps to reproduce the behavior: run k9s --context prod to connect to our prod cluster.

Expected behavior
I should be able to connect to my prod cluster and see all its pods.
Versions (please complete the following information):
Additional context I downgraded to 0.25.18 which is the latest version that does not exhibit this issue.
Running k9s with logs (k9s -l debug --context prod) I get:

In contrast, 0.25.18's logs (the latest version that works for me) look something like: