Closed scottillogical closed 2 years ago
This looks like a duplicate of #2073.
For upgrading, have you tried using the Helm chart approach to install telepresence? With it, the version in the cluster stays the same regardless of which client versions are used.
@thallgren re: upgrades, I'll consider that. I shouldn't have mentioned the upgrade issues as they are tangential.
re: duplicate, feel free to close if that makes sense to you. This context issue sounds like an annoying symptom of the root issue, but it can be worked around by always specifying the context with an alias: `alias tp="telepresence --context=my-k8s-cluster"`
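For what it's worth, the same workaround can be written as a shell function, which (unlike an alias) also expands inside scripts. This is just a sketch; "my-k8s-cluster" is the example context name from the alias, substitute your own:

```shell
# Wrap telepresence so every invocation pins the context explicitly.
# "my-k8s-cluster" is an example context name, not from this issue.
tp() {
  telepresence --context=my-k8s-cluster "$@"
}
```

Then `tp connect`, `tp list`, etc. all hit the same context regardless of what `kubectl config current-context` says.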
Closing as duplicate of #2073
This bug concerns how telepresence handles reading the kubectl context
Currently our developers use the default kubectl context with telepresence, i.e. they set a global kubectl context (kubectl config current-context) and run telepresence without specifying a context. We are attempting to move away from a default context and to require an explicit context for everything via --context. However, when we made this change, adding --context to all developers' telepresence commands resulted in telepresence reporting:
"telepresence: error: connector.Connect: Cluster configuration changed, please quit telepresence and reconnect"
This effectively broke our dev workflow and we had to revert to using a kubectl default context.
Steps to reproduce
See logs for details (12 errors found): "/Users/scottschulthess/Library/Logs/telepresence/daemon.log"

Result: error

Expected behavior: if you specify --context to be the same context you are already connected to via telepresence connect, it should be a no-op.
Currently I have only tested this on 2.4.4 (upgrading for us is complicated because of how telepresence handles multiple versions), but I didn't see anything in the changelog regarding explicit context handling.