Closed: mtb-xt closed this issue 3 years ago.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
Well, unless someone fixed this and I didn't notice...
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/close
@mtb-xt: Closing this issue.
1. What kops version are you running? The command kops version will display this information.
Version 1.19.0 (git-04d36d7d92c72601efd918877fc180c846129ffb)

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
v1.19.7

3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kops update cluster --target terraform --out terraform/stage/

5. What happened after the commands executed?
After kops update, the cluster becomes unreachable, because kops changes the port used to talk to the cluster (even though the release note states that kops will no longer export the kubeconfig file... :facepalm:)
6. What did you expect to happen?
For kops to not touch the kubeconfig entry. This problem can be fixed by running kops export kubecfg --admin, or by running the update command with the --create-kube-config=false flag:
kops update cluster --target terraform --out terraform/stage/ --create-kube-config=false
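The two workarounds above can be sketched as a small script. This is a sketch under assumptions, not a tested recipe: the cluster name my.example.com is hypothetical, and it presumes KOPS_STATE_STORE is already configured; the guard keeps the script harmless on machines without kops.

```shell
# Only attempt the real commands when kops and a state store are available.
if command -v kops >/dev/null 2>&1 && [ -n "${KOPS_STATE_STORE:-}" ]; then
  # Workaround 1: run the update without (re)writing the local kubeconfig.
  kops update cluster --name my.example.com \
    --target terraform --out terraform/stage/ \
    --create-kube-config=false

  # Workaround 2: if the kubeconfig entry was already rewritten,
  # re-export an admin kubeconfig to regain access to the cluster.
  kops export kubecfg --name my.example.com --admin
else
  echo "kops or KOPS_STATE_STORE not available; commands shown only"
fi
```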
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml
to display your cluster manifest. You may want to remove your cluster name and other sensitive information.