kubernetes/kops

Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
https://kops.sigs.k8s.io/
Apache License 2.0

kops update changes port with an NLB loadbalancer #10798

Closed: mtb-xt closed this issue 3 years ago

mtb-xt commented 3 years ago

1. What kops version are you running? The command kops version will display this information. Version 1.19.0 (git-04d36d7d92c72601efd918877fc180c846129ffb)

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag. v1.19.7

3. What cloud provider are you using? AWS

4. What commands did you run? What is the simplest way to reproduce this issue? kops update cluster --target terraform --out terraform/stage/
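For completeness, a minimal reproduction sketch (assuming the cluster name and state store are already configured via --name / KOPS_STATE_STORE, and that the backup path is arbitrary):

# snapshot the kubeconfig before the update, then compare afterwards
cp ~/.kube/config /tmp/kubeconfig-before
kops update cluster --target terraform --out terraform/stage/
diff -u /tmp/kubeconfig-before ~/.kube/config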

5. What happened after the commands executed? After kops update, the cluster becomes unreachable, because kops changes the port used to talk to the cluster (even though the release notes state that kops will no longer export the kubeconfig file... :facepalm: )

[hawara@phoenix kops]$ colordiff -u /tmp/kubeconfig-before ~/.kube/config
--- /tmp/kubeconfig-before  2021-02-12 14:32:10.009454895 +1300
+++ /home/hawara/.kube/config   2021-02-12 14:32:26.597961168 +1300
@@ -11,7 +11,7 @@
 - cluster:
     certificate-authority-data: [TOPSECRETLOL]
-    server: https://api.k8s.stage.ololo.co:8443
+    server: https://api.k8s.stage.ololo.co
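A quick way to inspect just the affected field, if kubectl is available, is to print the active context's server URL (a sketch; the output values below are taken from the diff above):

# prints https://api.k8s.stage.ololo.co:8443 before the update
# and https://api.k8s.stage.ololo.co (implicit :443) after it
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'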

6. What did you expect to happen? For kops to not touch the entry. This problem can be fixed by running kops export kubecfg --admin, or by running the update command with the --create-kube-config=false flag: kops update cluster --target terraform --out terraform/stage/ --create-kube-config=false
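Spelled out as commands, the two workarounds mentioned above look roughly like this (a sketch, assuming the default kubeconfig location):

# option 1: re-export an admin kubeconfig after the update has run
kops export kubecfg --admin

# option 2: tell kops update not to touch the kubeconfig at all
kops update cluster --target terraform --out terraform/stage/ --create-kube-config=false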

7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2019-07-01T22:39:27Z"
  generation: 26
  name: k8s.stage.ololo.lolo
spec:
  addons:
  - manifest: kube-state-metrics
  api:
    loadBalancer:
      class: Network
      crossZoneLoadBalancing: true
      idleTimeoutSeconds: 1300
      sslCertificate: [TOPSECRET]
      type: Internal
  authentication:
    aws: {}
  authorization:
    rbac: {}
  channel: stable
  cloudConfig:
    openstack:
      blockStorage:
        createStorageClass: false
  cloudLabels:
    k8s.io/cluster-autoscaler/k8s.stage.ololo.lolo: k8s.stage.ololo.lolo
  cloudProvider: aws
  clusterAutoscaler:
    balanceSimilarNodeGroups: true
    enabled: true
    image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.19.1
    skipNodesWithLocalStorage: false
    skipNodesWithSystemPods: false
  configBase: s3://ololo-kops-state-stage/k8s.stage.ololo.lolo
  dnsZone: stage.ololo.lolo.
  etcdClusters:
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-ap-southeast-2a
      name: a
    - encryptedVolume: true
      instanceGroup: master-ap-southeast-2b
      name: b
    - encryptedVolume: true
      instanceGroup: master-ap-southeast-2c
      name: c
    name: main
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-ap-southeast-2a
      name: a
    - encryptedVolume: true
      instanceGroup: master-ap-southeast-2b
      name: b
    - encryptedVolume: true
      instanceGroup: master-ap-southeast-2c
      name: c
    name: events
  fileAssets:
  - content: |
      -----BEGIN PUBLIC KEY-----
      [TOPSECRET]
      -----END PUBLIC KEY-----
    name: sa-signer-pkcs8.pub
    path: /etc/kubernetes/pki/kube-apiserver/sa-signer-pkcs8.pub
    roles:
    - Master
  - content: |
      -----BEGIN RSA PRIVATE KEY-----
      [TOPSECRET]
      -----END RSA PRIVATE KEY-----
    name: sa-signer.key
    path: /etc/kubernetes/pki/kube-apiserver/sa-signer.key
    roles:
    - Master
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    apiAudiences:
    - sts.amazonaws.com
    disableBasicAuth: false
    enableAdmissionPlugins:
    - NamespaceLifecycle
    - LimitRanger
    - ServiceAccount
    - DefaultStorageClass
    - DefaultTolerationSeconds
    - MutatingAdmissionWebhook
    - ValidatingAdmissionWebhook
    - NodeRestriction
    - ResourceQuota
    - PodNodeSelector
    - PodTolerationRestriction
    serviceAccountIssuer: https://s3-ap-southeast-2.amazonaws.com/[TOPSECRET]
    serviceAccountKeyFile:
    - /etc/kubernetes/pki/kube-apiserver/sa-signer-pkcs8.pub
    - /srv/kubernetes/server.key
    serviceAccountSigningKeyFile: /etc/kubernetes/pki/kube-apiserver/sa-signer.key
  kubeDNS:
    provider: CoreDNS
  kubeProxy:
    metricsBindAddress: 0.0.0.0
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
    imagePullProgressDeadline: 30m0s
    maxPods: 300
  kubernetesApiAccess:
  - 10.0.0.0/8
  kubernetesVersion: 1.19.7
  masterInternalName: api.internal.k8s.stage.ololo.lolo
  masterPublicName: api.k8s.stage.ololo.lolo
  metricsServer:
    enabled: true
  networkCIDR: 10.[TOPSECRET].0.0/16
  networkID: vpc-[TOPSECRET]
  networking:
    flannel:
      backend: vxlan
  nodeTerminationHandler:
    enabled: true
  nonMasqueradeCIDR: 100.64.0.0/10
  rollingUpdate:
    maxSurge: 6
    maxUnavailable: 2
  sshAccess:
  - 10.0.0.0/8
  subnets:
  - cidr: 10.70.0.0/24
    id: subnet-[TOPSECRET]
    name: ap-southeast-2a
    type: Private
    zone: ap-southeast-2a
  - cidr: 10.70.10.0/24
    id: subnet-[TOPSECRET]
    name: ap-southeast-2b
    type: Private
    zone: ap-southeast-2b
  - cidr: 10.70.20.0/24
    id: subnet-[TOPSECRET]
    name: ap-southeast-2c
    type: Private
    zone: ap-southeast-2c
  - cidr: 10.70.100.0/24
    id: subnet-[TOPSECRET]
    name: utility-ap-southeast-2a
    type: Utility
    zone: ap-southeast-2a
  - cidr: 10.70.110.0/24
    id: subnet-[TOPSECRET]
    name: utility-ap-southeast-2b
    type: Utility
    zone: ap-southeast-2b
  - cidr: 10.70.120.0/24
    id: subnet-[TOPSECRET]
    name: utility-ap-southeast-2c
    type: Utility
    zone: ap-southeast-2c
  topology:
    dns:
      type: Public
    masters: private
    nodes: private
fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

mtb-xt commented 3 years ago

Well, unless someone fixed this and I didn't notice...

/remove-lifecycle stale

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

mtb-xt commented 3 years ago

/close

k8s-ci-robot commented 3 years ago

@mtb-xt: Closing this issue.

In response to [this](https://github.com/kubernetes/kops/issues/10798#issuecomment-925665496):

> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.