kubermatic / kubermatic

Kubermatic Kubernetes Platform - the Central Kubernetes Management Platform For Any Infrastructure
https://www.kubermatic.com

Cluster Template wrong Cloud Provider Tags set #11497

Closed toschneck closed 1 year ago

toschneck commented 1 year ago

What happened?

After creating multiple instances of a cluster template via the KKP Dashboard, each cluster gets ALL tags of all clusters, so luckily on AWS we ran into this problem:

           status code: 400, request id: 04b21908-9b6c-4847-b69e-88fd664714c3
  Warning  ReconcilingError  5m37s  kkp-cloud-controller  failed cloud provider init: failed to tag resources (one of securityGroup (sg-007c80f7196cfa3dd), routeTable (rtb-0aabe47c83e955022) and/or subnets ([subnet-0541a13d0cb8870d7 subnet-013a6f4bc950be649 subnet-01799929588f00df3])): TagLimitExceeded: the TagSet: '{ Name=kubermatic-run-az-b, env=kubermatic-run, kubernetes.io/cluster/29w85lg8kr=, kubernetes.io/cluster/2tq8b94zjs=, kubernetes.io/cluster/4pplgqzgfh=, kubernetes.io/cluster/4wnvmt9zqx=, kubernetes.io/cluster/4z8glch7jq=, kubernetes.io/cluster/5flwcc7kb4=, kubernetes.io/cluster/5kslkj8jrj=, kubernetes.io/cluster/65mgrfdt74=, kubernetes.io/cluster/69wkwxnmhm=, kubernetes.io/cluster/7bqmqkmwjq=, kubernetes.io/cluster/7dk5zzsrf5=, kubernetes.io/cluster/7lx69slv5m=, kubernetes.io/cluster/8kh4d8cvn2=, kubernetes.io/cluster/b7x2kpwfzk=, kubernetes.io/cluster/bj65gkc77r=, kubernetes.io/cluster/c9v87mzc4x=, kubernetes.io/cluster/df9fl7pbx5=, kubernetes.io/cluster/dnxt2zf92h=, kubernetes.io/cluster/drc4n4fkn6=, kubernetes.io/cluster/f5vqdfxhxw=, kubernetes.io/cluster/fqdqkqkwtz=, kubernetes.io/cluster/gprrsflqsw=, kubernetes.io/cluster/gvll7vc89k=, kubernetes.io/cluster/gwblmxrzmf=, kubernetes.io/cluster/gwnxfmnsrq=, kubernetes.io/cluster/hvxbbjtg6n=, kubernetes.io/cluster/jmh77j69p2=, kubernetes.io/cluster/k5l462hf27=, kubernetes.io/cluster/kdwqnmgq72=, kubernetes.io/cluster/l8ffjcvk8x=, kubernetes.io/cluster/ltjx6v5qk2=, kubernetes.io/cluster/np6v9mfgpw=, kubernetes.io/cluster/onb8jdfb7t=, kubernetes.io/cluster/oqvg9bf7gx=, kubernetes.io/cluster/ovfmk5dw7b=, kubernetes.io/cluster/pfzvv5pxlp=, kubernetes.io/cluster/q5pl45z9wf=, kubernetes.io/cluster/qtrj4bnqdx=, kubernetes.io/cluster/r6tjv2lbft=, kubernetes.io/cluster/r9s9qjvshd=, kubernetes.io/cluster/rvcmglwjj4=, kubernetes.io/cluster/sc88qknsbr=, kubernetes.io/cluster/sjjtqrr777=, kubernetes.io/cluster/tdzhz7svps=, kubernetes.io/cluster/x4g7gvqrls=, kubernetes.io/cluster/x99l6vxbkw=, kubernetes.io/cluster/xlprwpr5m5=, kubernetes.io/cluster/z48h9jqczb=, kubernetes.io/cluster/z5gn6t7x8p= }' contains too many Tags
           status code: 400, request id: fc1f480d-d474-463e-8a89-e26558bc7bd1
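As a side note (not part of the original report), here is a minimal sketch of how the accumulated ownership tags on the affected security group could be confirmed, assuming aws-sdk-go v1 and AWS credentials/region from the environment; the security group ID is copied from the event above:

```go
// Hedged sketch: count the kubernetes.io/cluster/* ownership tags on the
// security group named in the ReconcilingError event. AWS allows at most
// 50 tags per resource, hence the TagLimitExceeded error.
package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("eu-central-1")}))
	svc := ec2.New(sess)

	out, err := svc.DescribeTags(&ec2.DescribeTagsInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("resource-id"), Values: []*string{aws.String("sg-007c80f7196cfa3dd")}},
		},
	})
	if err != nil {
		panic(err)
	}

	count := 0
	for _, t := range out.Tags {
		if strings.HasPrefix(aws.StringValue(t.Key), "kubernetes.io/cluster/") {
			fmt.Println(aws.StringValue(t.Key))
			count++
		}
	}
	fmt.Printf("%d kubernetes.io/cluster/* tags on the security group\n", count)
}
```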

The cluster object itself doesn't contain the tags :-/

kget-exp cluster
address:
  adminToken: ""
  externalName: ""
  internalURL: ""
  ip: ""
  port: 0
  url: ""
apiVersion: kubermatic.k8c.io/v1
kind: Cluster
metadata:
  annotations:
    kubermatic.io/aws-region: eu-central-1
    kubermatic.io/initial-machinedeployment-request: '{"name":"sig-aws-small-node","creationTimestamp":"0001-01-01T00:00:00Z","spec":{"replicas":2,"template":{"cloud":{"aws":{"instanceType":"t3a.medium","diskSize":25,"volumeType":"standard","ami":"","tags":null,"availabilityZone":"eu-central-1b","subnetID":"subnet-0541a13d0cb8870d7","assignPublicIP":true,"isSpotInstance":true,"spotInstanceMaxPrice":"","spotInstancePersistentRequest":false}},"operatingSystem":{"ubuntu":{"distUpgradeOnBoot":false}},"versions":{"kubelet":""}}},"status":{}}'
    kubermatic.k8c.io/migrated-aws-node-termination-handler-addon: "yes"
    presetName: kubermatic-com
    user: tobias.schneck@kubermatic.com
  labels:
    is-credential-preset: "true"
    project-id: f5pqp4df82
    purpose: scale-testing
    template-instance-id: f5pqp4df82-sig-aws-small-cluster-fb8dh25f98
    type: demo
  name: c9v87mzc4x
spec:
  cloud:
    aws:
      credentialsReference:
        name: credential-aws-c9v87mzc4x
        namespace: kubermatic
      instanceProfileName: kubernetes-c9v87mzc4x
      nodePortsAllowedIPRanges:
        cidrBlocks:
        - 0.0.0.0/0
      roleARN: kubernetes-c9v87mzc4x-control-plane
      routeTableID: rtb-0aabe47c83e955022
      securityGroupID: sg-007c80f7196cfa3dd
      vpcID: vpc-01c49dc069e4b0131
    dc: aws-eu-central-1a
    providerName: aws
  clusterNetwork:
    dnsDomain: cluster.local
    ipFamily: IPv4
    ipvs:
      strictArp: true
    nodeCidrMaskSizeIPv4: 24
    nodeLocalDNSCacheEnabled: true
    pods:
      cidrBlocks:
      - 172.25.0.0/16
    proxyMode: ipvs
    services:
      cidrBlocks:
      - 10.240.16.0/20
  cniPlugin:
    type: canal
    version: v3.23
  componentsOverride:
    apiserver:
      nodePortRange: 30000-32767
      replicas: 2
    controllerManager:
      replicas: 1
    etcd:
      clusterSize: 3
      diskSize: 5Gi
    scheduler:
      replicas: 1
  containerRuntime: containerd
  enableOperatingSystemManager: true
  enableUserSSHKeyAgent: true
  exposeStrategy: NodePort
  features:
    apiserverNetworkPolicy: true
  humanReadableName: sig-aws-small-cluster-c9v87mzc4x
  kubernetesDashboard:
    enabled: true
  opaIntegration:
    enabled: true
    webhookTimeoutSeconds: 10
  pause: false
  version: 1.23.14

As tags in AWS are also used for LoadBalancer assignment, this could lead to weird behaviour in the cloud controller manager and so on (see the sketch below).
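To illustrate the concern (a conceptual sketch under assumptions, not the cloud controller manager's actual code): LoadBalancer-related resource discovery is keyed on the per-cluster ownership tag, so once every subnet carries every cluster's `kubernetes.io/cluster/<id>` tag, a filter like the one below can no longer tell the clusters apart:

```go
// Conceptual sketch only: subnet discovery filtered by the per-cluster
// ownership tag. With the duplicated TagSet shown above, any of the ~50
// cluster IDs would match the same subnets.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("eu-central-1")}))
	svc := ec2.New(sess)

	// Ownership tag of the cluster from the report above.
	clusterTag := "kubernetes.io/cluster/c9v87mzc4x"

	out, err := svc.DescribeSubnets(&ec2.DescribeSubnetsInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("tag-key"), Values: []*string{aws.String(clusterTag)}},
		},
	})
	if err != nil {
		panic(err)
	}
	for _, s := range out.Subnets {
		fmt.Println(aws.StringValue(s.SubnetId))
	}
}
```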

Expected behavior

Cluster creation should work the same way as creating a single instance from the UI: no multiple tags get assigned.

How to reproduce the issue?

Create 15 instances from an AWS cluster template.

How is your environment configured?

Provide your KKP manifest here (if applicable)

```yaml
# paste manifest here
apiVersion: kubermatic.k8c.io/v1
clusterLabels:
  is-credential-preset: "true"
  project-id: dl6kxzf8br
  type: demo
credential: credential-aws-sig-cluster-template
kind: ClusterTemplate
metadata:
  annotations:
    kubermatic.io/initial-machinedeployment-request: '{"name":"sig-aws-small-node","creationTimestamp":"0001-01-01T00:00:00Z","spec":{"replicas":2,"template":{"cloud":{"aws":{"instanceType":"t3a.medium","diskSize":25,"volumeType":"standard","ami":"","tags":null,"availabilityZone":"eu-central-1b","subnetID":"subnet-0541a13d0cb8870d7","assignPublicIP":true,"isSpotInstance":true,"spotInstanceMaxPrice":"","spotInstancePersistentRequest":false}},"operatingSystem":{"ubuntu":{"distUpgradeOnBoot":false}},"versions":{"kubelet":""}}},"status":{}}'
    presetName: kubermatic-com
  labels:
    is-credential-preset: "true"
    name: sig-aws-small-cluster
    project-id: dl6kxzf8br
    scope: global
  name: sig-aws-small-cluster
spec:
  cloud:
    aws:
      credentialsReference:
        name: credential-aws-sig-cluster-template
        namespace: kubermatic
      instanceProfileName: ""
      nodePortsAllowedIPRanges:
        cidrBlocks:
        - 0.0.0.0/0
      roleARN: ""
      routeTableID: ""
      securityGroupID: ""
      vpcID: vpc-01c49dc069e4b0131
    dc: aws-eu-central-1a
    providerName: aws
  clusterNetwork:
    dnsDomain: cluster.local
    ipFamily: IPv4
    ipvs:
      strictArp: true
    nodeCidrMaskSizeIPv4: 24
    nodeLocalDNSCacheEnabled: true
    pods:
      cidrBlocks:
      - 172.25.0.0/16
    proxyMode: ipvs
    services:
      cidrBlocks:
      - 10.240.16.0/20
  cniPlugin:
    type: canal
    version: v3.23
  containerRuntime: containerd
  enableOperatingSystemManager: true
  enableUserSSHKeyAgent: true
  exposeStrategy: NodePort
  humanReadableName: sig-aws-small-cluster
  kubernetesDashboard:
    enabled: true
  mla:
    loggingEnabled: false
    monitoringEnabled: false
  opaIntegration:
    enabled: true
    webhookTimeoutSeconds: 10
  pause: false
  version: 1.23.14
```

What cloud provider are you running on?

Master on GCP, Cluster on AWS

What operating system are you running in your user cluster?

doesn't matter

Additional information

Executed at run-2.lab.kubermatic.io

kubermatic-bot commented 1 year ago

Issues go stale after 90d of inactivity. After a further 30 days, they will turn rotten. Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubermatic-bot commented 1 year ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubermatic-bot commented 1 year ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kubermatic-bot commented 1 year ago

@kubermatic-bot: Closing this issue.

In response to [this](https://github.com/kubermatic/kubermatic/issues/11497#issuecomment-1529126109):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.