kube-hetzner / terraform-hcloud-kube-hetzner

Optimized and Maintenance-free Kubernetes on Hetzner Cloud in one command!
MIT License
2.39k stars · 368 forks

Placement group contains already 10 servers #1157

Closed CroutonDigital closed 9 months ago

CroutonDigital commented 10 months ago

Description

My k8s cluster has 10 nodes.

I want to add 2 additional nodes. When I run apply, I get this error:

│ Error: placement group 211529 contains already 10 servers (service_error)
│ 
│   with module.kube-hetzner.module.agents["1-4-agent-large"].hcloud_server.server,
│   on .terraform/modules/kube-hetzner/modules/host/main.tf line 22, in resource "hcloud_server" "server":
│   22: resource "hcloud_server" "server" {
│ 

I tried enabling placement_group_disable = true:

  # module.kube-hetzner.module.control_planes["1-0-control-plane-nbg1"].hcloud_server.server will be updated in-place
  ~ resource "hcloud_server" "server" {
        id                         = "35985276"
        name                       = "h-k3s-test-control-plane-nbg1-gdj"
      - placement_group_id         = 192473 -> null
        # (18 unchanged attributes hidden)
    }

Plan: 0 to add, 8 to change, 0 to destroy.

but after apply, the placement group was not removed from the existing nodes, and the new VM was added without any placement group.

Maybe we need a parameter to create a placement group for each VM group, e.g. for a nodepool block like this (a hypothetical sketch of such a parameter follows the block below):

    {
      name        = "agent-large",
      server_type = "ccx23",
      location    = "fsn1",
      labels      = [
        "nodetype=core-ccx23"
      ],
      taints      = [],
      count       = 4

    },
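For illustration only, such a parameter might look like this; the placement_group attribute below is hypothetical and not an existing module variable:

    {
      name            = "agent-large",
      server_type     = "ccx23",
      location        = "fsn1",
      labels          = [
        "nodetype=core-ccx23"
      ],
      taints          = [],
      count           = 4,
      placement_group = "agent-large" # hypothetical: one spread group per nodepool
    },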

Kube.tf file

module "kube-hetzner" {
  providers = {
    hcloud = hcloud
  }
  hcloud_token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
  source = "kube-hetzner/kube-hetzner/hcloud"
  version = "2.11.4"
  ssh_port = 2222
  ssh_public_key = file("${path.module}/ssh/k8s-hetzner.pub")
  ssh_private_key = file("${path.module}/ssh/k8s-hetzner")

  network_region = "eu-central" # change to `us-east` if location is ash
  network_ipv4_cidr = "10.0.0.0/8"
  cluster_ipv4_cidr = "10.42.0.0/16"

  control_plane_nodepools = [
    {
      name        = "control-plane-fsn1",
      server_type = "cpx21",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 1

    },
    {
      name        = "control-plane-nbg1",
      server_type = "cpx21",
      location    = "nbg1",
      labels      = [],
      taints      = [],
      count       = 1

    },
  ]

  agent_nodepools = [
    {
      name        = "agent-small",
      server_type = "cpx11",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 0

    },
    {
      name        = "agent-large",
      server_type = "ccx23",
      location    = "fsn1",
      labels      = [
        "nodetype=core-ccx23"
      ],
      taints      = [],
      count       = 4

    },
    {
      name        = "bots-large",
      server_type = "ccx23",
      location    = "fsn1",
      labels      = [
        "nodetype=bots-node"
      ],
      taints      = [],
      count       = 6
    },
    {
      name        = "agent-xsize",
      server_type = "ccx43",
      location    = "fsn1",
      labels      = [
        "nodetype=core-ccx43"
      ],
      taints      = [],
      count       = 0

    },
    {
      name        = "storage",
      server_type = "ccx23",
      location    = "fsn1",
      labels      = [
        "node.kubernetes.io/server-usage=storage"
      ],
      taints      = [],
      count       = 0

    },
    {
      name        = "egress",
      server_type = "cpx11",
      location    = "fsn1",
      labels = [
        "node.kubernetes.io/role=egress"
      ],
      taints = [
        "node.kubernetes.io/role=egress:NoSchedule"
      ],
      floating_ip = true
      count = 0
    },
    {
      name        = "agent-arm-small",
      server_type = "cax11",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 0
    }
  ]

  load_balancer_type     = "lb11"
  load_balancer_location = "fsn1"

   autoscaler_nodepools = [
     {
       name        = "autoscaled-small"
       server_type = "ccx23"
       location    = "fsn1"
       min_nodes   = 0
       max_nodes   = 0
     },
     {
       name        = "autoscaled-large"
       server_type = "ccx23"
       location    = "fsn1"
       labels      = {
         nodetype: "bots-node"
       }
       min_nodes   = 0
       max_nodes   = 6
     }
   ]

   ingress_controller = "traefik"
   traefik_additional_options = ["--log.level=DEBUG"]
  initial_k3s_channel = "stable"
  cluster_name = "h-k3s-test"

  k3s_registries = <<-EOT
    mirrors:
      eu.gcr.io:
        endpoint:
          - "https://eu.gcr.io"
    configs:
      eu.gcr.io:
        auth:
          username: _json_key
          password: '{
  "type": "service_account",
  "project_id": "asset-management-ci-cd",
  "private_key_id": "a4ccbc8eddbaea86d207ca85bc6482a288035c6d",
  "private_key": "-----BEGIN PRIVATE KEY-----
****
\n-----END PRIVATE KEY-----\n",
  "client_email": "image-puller@asset-management-ci-cd.iam.gserviceaccount.com",
  "client_id": "****",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/image-puller%40asset-management-ci-cd.iam.gserviceaccount.com",
  "universe_domain": "googleapis.com"
}'
  EOT

  restrict_outbound_traffic = true

   extra_firewall_rules = [
     {
       description = "Allow out tcp"
       direction       = "out"
       protocol        = "tcp"
       port            = "any"
       source_ips      = [] # Won't be used for this rule
       destination_ips = ["0.0.0.0/0", "::/0"]
     },
     {
       description = "Allow out udp"
       direction       = "out"
       protocol        = "udp"
       port            = "any"
       source_ips      = [] # Won't be used for this rule
       destination_ips = ["0.0.0.0/0", "::/0"]
     }
   ]

   cni_plugin = "cilium"
   placement_group_disable = true
   enable_cert_manager = false
   dns_servers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]

  cilium_values = <<EOT
ipam:
  mode: kubernetes
k8s:
  requireIPv4PodCIDR: true
kubeProxyReplacement: true
routingMode: native
ipv4NativeRoutingCIDR: "10.0.0.0/8"
endpointRoutes:
  enabled: true
loadBalancer:
  acceleration: native
bpf:
  masquerade: true
socketLB:
  hostNamespaceOnly: true
egressGateway:
  enabled: true
MTU: 1450
EOT

  traefik_values = <<EOT
deployment:
  replicas: 1
globalArguments: []
service:
  enabled: true
  type: LoadBalancer
  annotations:
    "load-balancer.hetzner.cloud/name": "h-k3s-test"
    "load-balancer.hetzner.cloud/use-private-ip": "true"
    "load-balancer.hetzner.cloud/disable-private-ingress": "true"
    "load-balancer.hetzner.cloud/location": "nbg1"
    "load-balancer.hetzner.cloud/type": "lb11"
    "load-balancer.hetzner.cloud/uses-proxyprotocol": "true"

logs:
  general:
    level: DEBUG

ports:
  web:
    redirectTo: websecure

    proxyProtocol:
      trustedIPs:
        - 127.0.0.1/32
        - 10.0.0.0/8
    forwardedHeaders:
      trustedIPs:
        - 127.0.0.1/32
        - 10.0.0.0/8
  websecure:
    proxyProtocol:
      trustedIPs:
        - 127.0.0.1/32
        - 10.0.0.0/8
    forwardedHeaders:
      trustedIPs:
        - 127.0.0.1/32
        - 10.0.0.0/8

tlsOptions: {}
tlsStore: {}
tls:
  secretName: ******

certResolvers:
  letsencrypt:
    email: maxim@test.local
    tlsChallenge: true
    httpChallenge:
      entryPoint: "web"
    storage: /data/acme.json

  EOT

  /*   nginx_values = <<EOT
controller:
  watchIngressWithoutClass: "true"
  kind: "DaemonSet"
  config:
    "use-forwarded-headers": "true"
    "compute-full-forwarded-for": "true"
    "use-proxy-protocol": "true"
  service:
    annotations:
      "load-balancer.hetzner.cloud/name": "h-k3s-test"
      "load-balancer.hetzner.cloud/use-private-ip": "true"
      "load-balancer.hetzner.cloud/disable-private-ingress": "true"
      "load-balancer.hetzner.cloud/location": "nbg1"
      "load-balancer.hetzner.cloud/type": "lb11"
      "load-balancer.hetzner.cloud/uses-proxyprotocol": "true"
  EOT */

  /*   rancher_values = <<EOT
ingress:
  tls:
    source: "rancher"
hostname: "rancher.example.com"
replicas: 1
bootstrapPassword: "supermario"
  EOT */

}

provider "hcloud" {
  token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
}

terraform {
  required_version = ">= 1.3.3"
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = ">= 1.43.0"
    }
  }
}

output "kubeconfig" {
  value     = module.kube-hetzner.kubeconfig
  sensitive = true
}

output "network_id" {
  value     = module.kube-hetzner.network_id
}

Screenshots

No response

Platform

Linux

mysticaltech commented 10 months ago

@CroutonDigital You are in luck: when you have nodepools with count 0 at the end of the nodepool list, you can remove them. Please do so and try again. Here is the agent nodepools setup I propose. Also, delete the line placement_group_disable = true; it won't work here, as it needs to be used from the get-go.

Try this:

    agent_nodepools = [
    {
      name        = "agent-small",
      server_type = "cpx11",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 0
    },
    {
      name        = "agent-large",
      server_type = "ccx23",
      location    = "fsn1",
      labels      = [
        "nodetype=core-ccx23"
      ],
      taints      = [],
      count       = 6
    },
    {
      name        = "bots-large",
      server_type = "ccx23",
      location    = "fsn1",
      labels      = [
        "nodetype=bots-node"
      ],
      taints      = [],
      count       = 6
    }
  ]
mysticaltech commented 10 months ago

If that does not work, please give me the output of hcloud placement-group list and hcloud placement-group describe <placement-group-name> to understand better what is happening.

Note that this is how they are created and allocated:

resource "hcloud_placement_group" "agent" {
  count  = ceil(local.agent_count / 10)
  name   = "${var.cluster_name}-agent-${count.index + 1}"
  labels = local.labels
  type   = "spread"
}

 placement_group_id           = var.placement_group_disable ? null : hcloud_placement_group.agent[floor(index(keys(local.agent_nodes), each.key) / 10)].id

This is because there is a maximum of 10 servers per placement group.
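As a minimal illustration of that allocation math (assuming 12 agents in total, i.e. the original 10 plus the 2 being added):

# Illustration only, not module code: how the ceil/floor math spreads agents
locals {
  agent_count           = 12
  placement_group_count = ceil(local.agent_count / 10) # = 2 groups

  # An agent at list position i is assigned to group floor(i / 10),
  # so positions 0-9 land in "<cluster_name>-agent-1"
  # and positions 10-11 land in "<cluster_name>-agent-2".
}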

mysticaltech commented 10 months ago

The above PR should fix your issue @CroutonDigital, but the trouble is that it's probably not backward compatible. We will reserve it for our next major release v3.

In the meantime, please follow the guidance laid out previously: debug with the hcloud CLI and adjust the nodepool definitions carefully. Note that even though your first agent nodepool cannot be deleted, since it already has a count of 0 you can change its server type and name.

mysticaltech commented 10 months ago

@CroutonDigital If you clone the repo locally, check out the fix/placement-group-logic branch, and point the module source in kube.tf to that path while commenting out the version, you can run terraform plan to see whether it will upgrade smoothly or not.
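A minimal sketch of that change, assuming the repo is cloned next to your project at ../terraform-hcloud-kube-hetzner:

# git clone https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner.git ../terraform-hcloud-kube-hetzner
# cd ../terraform-hcloud-kube-hetzner && git checkout fix/placement-group-logic

module "kube-hetzner" {
  # source  = "kube-hetzner/kube-hetzner/hcloud"
  # version = "2.11.4"
  source = "../terraform-hcloud-kube-hetzner" # local checkout of the fix branch

  # ... rest of the configuration unchanged ...
}

After switching the module source, re-run terraform init before terraform plan.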

CroutonDigital commented 10 months ago

When I try to comment out this block:

    {
      name        = "agent-small",
      server_type = "cpx11",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 0
    },

then terraform plan wants to rebuild the full cluster:

Plan: 56 to add, 0 to change, 57 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
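A likely explanation, inferred from the instance keys visible in the plan output further down (so treat this as an assumption rather than module documentation): agents appear to be keyed by nodepool position, so removing a pool from the middle of the list shifts every later key and Terraform treats those servers as new resources:

# With agent-small present, instance keys look like:
#   "0-0-agent-small", "1-0-agent-large", ..., "2-5-bots-large"
# With agent-small commented out, the remaining pools shift down:
#   "0-0-agent-large", ..., "1-5-bots-large"
# Every existing server changes its resource address, hence destroy + create.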
CroutonDigital commented 10 months ago

Let's try cloning the branch with the fix and testing it on my test cluster.

CroutonDigital commented 10 months ago

I commented out # placement_group_disable = true and terraform apply finished with success. Now I have 11 nodes: 3 additional nodes were added to placement group h-k3s-test-agent-1, and 1 additional node was left without a placement group; see the screenshot below.

I also see a new, second placement group, which is empty.

❯ hcloud placement-group list
ID       NAME                         SERVERS      TYPE     AGE
192472   h-k3s-test-agent-1           10 servers   spread   146d
192473   h-k3s-test-control-plane-1   2 servers    spread   146d
294305   h-k3s-test-agent-2           0 servers    spread   7m
❯ hcloud placement-group describe h-k3s-test-agent-1
ID:             192472
Name:           h-k3s-test-agent-1
Created:        Tue Aug 15 11:53:46 +04 2023 (4 months ago)
Labels:
  provisioner: terraform
  engine: k3s
  cluster: h-k3s-test
Servers:
  - Server ID:          41502318
    Server Name:        h-k3s-test-bots-large-uzx
  - Server ID:          40744522
    Server Name:        h-k3s-test-bots-large-cos
  - Server ID:          40744523
    Server Name:        h-k3s-test-bots-large-pck
  - Server ID:          40745011
    Server Name:        h-k3s-test-agent-large-fsb
  - Server ID:          40745012
    Server Name:        h-k3s-test-agent-large-hsl
  - Server ID:          40751839
    Server Name:        h-k3s-test-agent-large-cwc
  - Server ID:          41651098
    Server Name:        h-k3s-test-bots-large-yev
  - Server ID:          41651101
    Server Name:        h-k3s-test-bots-large-rlv
  - Server ID:          41651102
    Server Name:        h-k3s-test-bots-large-wuf
  - Server ID:          41651100
    Server Name:        h-k3s-test-bots-large-lwz
Type:           spread
❯ hcloud placement-group describe h-k3s-test-agent-2
ID:             294305
Name:           h-k3s-test-agent-2
Created:        Mon Jan  8 11:58:05 +04 2024 (8 minutes ago)
Labels:
  engine: k3s
  cluster: h-k3s-test
  provisioner: terraform
Servers:
Type:           spread
❯ hcloud placement-group describe h-k3s-test-control-plane-1                   
ID:             192473
Name:           h-k3s-test-control-plane-1
Created:        Tue Aug 15 11:53:46 +04 2023 (4 months ago)
Labels:
  engine: k3s
  cluster: h-k3s-test
  provisioner: terraform
Servers:
  - Server ID:          35985276
    Server Name:        h-k3s-test-control-plane-nbg1-gdj
  - Server ID:          35985275
    Server Name:        h-k3s-test-control-plane-fsn1-yid
Type:           spread

Screenshot 2024-01-08 at 12 08 23

mysticaltech commented 10 months ago

@CroutonDigital Thanks for sharing. Now try this with the stable branch (your current version):

    agent_nodepools = [
    {
      name        = "agent-large-0",
      server_type = "ccx23",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 2
    },
    {
      name        = "agent-large",
      server_type = "ccx23",
      location    = "fsn1",
      labels      = [
        "nodetype=core-ccx23"
      ],
      taints      = [],
      count       = 4
    },
    {
      name        = "bots-large",
      server_type = "ccx23",
      location    = "fsn1",
      labels      = [
        "nodetype=bots-node"
      ],
      taints      = [],
      count       = 6
    }
  ]
mysticaltech commented 10 months ago

Ultimately, we will need to implement a one-placement-group-per-nodepool policy, but if the above temporarily fixes it for you, that would be great. Otherwise, just add more nodepools at the end, after removing the empty ones.
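For context, a rough sketch of what a per-nodepool policy could look like (an assumption only, with illustrative variable names; this is not how the module currently creates groups):

# One spread placement group per agent nodepool instead of one per 10 agents
resource "hcloud_placement_group" "agent_nodepool" {
  for_each = { for np in var.agent_nodepools : np.name => np }

  name   = "${var.cluster_name}-${each.key}"
  labels = local.labels
  type   = "spread"
}

# Hypothetical assignment in the host module:
#   placement_group_id = var.placement_group_disable ? null : hcloud_placement_group.agent_nodepool[each.value.nodepool_name].id

Note that a spread placement group is still limited to 10 servers, so a nodepool with more than 10 nodes would still need to be split across groups.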

mysticaltech commented 10 months ago

@Silvest89 FYI, see the above. If you have any ideas on how to temporarily solve his issue, they are more than welcome; I'm running out of ideas.

CroutonDigital commented 10 months ago

One placement group per nodepool seems like a good idea.

mysticaltech commented 10 months ago

@CroutonDigital Please at least post your terraform plan with the new branch.

maximen39 commented 10 months ago

A similar problem occurred when I tried to increase the count of agents. Can I just wait for the release of the PR so that my problem is solved?

CroutonDigital commented 10 months ago

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # module.kube-hetzner.hcloud_placement_group.agent[1] will be created
  + resource "hcloud_placement_group" "agent" {
      + id      = (known after apply)
      + labels  = {
          + "cluster"     = "h-k3s-test"
          + "engine"      = "k3s"
          + "provisioner" = "terraform"
        }
      + name    = "h-k3s-test-agent-2"
      + servers = (known after apply)
      + type    = "spread"
    }

  # module.kube-hetzner.null_resource.agent_config["0-0-agent-small"] will be created
  + resource "null_resource" "agent_config" {
      + id       = (known after apply)
      + triggers = {
          + "agent_id" = (known after apply)
          + "config"   = (known after apply)
        }
    }

  # module.kube-hetzner.null_resource.agent_config["0-1-agent-small"] will be created
  + resource "null_resource" "agent_config" {
      + id       = (known after apply)
      + triggers = {
          + "agent_id" = (known after apply)
          + "config"   = (known after apply)
        }
    }

  # module.kube-hetzner.null_resource.agent_config["1-3-agent-large"] will be created
  + resource "null_resource" "agent_config" {
      + id       = (known after apply)
      + triggers = {
          + "agent_id" = (known after apply)
          + "config"   = (known after apply)
        }
    }

  # module.kube-hetzner.null_resource.agent_config["1-4-agent-large"] will be created
  + resource "null_resource" "agent_config" {
      + id       = (known after apply)
      + triggers = {
          + "agent_id" = (known after apply)
          + "config"   = (known after apply)
        }
    }

  # module.kube-hetzner.null_resource.agent_config["2-3-bots-large"] will be created
  + resource "null_resource" "agent_config" {
      + id       = (known after apply)
      + triggers = {
          + "agent_id" = (known after apply)
          + "config"   = (known after apply)
        }
    }

  # module.kube-hetzner.null_resource.agent_config["2-4-bots-large"] will be created
  + resource "null_resource" "agent_config" {
      + id       = (known after apply)
      + triggers = {
          + "agent_id" = (known after apply)
          + "config"   = (known after apply)
        }
    }

  # module.kube-hetzner.null_resource.agents["0-0-agent-small"] will be created
  + resource "null_resource" "agents" {
      + id       = (known after apply)
      + triggers = {
          + "agent_id" = (known after apply)
        }
    }

  # module.kube-hetzner.null_resource.agents["0-1-agent-small"] will be created
  + resource "null_resource" "agents" {
      + id       = (known after apply)
      + triggers = {
          + "agent_id" = (known after apply)
        }
    }

  # module.kube-hetzner.null_resource.agents["1-3-agent-large"] will be created
  + resource "null_resource" "agents" {
      + id       = (known after apply)
      + triggers = {
          + "agent_id" = (known after apply)
        }
    }

  # module.kube-hetzner.null_resource.agents["1-4-agent-large"] will be created
  + resource "null_resource" "agents" {
      + id       = (known after apply)
      + triggers = {
          + "agent_id" = (known after apply)
        }
    }

  # module.kube-hetzner.null_resource.agents["2-3-bots-large"] will be created
  + resource "null_resource" "agents" {
      + id       = (known after apply)
      + triggers = {
          + "agent_id" = (known after apply)
        }
    }

  # module.kube-hetzner.null_resource.agents["2-4-bots-large"] will be created
  + resource "null_resource" "agents" {
      + id       = (known after apply)
      + triggers = {
          + "agent_id" = (known after apply)
        }
    }

  # module.kube-hetzner.module.agents["0-0-agent-small"].data.cloudinit_config.config will be read during apply
  # (config refers to values not yet known)
 <= data "cloudinit_config" "config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = (known after apply)
          + content_type = "text/cloud-config"
          + filename     = "init.cfg"
        }
    }

  # module.kube-hetzner.module.agents["0-0-agent-small"].hcloud_server.server will be created
  + resource "hcloud_server" "server" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = [
          + 1016130,
        ]
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "143418034"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"     = "h-k3s-test"
          + "engine"      = "k3s"
          + "provisioner" = "terraform"
          + "role"        = "agent_node"
        }
      + location                   = "fsn1"
      + name                       = (known after apply)
      + placement_group_id         = 192472
      + rebuild_protection         = false
      + server_type                = "cpx11"
      + shutdown_before_deletion   = false
      + ssh_keys                   = [
          + "14381223",
        ]
      + status                     = (known after apply)
      + user_data                  = (known after apply)
    }

  # module.kube-hetzner.module.agents["0-0-agent-small"].hcloud_server_network.server will be created
  + resource "hcloud_server_network" "server" {
      + id          = (known after apply)
      + ip          = "10.0.0.101"
      + mac_address = (known after apply)
      + server_id   = (known after apply)
      + subnet_id   = "3236890-10.0.0.0/16"
    }

  # module.kube-hetzner.module.agents["0-0-agent-small"].null_resource.registries will be created
  + resource "null_resource" "registries" {
      + id       = (known after apply)
      + triggers = {
          + "registries" = <<-EOT
                mirrors:
                      eu.gcr.io:
                        endpoint:
                          - "https://eu.gcr.io"
                    configs:
                      eu.gcr.io:
                        auth:
                          username: _json_key
                          password: '{
                  "type": "service_account",
                  "project_id": "asset-management-ci-cd",
                  "private_key_id": "xxxx",
                  "client_email": "image-puller@asset-management-ci-cd.iam.gserviceaccount.com",
                  "client_id": "102058406430119355136",
                  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
                  "token_uri": "https://oauth2.googleapis.com/token",
                  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
                  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/image-puller%40asset-management-ci-cd.iam.gserviceaccount.com",
                  "universe_domain": "googleapis.com"
                }'
            EOT
        }
    }

  # module.kube-hetzner.module.agents["0-0-agent-small"].null_resource.zram will be created
  + resource "null_resource" "zram" {
      + id       = (known after apply)
      + triggers = {
          + "zram_size" = ""
        }
    }

  # module.kube-hetzner.module.agents["0-0-agent-small"].random_string.identity_file will be created
  + resource "random_string" "identity_file" {
      + id          = (known after apply)
      + length      = 20
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + numeric     = true
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

  # module.kube-hetzner.module.agents["0-0-agent-small"].random_string.server will be created
  + resource "random_string" "server" {
      + id          = (known after apply)
      + keepers     = {
          + "name" = "h-k3s-test-agent-small"
        }
      + length      = 3
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = false
      + numeric     = false
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

  # module.kube-hetzner.module.agents["0-1-agent-small"].data.cloudinit_config.config will be read during apply
  # (config refers to values not yet known)
 <= data "cloudinit_config" "config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = (known after apply)
          + content_type = "text/cloud-config"
          + filename     = "init.cfg"
        }
    }

  # module.kube-hetzner.module.agents["0-1-agent-small"].hcloud_server.server will be created
  + resource "hcloud_server" "server" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = [
          + 1016130,
        ]
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "143418034"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"     = "h-k3s-test"
          + "engine"      = "k3s"
          + "provisioner" = "terraform"
          + "role"        = "agent_node"
        }
      + location                   = "fsn1"
      + name                       = (known after apply)
      + placement_group_id         = 192472
      + rebuild_protection         = false
      + server_type                = "cpx11"
      + shutdown_before_deletion   = false
      + ssh_keys                   = [
          + "14381223",
        ]
      + status                     = (known after apply)
      + user_data                  = (known after apply)
    }

  # module.kube-hetzner.module.agents["0-1-agent-small"].hcloud_server_network.server will be created
  + resource "hcloud_server_network" "server" {
      + id          = (known after apply)
      + ip          = "10.0.0.102"
      + mac_address = (known after apply)
      + server_id   = (known after apply)
      + subnet_id   = "3236890-10.0.0.0/16"
    }

  # module.kube-hetzner.module.agents["0-1-agent-small"].null_resource.registries will be created
  + resource "null_resource" "registries" {
      + id       = (known after apply)
      + triggers = {
          + "registries" = <<-EOT
                mirrors:
                      eu.gcr.io:
                        endpoint:
                          - "https://eu.gcr.io"
                    configs:
                      eu.gcr.io:
                        auth:
                          username: _json_key
                          password: '{
                  "type": "service_account",
                  "project_id": "asset-management-ci-cd",
                  "private_key_id": "a4ccbc8eddbaea86d207ca85bc6482a288035c6d",
                  "client_email": "image-puller@asset-management-ci-cd.iam.gserviceaccount.com",
                  "client_id": "102058406430119355136",
                  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
                  "token_uri": "https://oauth2.googleapis.com/token",
                  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
                  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/image-puller%40asset-management-ci-cd.iam.gserviceaccount.com",
                  "universe_domain": "googleapis.com"
                }'
            EOT
        }
    }

  # module.kube-hetzner.module.agents["0-1-agent-small"].null_resource.zram will be created
  + resource "null_resource" "zram" {
      + id       = (known after apply)
      + triggers = {
          + "zram_size" = ""
        }
    }

  # module.kube-hetzner.module.agents["0-1-agent-small"].random_string.identity_file will be created
  + resource "random_string" "identity_file" {
      + id          = (known after apply)
      + length      = 20
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + numeric     = true
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

  # module.kube-hetzner.module.agents["0-1-agent-small"].random_string.server will be created
  + resource "random_string" "server" {
      + id          = (known after apply)
      + keepers     = {
          + "name" = "h-k3s-test-agent-small"
        }
      + length      = 3
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = false
      + numeric     = false
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

  # module.kube-hetzner.module.agents["1-0-agent-large"].data.cloudinit_config.config will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "cloudinit_config" "config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = <<-EOT
                #cloud-config

                debug: True

                write_files:

                # Script to rename the private interface to eth1 and unify NetworkManager connection naming
                - path: /etc/cloud/rename_interface.sh
                  content: |
                    #!/bin/bash
                    set -euo pipefail

                    sleep 11

                    INTERFACE=$(ip link show | awk '/^3:/{print $2}' | sed 's/://g')
                    MAC=$(cat /sys/class/net/$INTERFACE/address)

                    cat <<EOF > /etc/udev/rules.d/70-persistent-net.rules
                    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="$MAC", NAME="eth1"
                    EOF

                    ip link set $INTERFACE down
                    ip link set $INTERFACE name eth1
                    ip link set eth1 up

                    eth0_connection=$(nmcli -g GENERAL.CONNECTION device show eth0)
                    nmcli connection modify "$eth0_connection" \
                      con-name eth0 \
                      connection.interface-name eth0

                    eth1_connection=$(nmcli -g GENERAL.CONNECTION device show eth1)
                    nmcli connection modify "$eth1_connection" \
                      con-name eth1 \
                      connection.interface-name eth1

                    systemctl restart NetworkManager
                  permissions: "0744"

                # Disable ssh password authentication
                - content: |
                    Port 2222
                    PasswordAuthentication no
                    X11Forwarding no
                    MaxAuthTries 2
                    AllowTcpForwarding no
                    AllowAgentForwarding no
                    AuthorizedKeysFile .ssh/authorized_keys
                  path: /etc/ssh/sshd_config.d/kube-hetzner.conf

                # Set reboot method as "kured"
                - content: |
                    REBOOT_METHOD=kured
                  path: /etc/transactional-update.conf

                # Create Rancher repo config
                - content: |
                    [rancher-k3s-common-stable]
                    name=Rancher K3s Common (stable)
                    baseurl=https://rpm.rancher.io/k3s/stable/common/microos/noarch
                    enabled=1
                    gpgcheck=1
                    repo_gpgcheck=0
                    gpgkey=https://rpm.rancher.io/public.key
                  path: /etc/zypp/repos.d/rancher-k3s-common.repo

                # Create the kube_hetzner_selinux.te file, that allows in SELinux to not interfere with various needed services
                - path: /root/kube_hetzner_selinux.te
                  content: |
                    module kube_hetzner_selinux 1.0;

                    require {
                      type kernel_t, bin_t, kernel_generic_helper_t, iscsid_t, iscsid_exec_t, var_run_t,
                      init_t, unlabeled_t, systemd_logind_t, systemd_hostnamed_t, container_t,
                      cert_t, container_var_lib_t, etc_t, usr_t, container_file_t, container_log_t,
                      container_share_t, container_runtime_exec_t, container_runtime_t, var_log_t, proc_t;
                      class key { read view };
                      class file { open read execute execute_no_trans create link lock rename write append setattr unlink getattr watch };
                      class sock_file { watch write create unlink };
                      class unix_dgram_socket create;
                      class unix_stream_socket { connectto read write };
                      class dir { add_name create getattr link lock read rename remove_name reparent rmdir setattr unlink search write watch };
                      class lnk_file { read create };
                      class system module_request;
                      class filesystem associate;
                      class bpf map_create;
                    }

                    #============= kernel_generic_helper_t ==============
                    allow kernel_generic_helper_t bin_t:file execute_no_trans;
                    allow kernel_generic_helper_t kernel_t:key { read view };
                    allow kernel_generic_helper_t self:unix_dgram_socket create;

                    #============= iscsid_t ==============
                    allow iscsid_t iscsid_exec_t:file execute;
                    allow iscsid_t var_run_t:sock_file write;
                    allow iscsid_t var_run_t:unix_stream_socket connectto;

                    #============= init_t ==============
                    allow init_t unlabeled_t:dir { add_name remove_name rmdir };
                    allow init_t unlabeled_t:lnk_file create;
                    allow init_t container_t:file { open read };

                    #============= systemd_logind_t ==============
                    allow systemd_logind_t unlabeled_t:dir search;

                    #============= systemd_hostnamed_t ==============
                    allow systemd_hostnamed_t unlabeled_t:dir search;

                    #============= container_t ==============
                    # Basic file and directory operations for specific types
                    allow container_t cert_t:dir read;
                    allow container_t cert_t:lnk_file read;
                    allow container_t cert_t:file { read open };
                    allow container_t container_var_lib_t:file { create open read write rename lock };
                    allow container_t etc_t:dir { add_name remove_name write create setattr };
                    allow container_t etc_t:sock_file { create unlink };
                    allow container_t usr_t:dir { add_name create getattr link lock read rename remove_name reparent rmdir setattr unlink search write };
                    allow container_t usr_t:file { append create execute getattr link lock read rename setattr unlink write };

                    # Additional rules for container_t
                    allow container_t container_file_t:file { open read write append getattr setattr };
                    allow container_t container_file_t:sock_file watch;
                    allow container_t container_log_t:file { open read write append getattr setattr };
                    allow container_t container_share_t:dir { read write add_name remove_name };
                    allow container_t container_share_t:file { read write create unlink };
                    allow container_t container_runtime_exec_t:file { read execute execute_no_trans open };
                    allow container_t container_runtime_t:unix_stream_socket { connectto read write };
                    allow container_t kernel_t:system module_request;
                    allow container_t container_log_t:dir { read watch };
                    allow container_t container_log_t:file { open read watch };
                    allow container_t container_log_t:lnk_file read;
                    allow container_t var_log_t:dir { add_name write };
                    allow container_t var_log_t:file { create lock open read setattr write };
                    allow container_t var_log_t:dir remove_name;
                    allow container_t var_log_t:file unlink;
                    allow container_t proc_t:filesystem associate;
                    allow container_t self:bpf map_create;

                # Create the k3s registries file if needed

                # Create k3s registries file
                - content:==
                  encoding: base64
                  path: /etc/rancher/k3s/registries.yaml

                # Apply new DNS config

                # Set prepare for manual dns config
                - content: |
                    [main]
                    dns=none
                  path: /etc/NetworkManager/conf.d/dns.conf

                - content: |
                        nameserver 1.1.1.1
                        nameserver 8.8.8.8
                        nameserver 9.9.9.9

                  path: /etc/resolv.conf
                  permissions: '0644'

                # Add ssh authorized keys
                ssh_authorized_keys:

                # Resize /var, not /, as that's the last partition in MicroOS image.
                growpart:
                    devices: ["/var"]

                # Make sure the hostname is set correctly
                hostname: h-k3s-test-agent-large-fsb
                preserve_hostname: true

                runcmd:

                # ensure that /var uses full available disk size, thanks to btrfs this is easy
                - [btrfs, 'filesystem', 'resize', 'max', '/var']

                # SELinux permission for the SSH alternative port

                # SELinux permission for the SSH alternative port.
                - [semanage, port, '-a', '-t', ssh_port_t, '-p', tcp, 2222]

                # Create and apply the necessary SELinux module for kube-hetzner
                - [checkmodule, '-M', '-m', '-o', '/root/kube_hetzner_selinux.mod', '/root/kube_hetzner_selinux.te']
                - ['semodule_package', '-o', '/root/kube_hetzner_selinux.pp', '-m', '/root/kube_hetzner_selinux.mod']
                - [semodule, '-i', '/root/kube_hetzner_selinux.pp']
                - [setsebool, '-P', 'virt_use_samba', '1']
                - [setsebool, '-P', 'domain_kernel_load_modules', '1']

                # Disable rebootmgr service as we use kured instead
                - [systemctl, disable, '--now', 'rebootmgr.service']

                # Set the dns manually
                - [systemctl, 'reload', 'NetworkManager']

                # Bounds the amount of logs that can survive on the system
                - [sed, '-i', 's/#SystemMaxUse=/SystemMaxUse=3G/g', /etc/systemd/journald.conf]
                - [sed, '-i', 's/#MaxRetentionSec=/MaxRetentionSec=1week/g', /etc/systemd/journald.conf]

                # Reduces the default number of snapshots from 2-10 number limit, to 4 and from 4-10 number limit important, to 2
                - [sed, '-i', 's/NUMBER_LIMIT="2-10"/NUMBER_LIMIT="4"/g', /etc/snapper/configs/root]
                - [sed, '-i', 's/NUMBER_LIMIT_IMPORTANT="4-10"/NUMBER_LIMIT_IMPORTANT="3"/g', /etc/snapper/configs/root]

                # Allow network interface
                - [chmod, '+x', '/etc/cloud/rename_interface.sh']

                # Restart the sshd service to apply the new config
                - [systemctl, 'restart', 'sshd']

                # Make sure the network is up
                - [systemctl, restart, NetworkManager]
                - [systemctl, status, NetworkManager]
                - [ip, route, add, default, via, '172.31.1.1', dev, 'eth0']

                # Cleanup some logs
                - [truncate, '-s', '0', '/var/log/audit/audit.log']
            EOT
          + content_type = "text/cloud-config"
          + filename     = "init.cfg"
        }
    }

  # module.kube-hetzner.module.agents["1-1-agent-large"].data.cloudinit_config.config will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "cloudinit_config" "config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = <<-EOT
                #cloud-config

                debug: True

                write_files:

                # Script to rename the private interface to eth1 and unify NetworkManager connection naming
                - path: /etc/cloud/rename_interface.sh
                  content: |
                    #!/bin/bash
                    set -euo pipefail

                    sleep 11

                    INTERFACE=$(ip link show | awk '/^3:/{print $2}' | sed 's/://g')
                    MAC=$(cat /sys/class/net/$INTERFACE/address)

                    cat <<EOF > /etc/udev/rules.d/70-persistent-net.rules
                    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="$MAC", NAME="eth1"
                    EOF

                    ip link set $INTERFACE down
                    ip link set $INTERFACE name eth1
                    ip link set eth1 up

                    eth0_connection=$(nmcli -g GENERAL.CONNECTION device show eth0)
                    nmcli connection modify "$eth0_connection" \
                      con-name eth0 \
                      connection.interface-name eth0

                    eth1_connection=$(nmcli -g GENERAL.CONNECTION device show eth1)
                    nmcli connection modify "$eth1_connection" \
                      con-name eth1 \
                      connection.interface-name eth1

                    systemctl restart NetworkManager
                  permissions: "0744"

                # Disable ssh password authentication
                - content: |
                    Port 2222
                    PasswordAuthentication no
                    X11Forwarding no
                    MaxAuthTries 2
                    AllowTcpForwarding no
                    AllowAgentForwarding no
                    AuthorizedKeysFile .ssh/authorized_keys
                  path: /etc/ssh/sshd_config.d/kube-hetzner.conf

                # Set reboot method as "kured"
                - content: |
                    REBOOT_METHOD=kured
                  path: /etc/transactional-update.conf

                # Create Rancher repo config
                - content: |
                    [rancher-k3s-common-stable]
                    name=Rancher K3s Common (stable)
                    baseurl=https://rpm.rancher.io/k3s/stable/common/microos/noarch
                    enabled=1
                    gpgcheck=1
                    repo_gpgcheck=0
                    gpgkey=https://rpm.rancher.io/public.key
                  path: /etc/zypp/repos.d/rancher-k3s-common.repo

                # Create the kube_hetzner_selinux.te file, that allows in SELinux to not interfere with various needed services
                - path: /root/kube_hetzner_selinux.te
                  content: |
                    module kube_hetzner_selinux 1.0;

                    require {
                      type kernel_t, bin_t, kernel_generic_helper_t, iscsid_t, iscsid_exec_t, var_run_t,
                      init_t, unlabeled_t, systemd_logind_t, systemd_hostnamed_t, container_t,
                      cert_t, container_var_lib_t, etc_t, usr_t, container_file_t, container_log_t,
                      container_share_t, container_runtime_exec_t, container_runtime_t, var_log_t, proc_t;
                      class key { read view };
                      class file { open read execute execute_no_trans create link lock rename write append setattr unlink getattr watch };
                      class sock_file { watch write create unlink };
                      class unix_dgram_socket create;
                      class unix_stream_socket { connectto read write };
                      class dir { add_name create getattr link lock read rename remove_name reparent rmdir setattr unlink search write watch };
                      class lnk_file { read create };
                      class system module_request;
                      class filesystem associate;
                      class bpf map_create;
                    }

                    #============= kernel_generic_helper_t ==============
                    allow kernel_generic_helper_t bin_t:file execute_no_trans;
                    allow kernel_generic_helper_t kernel_t:key { read view };
                    allow kernel_generic_helper_t self:unix_dgram_socket create;

                    #============= iscsid_t ==============
                    allow iscsid_t iscsid_exec_t:file execute;
                    allow iscsid_t var_run_t:sock_file write;
                    allow iscsid_t var_run_t:unix_stream_socket connectto;

                    #============= init_t ==============
                    allow init_t unlabeled_t:dir { add_name remove_name rmdir };
                    allow init_t unlabeled_t:lnk_file create;
                    allow init_t container_t:file { open read };

                    #============= systemd_logind_t ==============
                    allow systemd_logind_t unlabeled_t:dir search;

                    #============= systemd_hostnamed_t ==============
                    allow systemd_hostnamed_t unlabeled_t:dir search;

                    #============= container_t ==============
                    # Basic file and directory operations for specific types
                    allow container_t cert_t:dir read;
                    allow container_t cert_t:lnk_file read;
                    allow container_t cert_t:file { read open };
                    allow container_t container_var_lib_t:file { create open read write rename lock };
                    allow container_t etc_t:dir { add_name remove_name write create setattr };
                    allow container_t etc_t:sock_file { create unlink };
                    allow container_t usr_t:dir { add_name create getattr link lock read rename remove_name reparent rmdir setattr unlink search write };
                    allow container_t usr_t:file { append create execute getattr link lock read rename setattr unlink write };

                    # Additional rules for container_t
                    allow container_t container_file_t:file { open read write append getattr setattr };
                    allow container_t container_file_t:sock_file watch;
                    allow container_t container_log_t:file { open read write append getattr setattr };
                    allow container_t container_share_t:dir { read write add_name remove_name };
                    allow container_t container_share_t:file { read write create unlink };
                    allow container_t container_runtime_exec_t:file { read execute execute_no_trans open };
                    allow container_t container_runtime_t:unix_stream_socket { connectto read write };
                    allow container_t kernel_t:system module_request;
                    allow container_t container_log_t:dir { read watch };
                    allow container_t container_log_t:file { open read watch };
                    allow container_t container_log_t:lnk_file read;
                    allow container_t var_log_t:dir { add_name write };
                    allow container_t var_log_t:file { create lock open read setattr write };
                    allow container_t var_log_t:dir remove_name;
                    allow container_t var_log_t:file unlink;
                    allow container_t proc_t:filesystem associate;
                    allow container_t self:bpf map_create;

                # Create the k3s registries file if needed

                # Create k3s registries file
                - content:==
                  encoding: base64
                  path: /etc/rancher/k3s/registries.yaml

                # Apply new DNS config

                # Set prepare for manual dns config
                - content: |
                    [main]
                    dns=none
                  path: /etc/NetworkManager/conf.d/dns.conf

                - content: |
                        nameserver 1.1.1.1
                        nameserver 8.8.8.8
                        nameserver 9.9.9.9

                  path: /etc/resolv.conf
                  permissions: '0644'

                # Add ssh authorized keys
                ssh_authorized_keys:
                  - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOhVCnngRp4ZdkMIcWlm6JEcNkre7KowKrRPVR/opPRk maxi@Maksims-MacBook-Pro.local

                # Resize /var, not /, as that's the last partition in MicroOS image.
                growpart:
                    devices: ["/var"]

                # Make sure the hostname is set correctly
                hostname: h-k3s-test-agent-large-hsl
                preserve_hostname: true

                runcmd:

                # ensure that /var uses full available disk size, thanks to btrfs this is easy
                - [btrfs, 'filesystem', 'resize', 'max', '/var']

                # SELinux permission for the SSH alternative port

                # SELinux permission for the SSH alternative port.
                - [semanage, port, '-a', '-t', ssh_port_t, '-p', tcp, 2222]

                # Create and apply the necessary SELinux module for kube-hetzner
                - [checkmodule, '-M', '-m', '-o', '/root/kube_hetzner_selinux.mod', '/root/kube_hetzner_selinux.te']
                - ['semodule_package', '-o', '/root/kube_hetzner_selinux.pp', '-m', '/root/kube_hetzner_selinux.mod']
                - [semodule, '-i', '/root/kube_hetzner_selinux.pp']
                - [setsebool, '-P', 'virt_use_samba', '1']
                - [setsebool, '-P', 'domain_kernel_load_modules', '1']

                # Disable rebootmgr service as we use kured instead
                - [systemctl, disable, '--now', 'rebootmgr.service']

                # Set the dns manually
                - [systemctl, 'reload', 'NetworkManager']

                # Bounds the amount of logs that can survive on the system
                - [sed, '-i', 's/#SystemMaxUse=/SystemMaxUse=3G/g', /etc/systemd/journald.conf]
                - [sed, '-i', 's/#MaxRetentionSec=/MaxRetentionSec=1week/g', /etc/systemd/journald.conf]

                # Reduces the default number of snapshots from 2-10 number limit, to 4 and from 4-10 number limit important, to 2
                - [sed, '-i', 's/NUMBER_LIMIT="2-10"/NUMBER_LIMIT="4"/g', /etc/snapper/configs/root]
                - [sed, '-i', 's/NUMBER_LIMIT_IMPORTANT="4-10"/NUMBER_LIMIT_IMPORTANT="3"/g', /etc/snapper/configs/root]

                # Allow network interface
                - [chmod, '+x', '/etc/cloud/rename_interface.sh']

                # Restart the sshd service to apply the new config
                - [systemctl, 'restart', 'sshd']

                # Make sure the network is up
                - [systemctl, restart, NetworkManager]
                - [systemctl, status, NetworkManager]
                - [ip, route, add, default, via, '172.31.1.1', dev, 'eth0']

                # Cleanup some logs
                - [truncate, '-s', '0', '/var/log/audit/audit.log']
            EOT
          + content_type = "text/cloud-config"
          + filename     = "init.cfg"
        }
    }

  # module.kube-hetzner.module.agents["1-2-agent-large"].data.cloudinit_config.config will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "cloudinit_config" "config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = <<-EOT
                #cloud-config

                debug: True

                write_files:

                # Script to rename the private interface to eth1 and unify NetworkManager connection naming
                - path: /etc/cloud/rename_interface.sh
                  content: |
                    #!/bin/bash
                    set -euo pipefail

                    sleep 11

                    INTERFACE=$(ip link show | awk '/^3:/{print $2}' | sed 's/://g')
                    MAC=$(cat /sys/class/net/$INTERFACE/address)

                    cat <<EOF > /etc/udev/rules.d/70-persistent-net.rules
                    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="$MAC", NAME="eth1"
                    EOF

                    ip link set $INTERFACE down
                    ip link set $INTERFACE name eth1
                    ip link set eth1 up

                    eth0_connection=$(nmcli -g GENERAL.CONNECTION device show eth0)
                    nmcli connection modify "$eth0_connection" \
                      con-name eth0 \
                      connection.interface-name eth0

                    eth1_connection=$(nmcli -g GENERAL.CONNECTION device show eth1)
                    nmcli connection modify "$eth1_connection" \
                      con-name eth1 \
                      connection.interface-name eth1

                    systemctl restart NetworkManager
                  permissions: "0744"

                # Disable ssh password authentication
                - content: |
                    Port 2222
                    PasswordAuthentication no
                    X11Forwarding no
                    MaxAuthTries 2
                    AllowTcpForwarding no
                    AllowAgentForwarding no
                    AuthorizedKeysFile .ssh/authorized_keys
                  path: /etc/ssh/sshd_config.d/kube-hetzner.conf

                # Set reboot method as "kured"
                - content: |
                    REBOOT_METHOD=kured
                  path: /etc/transactional-update.conf

                # Create Rancher repo config
                - content: |
                    [rancher-k3s-common-stable]
                    name=Rancher K3s Common (stable)
                    baseurl=https://rpm.rancher.io/k3s/stable/common/microos/noarch
                    enabled=1
                    gpgcheck=1
                    repo_gpgcheck=0
                    gpgkey=https://rpm.rancher.io/public.key
                  path: /etc/zypp/repos.d/rancher-k3s-common.repo

                # Create the kube_hetzner_selinux.te file, that allows in SELinux to not interfere with various needed services
                - path: /root/kube_hetzner_selinux.te
                  content: |
                    module kube_hetzner_selinux 1.0;

                    require {
                      type kernel_t, bin_t, kernel_generic_helper_t, iscsid_t, iscsid_exec_t, var_run_t,
                      init_t, unlabeled_t, systemd_logind_t, systemd_hostnamed_t, container_t,
                      cert_t, container_var_lib_t, etc_t, usr_t, container_file_t, container_log_t,
                      container_share_t, container_runtime_exec_t, container_runtime_t, var_log_t, proc_t;
                      class key { read view };
                      class file { open read execute execute_no_trans create link lock rename write append setattr unlink getattr watch };
                      class sock_file { watch write create unlink };
                      class unix_dgram_socket create;
                      class unix_stream_socket { connectto read write };
                      class dir { add_name create getattr link lock read rename remove_name reparent rmdir setattr unlink search write watch };
                      class lnk_file { read create };
                      class system module_request;
                      class filesystem associate;
                      class bpf map_create;
                    }

                    #============= kernel_generic_helper_t ==============
                    allow kernel_generic_helper_t bin_t:file execute_no_trans;
                    allow kernel_generic_helper_t kernel_t:key { read view };
                    allow kernel_generic_helper_t self:unix_dgram_socket create;

                    #============= iscsid_t ==============
                    allow iscsid_t iscsid_exec_t:file execute;
                    allow iscsid_t var_run_t:sock_file write;
                    allow iscsid_t var_run_t:unix_stream_socket connectto;

                    #============= init_t ==============
                    allow init_t unlabeled_t:dir { add_name remove_name rmdir };
                    allow init_t unlabeled_t:lnk_file create;
                    allow init_t container_t:file { open read };

                    #============= systemd_logind_t ==============
                    allow systemd_logind_t unlabeled_t:dir search;

                    #============= systemd_hostnamed_t ==============
                    allow systemd_hostnamed_t unlabeled_t:dir search;

                    #============= container_t ==============
                    # Basic file and directory operations for specific types
                    allow container_t cert_t:dir read;
                    allow container_t cert_t:lnk_file read;
                    allow container_t cert_t:file { read open };
                    allow container_t container_var_lib_t:file { create open read write rename lock };
                    allow container_t etc_t:dir { add_name remove_name write create setattr };
                    allow container_t etc_t:sock_file { create unlink };
                    allow container_t usr_t:dir { add_name create getattr link lock read rename remove_name reparent rmdir setattr unlink search write };
                    allow container_t usr_t:file { append create execute getattr link lock read rename setattr unlink write };

                    # Additional rules for container_t
                    allow container_t container_file_t:file { open read write append getattr setattr };
                    allow container_t container_file_t:sock_file watch;
                    allow container_t container_log_t:file { open read write append getattr setattr };
                    allow container_t container_share_t:dir { read write add_name remove_name };
                    allow container_t container_share_t:file { read write create unlink };
                    allow container_t container_runtime_exec_t:file { read execute execute_no_trans open };
                    allow container_t container_runtime_t:unix_stream_socket { connectto read write };
                    allow container_t kernel_t:system module_request;
                    allow container_t container_log_t:dir { read watch };
                    allow container_t container_log_t:file { open read watch };
                    allow container_t container_log_t:lnk_file read;
                    allow container_t var_log_t:dir { add_name write };
                    allow container_t var_log_t:file { create lock open read setattr write };
                    allow container_t var_log_t:dir remove_name;
                    allow container_t var_log_t:file unlink;
                    allow container_t proc_t:filesystem associate;
                    allow container_t self:bpf map_create;

                # Create the k3s registries file if needed

                # Create k3s registries file
                - content:==
                  encoding: base64
                  path: /etc/rancher/k3s/registries.yaml

                # Apply new DNS config

                # Set prepare for manual dns config
                - content: |
                    [main]
                    dns=none
                  path: /etc/NetworkManager/conf.d/dns.conf

                - content: |
                        nameserver 1.1.1.1
                        nameserver 8.8.8.8
                        nameserver 9.9.9.9

                  path: /etc/resolv.conf
                  permissions: '0644'

                # Add ssh authorized keys
                ssh_authorized_keys:
                  - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOhVCnngRp4ZdkMIcWlm6JEcNkre7KowKrRPVR/opPRk maxi@Maksims-MacBook-Pro.local

                # Resize /var, not /, as that's the last partition in MicroOS image.
                growpart:
                    devices: ["/var"]

                # Make sure the hostname is set correctly
                hostname: h-k3s-test-agent-large-cwc
                preserve_hostname: true

                runcmd:

                # ensure that /var uses full available disk size, thanks to btrfs this is easy
                - [btrfs, 'filesystem', 'resize', 'max', '/var']

                # SELinux permission for the SSH alternative port

                # SELinux permission for the SSH alternative port.
                - [semanage, port, '-a', '-t', ssh_port_t, '-p', tcp, 2222]

                # Create and apply the necessary SELinux module for kube-hetzner
                - [checkmodule, '-M', '-m', '-o', '/root/kube_hetzner_selinux.mod', '/root/kube_hetzner_selinux.te']
                - ['semodule_package', '-o', '/root/kube_hetzner_selinux.pp', '-m', '/root/kube_hetzner_selinux.mod']
                - [semodule, '-i', '/root/kube_hetzner_selinux.pp']
                - [setsebool, '-P', 'virt_use_samba', '1']
                - [setsebool, '-P', 'domain_kernel_load_modules', '1']

                # Disable rebootmgr service as we use kured instead
                - [systemctl, disable, '--now', 'rebootmgr.service']

                # Set the dns manually
                - [systemctl, 'reload', 'NetworkManager']

                # Bounds the amount of logs that can survive on the system
                - [sed, '-i', 's/#SystemMaxUse=/SystemMaxUse=3G/g', /etc/systemd/journald.conf]
                - [sed, '-i', 's/#MaxRetentionSec=/MaxRetentionSec=1week/g', /etc/systemd/journald.conf]

                # Reduces the default number of snapshots from 2-10 number limit, to 4 and from 4-10 number limit important, to 2
                - [sed, '-i', 's/NUMBER_LIMIT="2-10"/NUMBER_LIMIT="4"/g', /etc/snapper/configs/root]
                - [sed, '-i', 's/NUMBER_LIMIT_IMPORTANT="4-10"/NUMBER_LIMIT_IMPORTANT="3"/g', /etc/snapper/configs/root]

                # Allow network interface
                - [chmod, '+x', '/etc/cloud/rename_interface.sh']

                # Restart the sshd service to apply the new config
                - [systemctl, 'restart', 'sshd']

                # Make sure the network is up
                - [systemctl, restart, NetworkManager]
                - [systemctl, status, NetworkManager]
                - [ip, route, add, default, via, '172.31.1.1', dev, 'eth0']

                # Cleanup some logs
                - [truncate, '-s', '0', '/var/log/audit/audit.log']
            EOT
          + content_type = "text/cloud-config"
          + filename     = "init.cfg"
        }
    }

  # module.kube-hetzner.module.agents["1-3-agent-large"].data.cloudinit_config.config will be read during apply
  # (config refers to values not yet known)
 <= data "cloudinit_config" "config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = (known after apply)
          + content_type = "text/cloud-config"
          + filename     = "init.cfg"
        }
    }

  # module.kube-hetzner.module.agents["1-3-agent-large"].hcloud_server.server will be created
  + resource "hcloud_server" "server" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = [
          + 1016130,
        ]
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "143418034"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"     = "h-k3s-test"
          + "engine"      = "k3s"
          + "provisioner" = "terraform"
          + "role"        = "agent_node"
        }
      + location                   = "fsn1"
      + name                       = (known after apply)
      + placement_group_id         = 192472
      + rebuild_protection         = false
      + server_type                = "ccx23"
      + shutdown_before_deletion   = false
      + ssh_keys                   = [
          + "14381223",
        ]
      + status                     = (known after apply)
      + user_data                  = (known after apply)
    }
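
Note that both new agents are planned with placement_group_id = 192472, i.e. they are put into an existing module-managed group, and Hetzner caps a "spread" placement group at 10 servers, which is the kind of service_error reported above. As a minimal sketch using plain hcloud provider resources (not the module's internals; the names and image below are placeholders), additional servers could go into a second spread group to stay under that cap:

# Sketch only: standalone hcloud resources to illustrate the 10-server
# limit of a "spread" placement group; names and image are hypothetical.
resource "hcloud_placement_group" "agents_extra" {
  name = "agents-extra"
  type = "spread" # Hetzner allows at most 10 servers per spread group
}

resource "hcloud_server" "extra_agent" {
  name               = "extra-agent-1"
  server_type        = "ccx23"
  image              = "ubuntu-22.04" # placeholder; the cluster actually boots from a MicroOS snapshot
  location           = "fsn1"
  placement_group_id = hcloud_placement_group.agents_extra.id
}

Treat this purely as an illustration of the underlying provider behaviour; how the kube-hetzner module splits nodepools across placement groups is up to the module itself.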

  # module.kube-hetzner.module.agents["1-3-agent-large"].hcloud_server_network.server will be created
  + resource "hcloud_server_network" "server" {
      + id          = (known after apply)
      + ip          = "10.1.0.104"
      + mac_address = (known after apply)
      + server_id   = (known after apply)
      + subnet_id   = "3236890-10.1.0.0/16"
    }

  # module.kube-hetzner.module.agents["1-3-agent-large"].null_resource.registries will be created
  + resource "null_resource" "registries" {
      + id       = (known after apply)
      + triggers = {
          + "registries" = <<-EOT
                mirrors:
                      eu.gcr.io:
                        endpoint:
                          - "https://eu.gcr.io"
                    configs:
                      eu.gcr.io:
                        auth:
                          username: _json_key
                          password: '{
                  "type": "service_account",
                  "project_id": "asset-management-ci-cd",
                  "private_key_id": "a4ccbc8eddbaea86d207ca85bc6482a288035c6d",
                  "client_email": "image-puller@asset-management-ci-cd.iam.gserviceaccount.com",
                  "client_id": "102058406430119355136",
                  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
                  "token_uri": "https://oauth2.googleapis.com/token",
                  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
                  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/image-puller%40asset-management-ci-cd.iam.gserviceaccount.com",
                  "universe_domain": "googleapis.com"
                }'
            EOT
        }
    }
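
A side note on the registries trigger above: as rendered in the plan, the mirrors: and configs: keys sit at different indentation levels, so the YAML written to /etc/rancher/k3s/registries.yaml may not parse as intended. A minimal sketch of the same eu.gcr.io auth with consistent indentation, assuming it is passed in through the module's k3s_registries input (the service-account JSON is a placeholder here):

  k3s_registries = <<-EOT
    mirrors:
      eu.gcr.io:
        endpoint:
          - "https://eu.gcr.io"
    configs:
      eu.gcr.io:
        auth:
          username: _json_key
          password: '<service-account-json>' # placeholder; avoid embedding the real key in plan output
  EOT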

  # module.kube-hetzner.module.agents["1-3-agent-large"].null_resource.zram will be created
  + resource "null_resource" "zram" {
      + id       = (known after apply)
      + triggers = {
          + "zram_size" = ""
        }
    }

  # module.kube-hetzner.module.agents["1-3-agent-large"].random_string.identity_file will be created
  + resource "random_string" "identity_file" {
      + id          = (known after apply)
      + length      = 20
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + numeric     = true
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

  # module.kube-hetzner.module.agents["1-3-agent-large"].random_string.server will be created
  + resource "random_string" "server" {
      + id          = (known after apply)
      + keepers     = {
          + "name" = "h-k3s-test-agent-large"
        }
      + length      = 3
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = false
      + numeric     = false
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

  # module.kube-hetzner.module.agents["1-4-agent-large"].data.cloudinit_config.config will be read during apply
  # (config refers to values not yet known)
 <= data "cloudinit_config" "config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = (known after apply)
          + content_type = "text/cloud-config"
          + filename     = "init.cfg"
        }
    }

  # module.kube-hetzner.module.agents["1-4-agent-large"].hcloud_server.server will be created
  + resource "hcloud_server" "server" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = [
          + 1016130,
        ]
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "143418034"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"     = "h-k3s-test"
          + "engine"      = "k3s"
          + "provisioner" = "terraform"
          + "role"        = "agent_node"
        }
      + location                   = "fsn1"
      + name                       = (known after apply)
      + placement_group_id         = 192472
      + rebuild_protection         = false
      + server_type                = "ccx23"
      + shutdown_before_deletion   = false
      + ssh_keys                   = [
          + "14381223",
        ]
      + status                     = (known after apply)
      + user_data                  = (known after apply)
    }

  # module.kube-hetzner.module.agents["1-4-agent-large"].hcloud_server_network.server will be created
  + resource "hcloud_server_network" "server" {
      + id          = (known after apply)
      + ip          = "10.1.0.105"
      + mac_address = (known after apply)
      + server_id   = (known after apply)
      + subnet_id   = "3236890-10.1.0.0/16"
    }

  # module.kube-hetzner.module.agents["1-4-agent-large"].null_resource.registries will be created
  + resource "null_resource" "registries" {
      + id       = (known after apply)
      + triggers = {
          + "registries" = <<-EOT
                mirrors:
                      eu.gcr.io:
                        endpoint:
                          - "https://eu.gcr.io"
                    configs:
                      eu.gcr.io:
                        auth:
                          username: _json_key
                          password: '{
                  "type": "service_account",
                  "project_id": "asset-management-ci-cd",
                  "private_key_id": "a4ccbc8eddbaea86d207ca85bc6482a288035c6d",
                  "client_email": "image-puller@asset-management-ci-cd.iam.gserviceaccount.com",
                  "client_id": "102058406430119355136",
                  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
                  "token_uri": "https://oauth2.googleapis.com/token",
                  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
                  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/image-puller%40asset-management-ci-cd.iam.gserviceaccount.com",
                  "universe_domain": "googleapis.com"
                }'
            EOT
        }
    }

  # module.kube-hetzner.module.agents["1-4-agent-large"].null_resource.zram will be created
  + resource "null_resource" "zram" {
      + id       = (known after apply)
      + triggers = {
          + "zram_size" = ""
        }
    }

  # module.kube-hetzner.module.agents["1-4-agent-large"].random_string.identity_file will be created
  + resource "random_string" "identity_file" {
      + id          = (known after apply)
      + length      = 20
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + numeric     = true
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

  # module.kube-hetzner.module.agents["1-4-agent-large"].random_string.server will be created
  + resource "random_string" "server" {
      + id          = (known after apply)
      + keepers     = {
          + "name" = "h-k3s-test-agent-large"
        }
      + length      = 3
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = false
      + numeric     = false
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

  # module.kube-hetzner.module.agents["2-0-bots-large"].data.cloudinit_config.config will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "cloudinit_config" "config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = <<-EOT
                #cloud-config

                debug: True

                write_files:

                # Script to rename the private interface to eth1 and unify NetworkManager connection naming
                - path: /etc/cloud/rename_interface.sh
                  content: |
                    #!/bin/bash
                    set -euo pipefail

                    sleep 11

                    INTERFACE=$(ip link show | awk '/^3:/{print $2}' | sed 's/://g')
                    MAC=$(cat /sys/class/net/$INTERFACE/address)

                    cat <<EOF > /etc/udev/rules.d/70-persistent-net.rules
                    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="$MAC", NAME="eth1"
                    EOF

                    ip link set $INTERFACE down
                    ip link set $INTERFACE name eth1
                    ip link set eth1 up

                    eth0_connection=$(nmcli -g GENERAL.CONNECTION device show eth0)
                    nmcli connection modify "$eth0_connection" \
                      con-name eth0 \
                      connection.interface-name eth0

                    eth1_connection=$(nmcli -g GENERAL.CONNECTION device show eth1)
                    nmcli connection modify "$eth1_connection" \
                      con-name eth1 \
                      connection.interface-name eth1

                    systemctl restart NetworkManager
                  permissions: "0744"

                # Disable ssh password authentication
                - content: |
                    Port 2222
                    PasswordAuthentication no
                    X11Forwarding no
                    MaxAuthTries 2
                    AllowTcpForwarding no
                    AllowAgentForwarding no
                    AuthorizedKeysFile .ssh/authorized_keys
                  path: /etc/ssh/sshd_config.d/kube-hetzner.conf

                # Set reboot method as "kured"
                - content: |
                    REBOOT_METHOD=kured
                  path: /etc/transactional-update.conf

                # Create Rancher repo config
                - content: |
                    [rancher-k3s-common-stable]
                    name=Rancher K3s Common (stable)
                    baseurl=https://rpm.rancher.io/k3s/stable/common/microos/noarch
                    enabled=1
                    gpgcheck=1
                    repo_gpgcheck=0
                    gpgkey=https://rpm.rancher.io/public.key
                  path: /etc/zypp/repos.d/rancher-k3s-common.repo

                # Create the kube_hetzner_selinux.te file, that allows in SELinux to not interfere with various needed services
                - path: /root/kube_hetzner_selinux.te
                  content: |
                    module kube_hetzner_selinux 1.0;

                    require {
                      type kernel_t, bin_t, kernel_generic_helper_t, iscsid_t, iscsid_exec_t, var_run_t,
                      init_t, unlabeled_t, systemd_logind_t, systemd_hostnamed_t, container_t,
                      cert_t, container_var_lib_t, etc_t, usr_t, container_file_t, container_log_t,
                      container_share_t, container_runtime_exec_t, container_runtime_t, var_log_t, proc_t;
                      class key { read view };
                      class file { open read execute execute_no_trans create link lock rename write append setattr unlink getattr watch };
                      class sock_file { watch write create unlink };
                      class unix_dgram_socket create;
                      class unix_stream_socket { connectto read write };
                      class dir { add_name create getattr link lock read rename remove_name reparent rmdir setattr unlink search write watch };
                      class lnk_file { read create };
                      class system module_request;
                      class filesystem associate;
                      class bpf map_create;
                    }

                    #============= kernel_generic_helper_t ==============
                    allow kernel_generic_helper_t bin_t:file execute_no_trans;
                    allow kernel_generic_helper_t kernel_t:key { read view };
                    allow kernel_generic_helper_t self:unix_dgram_socket create;

                    #============= iscsid_t ==============
                    allow iscsid_t iscsid_exec_t:file execute;
                    allow iscsid_t var_run_t:sock_file write;
                    allow iscsid_t var_run_t:unix_stream_socket connectto;

                    #============= init_t ==============
                    allow init_t unlabeled_t:dir { add_name remove_name rmdir };
                    allow init_t unlabeled_t:lnk_file create;
                    allow init_t container_t:file { open read };

                    #============= systemd_logind_t ==============
                    allow systemd_logind_t unlabeled_t:dir search;

                    #============= systemd_hostnamed_t ==============
                    allow systemd_hostnamed_t unlabeled_t:dir search;

                    #============= container_t ==============
                    # Basic file and directory operations for specific types
                    allow container_t cert_t:dir read;
                    allow container_t cert_t:lnk_file read;
                    allow container_t cert_t:file { read open };
                    allow container_t container_var_lib_t:file { create open read write rename lock };
                    allow container_t etc_t:dir { add_name remove_name write create setattr };
                    allow container_t etc_t:sock_file { create unlink };
                    allow container_t usr_t:dir { add_name create getattr link lock read rename remove_name reparent rmdir setattr unlink search write };
                    allow container_t usr_t:file { append create execute getattr link lock read rename setattr unlink write };

                    # Additional rules for container_t
                    allow container_t container_file_t:file { open read write append getattr setattr };
                    allow container_t container_file_t:sock_file watch;
                    allow container_t container_log_t:file { open read write append getattr setattr };
                    allow container_t container_share_t:dir { read write add_name remove_name };
                    allow container_t container_share_t:file { read write create unlink };
                    allow container_t container_runtime_exec_t:file { read execute execute_no_trans open };
                    allow container_t container_runtime_t:unix_stream_socket { connectto read write };
                    allow container_t kernel_t:system module_request;
                    allow container_t container_log_t:dir { read watch };
                    allow container_t container_log_t:file { open read watch };
                    allow container_t container_log_t:lnk_file read;
                    allow container_t var_log_t:dir { add_name write };
                    allow container_t var_log_t:file { create lock open read setattr write };
                    allow container_t var_log_t:dir remove_name;
                    allow container_t var_log_t:file unlink;
                    allow container_t proc_t:filesystem associate;
                    allow container_t self:bpf map_create;

                # Create the k3s registries file if needed

                # Create k3s registries file
                - content:==
                  encoding: base64
                  path: /etc/rancher/k3s/registries.yaml

                # Apply new DNS config

                # Set prepare for manual dns config
                - content: |
                    [main]
                    dns=none
                  path: /etc/NetworkManager/conf.d/dns.conf

                - content: |
                        nameserver 1.1.1.1
                        nameserver 8.8.8.8
                        nameserver 9.9.9.9

                  path: /etc/resolv.conf
                  permissions: '0644'

                # Add ssh authorized keys
                ssh_authorized_keys:
                  - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOhVCnngRp4ZdkMIcWlm6JEcNkre7KowKrRPVR/opPRk maxi@Maksims-MacBook-Pro.local

                # Resize /var, not /, as that's the last partition in MicroOS image.
                growpart:
                    devices: ["/var"]

                # Make sure the hostname is set correctly
                hostname: h-k3s-test-bots-large-pck
                preserve_hostname: true

                runcmd:

                # ensure that /var uses full available disk size, thanks to btrfs this is easy
                - [btrfs, 'filesystem', 'resize', 'max', '/var']

                # SELinux permission for the SSH alternative port

                # SELinux permission for the SSH alternative port.
                - [semanage, port, '-a', '-t', ssh_port_t, '-p', tcp, 2222]

                # Create and apply the necessary SELinux module for kube-hetzner
                - [checkmodule, '-M', '-m', '-o', '/root/kube_hetzner_selinux.mod', '/root/kube_hetzner_selinux.te']
                - ['semodule_package', '-o', '/root/kube_hetzner_selinux.pp', '-m', '/root/kube_hetzner_selinux.mod']
                - [semodule, '-i', '/root/kube_hetzner_selinux.pp']
                - [setsebool, '-P', 'virt_use_samba', '1']
                - [setsebool, '-P', 'domain_kernel_load_modules', '1']

                # Disable rebootmgr service as we use kured instead
                - [systemctl, disable, '--now', 'rebootmgr.service']

                # Set the dns manually
                - [systemctl, 'reload', 'NetworkManager']

                # Bounds the amount of logs that can survive on the system
                - [sed, '-i', 's/#SystemMaxUse=/SystemMaxUse=3G/g', /etc/systemd/journald.conf]
                - [sed, '-i', 's/#MaxRetentionSec=/MaxRetentionSec=1week/g', /etc/systemd/journald.conf]

                # Reduces the default number of snapshots from 2-10 number limit, to 4 and from 4-10 number limit important, to 2
                - [sed, '-i', 's/NUMBER_LIMIT="2-10"/NUMBER_LIMIT="4"/g', /etc/snapper/configs/root]
                - [sed, '-i', 's/NUMBER_LIMIT_IMPORTANT="4-10"/NUMBER_LIMIT_IMPORTANT="3"/g', /etc/snapper/configs/root]

                # Allow network interface
                - [chmod, '+x', '/etc/cloud/rename_interface.sh']

                # Restart the sshd service to apply the new config
                - [systemctl, 'restart', 'sshd']

                # Make sure the network is up
                - [systemctl, restart, NetworkManager]
                - [systemctl, status, NetworkManager]
                - [ip, route, add, default, via, '172.31.1.1', dev, 'eth0']

                # Cleanup some logs
                - [truncate, '-s', '0', '/var/log/audit/audit.log']
            EOT
          + content_type = "text/cloud-config"
          + filename     = "init.cfg"
        }
    }

  # module.kube-hetzner.module.agents["2-1-bots-large"].data.cloudinit_config.config will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "cloudinit_config" "config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = <<-EOT
                #cloud-config

                debug: True

                write_files:

                # Script to rename the private interface to eth1 and unify NetworkManager connection naming
                - path: /etc/cloud/rename_interface.sh
                  content: |
                    #!/bin/bash
                    set -euo pipefail

                    sleep 11

                    INTERFACE=$(ip link show | awk '/^3:/{print $2}' | sed 's/://g')
                    MAC=$(cat /sys/class/net/$INTERFACE/address)

                    cat <<EOF > /etc/udev/rules.d/70-persistent-net.rules
                    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="$MAC", NAME="eth1"
                    EOF

                    ip link set $INTERFACE down
                    ip link set $INTERFACE name eth1
                    ip link set eth1 up

                    eth0_connection=$(nmcli -g GENERAL.CONNECTION device show eth0)
                    nmcli connection modify "$eth0_connection" \
                      con-name eth0 \
                      connection.interface-name eth0

                    eth1_connection=$(nmcli -g GENERAL.CONNECTION device show eth1)
                    nmcli connection modify "$eth1_connection" \
                      con-name eth1 \
                      connection.interface-name eth1

                    systemctl restart NetworkManager
                  permissions: "0744"

                # Disable ssh password authentication
                - content: |
                    Port 2222
                    PasswordAuthentication no
                    X11Forwarding no
                    MaxAuthTries 2
                    AllowTcpForwarding no
                    AllowAgentForwarding no
                    AuthorizedKeysFile .ssh/authorized_keys
                  path: /etc/ssh/sshd_config.d/kube-hetzner.conf

                # Set reboot method as "kured"
                - content: |
                    REBOOT_METHOD=kured
                  path: /etc/transactional-update.conf

                # Create Rancher repo config
                - content: |
                    [rancher-k3s-common-stable]
                    name=Rancher K3s Common (stable)
                    baseurl=https://rpm.rancher.io/k3s/stable/common/microos/noarch
                    enabled=1
                    gpgcheck=1
                    repo_gpgcheck=0
                    gpgkey=https://rpm.rancher.io/public.key
                  path: /etc/zypp/repos.d/rancher-k3s-common.repo

                # Create the kube_hetzner_selinux.te file, that allows in SELinux to not interfere with various needed services
                - path: /root/kube_hetzner_selinux.te
                  content: |
                    module kube_hetzner_selinux 1.0;

                    require {
                      type kernel_t, bin_t, kernel_generic_helper_t, iscsid_t, iscsid_exec_t, var_run_t,
                      init_t, unlabeled_t, systemd_logind_t, systemd_hostnamed_t, container_t,
                      cert_t, container_var_lib_t, etc_t, usr_t, container_file_t, container_log_t,
                      container_share_t, container_runtime_exec_t, container_runtime_t, var_log_t, proc_t;
                      class key { read view };
                      class file { open read execute execute_no_trans create link lock rename write append setattr unlink getattr watch };
                      class sock_file { watch write create unlink };
                      class unix_dgram_socket create;
                      class unix_stream_socket { connectto read write };
                      class dir { add_name create getattr link lock read rename remove_name reparent rmdir setattr unlink search write watch };
                      class lnk_file { read create };
                      class system module_request;
                      class filesystem associate;
                      class bpf map_create;
                    }

                    #============= kernel_generic_helper_t ==============
                    allow kernel_generic_helper_t bin_t:file execute_no_trans;
                    allow kernel_generic_helper_t kernel_t:key { read view };
                    allow kernel_generic_helper_t self:unix_dgram_socket create;

                    #============= iscsid_t ==============
                    allow iscsid_t iscsid_exec_t:file execute;
                    allow iscsid_t var_run_t:sock_file write;
                    allow iscsid_t var_run_t:unix_stream_socket connectto;

                    #============= init_t ==============
                    allow init_t unlabeled_t:dir { add_name remove_name rmdir };
                    allow init_t unlabeled_t:lnk_file create;
                    allow init_t container_t:file { open read };

                    #============= systemd_logind_t ==============
                    allow systemd_logind_t unlabeled_t:dir search;

                    #============= systemd_hostnamed_t ==============
                    allow systemd_hostnamed_t unlabeled_t:dir search;

                    #============= container_t ==============
                    # Basic file and directory operations for specific types
                    allow container_t cert_t:dir read;
                    allow container_t cert_t:lnk_file read;
                    allow container_t cert_t:file { read open };
                    allow container_t container_var_lib_t:file { create open read write rename lock };
                    allow container_t etc_t:dir { add_name remove_name write create setattr };
                    allow container_t etc_t:sock_file { create unlink };
                    allow container_t usr_t:dir { add_name create getattr link lock read rename remove_name reparent rmdir setattr unlink search write };
                    allow container_t usr_t:file { append create execute getattr link lock read rename setattr unlink write };

                    # Additional rules for container_t
                    allow container_t container_file_t:file { open read write append getattr setattr };
                    allow container_t container_file_t:sock_file watch;
                    allow container_t container_log_t:file { open read write append getattr setattr };
                    allow container_t container_share_t:dir { read write add_name remove_name };
                    allow container_t container_share_t:file { read write create unlink };
                    allow container_t container_runtime_exec_t:file { read execute execute_no_trans open };
                    allow container_t container_runtime_t:unix_stream_socket { connectto read write };
                    allow container_t kernel_t:system module_request;
                    allow container_t container_log_t:dir { read watch };
                    allow container_t container_log_t:file { open read watch };
                    allow container_t container_log_t:lnk_file read;
                    allow container_t var_log_t:dir { add_name write };
                    allow container_t var_log_t:file { create lock open read setattr write };
                    allow container_t var_log_t:dir remove_name;
                    allow container_t var_log_t:file unlink;
                    allow container_t proc_t:filesystem associate;
                    allow container_t self:bpf map_create;

                # Create the k3s registries file if needed

                # Create k3s registries file
                - content:==
                  encoding: base64
                  path: /etc/rancher/k3s/registries.yaml

                # Apply new DNS config

                # Set prepare for manual dns config
                - content: |
                    [main]
                    dns=none
                  path: /etc/NetworkManager/conf.d/dns.conf

                - content: |
                        nameserver 1.1.1.1
                        nameserver 8.8.8.8
                        nameserver 9.9.9.9

                  path: /etc/resolv.conf
                  permissions: '0644'

                # Add ssh authorized keys
                ssh_authorized_keys:
                  - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOhVCnngRp4ZdkMIcWlm6JEcNkre7KowKrRPVR/opPRk maxi@Maksims-MacBook-Pro.local

                # Resize /var, not /, as that's the last partition in MicroOS image.
                growpart:
                    devices: ["/var"]

                # Make sure the hostname is set correctly
                hostname: h-k3s-test-bots-large-cos
                preserve_hostname: true

                runcmd:

                # ensure that /var uses full available disk size, thanks to btrfs this is easy
                - [btrfs, 'filesystem', 'resize', 'max', '/var']

                # SELinux permission for the SSH alternative port

                # SELinux permission for the SSH alternative port.
                - [semanage, port, '-a', '-t', ssh_port_t, '-p', tcp, 2222]

                # Create and apply the necessary SELinux module for kube-hetzner
                - [checkmodule, '-M', '-m', '-o', '/root/kube_hetzner_selinux.mod', '/root/kube_hetzner_selinux.te']
                - ['semodule_package', '-o', '/root/kube_hetzner_selinux.pp', '-m', '/root/kube_hetzner_selinux.mod']
                - [semodule, '-i', '/root/kube_hetzner_selinux.pp']
                - [setsebool, '-P', 'virt_use_samba', '1']
                - [setsebool, '-P', 'domain_kernel_load_modules', '1']

                # Disable rebootmgr service as we use kured instead
                - [systemctl, disable, '--now', 'rebootmgr.service']

                # Set the dns manually
                - [systemctl, 'reload', 'NetworkManager']

                # Bounds the amount of logs that can survive on the system
                - [sed, '-i', 's/#SystemMaxUse=/SystemMaxUse=3G/g', /etc/systemd/journald.conf]
                - [sed, '-i', 's/#MaxRetentionSec=/MaxRetentionSec=1week/g', /etc/systemd/journald.conf]

                # Reduces the default number of snapshots from 2-10 number limit, to 4 and from 4-10 number limit important, to 2
                - [sed, '-i', 's/NUMBER_LIMIT="2-10"/NUMBER_LIMIT="4"/g', /etc/snapper/configs/root]
                - [sed, '-i', 's/NUMBER_LIMIT_IMPORTANT="4-10"/NUMBER_LIMIT_IMPORTANT="3"/g', /etc/snapper/configs/root]

                # Allow network interface
                - [chmod, '+x', '/etc/cloud/rename_interface.sh']

                # Restart the sshd service to apply the new config
                - [systemctl, 'restart', 'sshd']

                # Make sure the network is up
                - [systemctl, restart, NetworkManager]
                - [systemctl, status, NetworkManager]
                - [ip, route, add, default, via, '172.31.1.1', dev, 'eth0']

                # Cleanup some logs
                - [truncate, '-s', '0', '/var/log/audit/audit.log']
            EOT
          + content_type = "text/cloud-config"
          + filename     = "init.cfg"
        }
    }

  # module.kube-hetzner.module.agents["2-2-bots-large"].data.cloudinit_config.config will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "cloudinit_config" "config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = <<-EOT
                #cloud-config

                debug: True

                write_files:

                # Script to rename the private interface to eth1 and unify NetworkManager connection naming
                - path: /etc/cloud/rename_interface.sh
                  content: |
                    #!/bin/bash
                    set -euo pipefail

                    sleep 11

                    INTERFACE=$(ip link show | awk '/^3:/{print $2}' | sed 's/://g')
                    MAC=$(cat /sys/class/net/$INTERFACE/address)

                    cat <<EOF > /etc/udev/rules.d/70-persistent-net.rules
                    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="$MAC", NAME="eth1"
                    EOF

                    ip link set $INTERFACE down
                    ip link set $INTERFACE name eth1
                    ip link set eth1 up

                    eth0_connection=$(nmcli -g GENERAL.CONNECTION device show eth0)
                    nmcli connection modify "$eth0_connection" \
                      con-name eth0 \
                      connection.interface-name eth0

                    eth1_connection=$(nmcli -g GENERAL.CONNECTION device show eth1)
                    nmcli connection modify "$eth1_connection" \
                      con-name eth1 \
                      connection.interface-name eth1

                    systemctl restart NetworkManager
                  permissions: "0744"

                # Disable ssh password authentication
                - content: |
                    Port 2222
                    PasswordAuthentication no
                    X11Forwarding no
                    MaxAuthTries 2
                    AllowTcpForwarding no
                    AllowAgentForwarding no
                    AuthorizedKeysFile .ssh/authorized_keys
                  path: /etc/ssh/sshd_config.d/kube-hetzner.conf

                # Set reboot method as "kured"
                - content: |
                    REBOOT_METHOD=kured
                  path: /etc/transactional-update.conf

                # Create Rancher repo config
                - content: |
                    [rancher-k3s-common-stable]
                    name=Rancher K3s Common (stable)
                    baseurl=https://rpm.rancher.io/k3s/stable/common/microos/noarch
                    enabled=1
                    gpgcheck=1
                    repo_gpgcheck=0
                    gpgkey=https://rpm.rancher.io/public.key
                  path: /etc/zypp/repos.d/rancher-k3s-common.repo

                # Create the kube_hetzner_selinux.te file, that allows in SELinux to not interfere with various needed services
                - path: /root/kube_hetzner_selinux.te
                  content: |
                    module kube_hetzner_selinux 1.0;

                    require {
                      type kernel_t, bin_t, kernel_generic_helper_t, iscsid_t, iscsid_exec_t, var_run_t,
                      init_t, unlabeled_t, systemd_logind_t, systemd_hostnamed_t, container_t,
                      cert_t, container_var_lib_t, etc_t, usr_t, container_file_t, container_log_t,
                      container_share_t, container_runtime_exec_t, container_runtime_t, var_log_t, proc_t;
                      class key { read view };
                      class file { open read execute execute_no_trans create link lock rename write append setattr unlink getattr watch };
                      class sock_file { watch write create unlink };
                      class unix_dgram_socket create;
                      class unix_stream_socket { connectto read write };
                      class dir { add_name create getattr link lock read rename remove_name reparent rmdir setattr unlink search write watch };
                      class lnk_file { read create };
                      class system module_request;
                      class filesystem associate;
                      class bpf map_create;
                    }

                    #============= kernel_generic_helper_t ==============
                    allow kernel_generic_helper_t bin_t:file execute_no_trans;
                    allow kernel_generic_helper_t kernel_t:key { read view };
                    allow kernel_generic_helper_t self:unix_dgram_socket create;

                    #============= iscsid_t ==============
                    allow iscsid_t iscsid_exec_t:file execute;
                    allow iscsid_t var_run_t:sock_file write;
                    allow iscsid_t var_run_t:unix_stream_socket connectto;

                    #============= init_t ==============
                    allow init_t unlabeled_t:dir { add_name remove_name rmdir };
                    allow init_t unlabeled_t:lnk_file create;
                    allow init_t container_t:file { open read };

                    #============= systemd_logind_t ==============
                    allow systemd_logind_t unlabeled_t:dir search;

                    #============= systemd_hostnamed_t ==============
                    allow systemd_hostnamed_t unlabeled_t:dir search;

                    #============= container_t ==============
                    # Basic file and directory operations for specific types
                    allow container_t cert_t:dir read;
                    allow container_t cert_t:lnk_file read;
                    allow container_t cert_t:file { read open };
                    allow container_t container_var_lib_t:file { create open read write rename lock };
                    allow container_t etc_t:dir { add_name remove_name write create setattr };
                    allow container_t etc_t:sock_file { create unlink };
                    allow container_t usr_t:dir { add_name create getattr link lock read rename remove_name reparent rmdir setattr unlink search write };
                    allow container_t usr_t:file { append create execute getattr link lock read rename setattr unlink write };

                    # Additional rules for container_t
                    allow container_t container_file_t:file { open read write append getattr setattr };
                    allow container_t container_file_t:sock_file watch;
                    allow container_t container_log_t:file { open read write append getattr setattr };
                    allow container_t container_share_t:dir { read write add_name remove_name };
                    allow container_t container_share_t:file { read write create unlink };
                    allow container_t container_runtime_exec_t:file { read execute execute_no_trans open };
                    allow container_t container_runtime_t:unix_stream_socket { connectto read write };
                    allow container_t kernel_t:system module_request;
                    allow container_t container_log_t:dir { read watch };
                    allow container_t container_log_t:file { open read watch };
                    allow container_t container_log_t:lnk_file read;
                    allow container_t var_log_t:dir { add_name write };
                    allow container_t var_log_t:file { create lock open read setattr write };
                    allow container_t var_log_t:dir remove_name;
                    allow container_t var_log_t:file unlink;
                    allow container_t proc_t:filesystem associate;
                    allow container_t self:bpf map_create;

                # Create the k3s registries file if needed

                # Create k3s registries file
                - content:==
                  encoding: base64
                  path: /etc/rancher/k3s/registries.yaml

                # Apply new DNS config

                # Set prepare for manual dns config
                - content: |
                    [main]
                    dns=none
                  path: /etc/NetworkManager/conf.d/dns.conf

                - content: |
                        nameserver 1.1.1.1
                        nameserver 8.8.8.8
                        nameserver 9.9.9.9

                  path: /etc/resolv.conf
                  permissions: '0644'

                # Add ssh authorized keys
                ssh_authorized_keys:
                  - ssh-ed25519 

                # Resize /var, not /, as that's the last partition in MicroOS image.
                growpart:
                    devices: ["/var"]

                # Make sure the hostname is set correctly
                hostname: h-k3s-test-bots-large-uzx
                preserve_hostname: true

                runcmd:

                # ensure that /var uses full available disk size, thanks to btrfs this is easy
                - [btrfs, 'filesystem', 'resize', 'max', '/var']

                # SELinux permission for the SSH alternative port

                # SELinux permission for the SSH alternative port.
                - [semanage, port, '-a', '-t', ssh_port_t, '-p', tcp, 2222]

                # Create and apply the necessary SELinux module for kube-hetzner
                - [checkmodule, '-M', '-m', '-o', '/root/kube_hetzner_selinux.mod', '/root/kube_hetzner_selinux.te']
                - ['semodule_package', '-o', '/root/kube_hetzner_selinux.pp', '-m', '/root/kube_hetzner_selinux.mod']
                - [semodule, '-i', '/root/kube_hetzner_selinux.pp']
                - [setsebool, '-P', 'virt_use_samba', '1']
                - [setsebool, '-P', 'domain_kernel_load_modules', '1']

                # Disable rebootmgr service as we use kured instead
                - [systemctl, disable, '--now', 'rebootmgr.service']

                # Set the dns manually
                - [systemctl, 'reload', 'NetworkManager']

                # Bounds the amount of logs that can survive on the system
                - [sed, '-i', 's/#SystemMaxUse=/SystemMaxUse=3G/g', /etc/systemd/journald.conf]
                - [sed, '-i', 's/#MaxRetentionSec=/MaxRetentionSec=1week/g', /etc/systemd/journald.conf]

                # Reduces the default number of snapshots from 2-10 number limit, to 4 and from 4-10 number limit important, to 2
                - [sed, '-i', 's/NUMBER_LIMIT="2-10"/NUMBER_LIMIT="4"/g', /etc/snapper/configs/root]
                - [sed, '-i', 's/NUMBER_LIMIT_IMPORTANT="4-10"/NUMBER_LIMIT_IMPORTANT="3"/g', /etc/snapper/configs/root]

                # Allow network interface
                - [chmod, '+x', '/etc/cloud/rename_interface.sh']

                # Restart the sshd service to apply the new config
                - [systemctl, 'restart', 'sshd']

                # Make sure the network is up
                - [systemctl, restart, NetworkManager]
                - [systemctl, status, NetworkManager]
                - [ip, route, add, default, via, '172.31.1.1', dev, 'eth0']

                # Cleanup some logs
                - [truncate, '-s', '0', '/var/log/audit/audit.log']
            EOT
          + content_type = "text/cloud-config"
          + filename     = "init.cfg"
        }
    }

  # module.kube-hetzner.module.agents["2-3-bots-large"].data.cloudinit_config.config will be read during apply
  # (config refers to values not yet known)
 <= data "cloudinit_config" "config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = (known after apply)
          + content_type = "text/cloud-config"
          + filename     = "init.cfg"
        }
    }

  # module.kube-hetzner.module.agents["2-3-bots-large"].hcloud_server.server will be created
  + resource "hcloud_server" "server" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = [
          + 1016130,
        ]
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "143418034"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"     = "h-k3s-test"
          + "engine"      = "k3s"
          + "provisioner" = "terraform"
          + "role"        = "agent_node"
        }
      + location                   = "fsn1"
      + name                       = (known after apply)
      + placement_group_id         = 192472
      + rebuild_protection         = false
      + server_type                = "ccx23"
      + shutdown_before_deletion   = false
      + ssh_keys                   = [
          + "14381223",
        ]
      + status                     = (known after apply)
      + user_data                  = (known after apply)
    }

  # module.kube-hetzner.module.agents["2-3-bots-large"].hcloud_server_network.server will be created
  + resource "hcloud_server_network" "server" {
      + id          = (known after apply)
      + ip          = "10.2.0.104"
      + mac_address = (known after apply)
      + server_id   = (known after apply)
      + subnet_id   = "3236890-10.2.0.0/16"
    }

  # module.kube-hetzner.module.agents["2-3-bots-large"].null_resource.registries will be created
  + resource "null_resource" "registries" {
      + id       = (known after apply)
      + triggers = {
          + "registries" = <<-EOT
                mirrors:
                      eu.gcr.io:
                        endpoint:
                          - "https://eu.gcr.io"
                    configs:
                      eu.gcr.io:
                        auth:
                          username: _json_key
                          password: '{
                  "type": "service_account",
                  "project_id": "asset-management-ci-cd",
                  "private_key_id": "a4ccbc8eddbaea86d207ca85bc6482a288035c6d",
                  "private_key": "-----BEGIN PRIVATE KEY-----\n-----END PRIVATE KEY-----\n",
                  "client_email": "image-puller@asset-management-ci-cd.iam.gserviceaccount.com",
                  "client_id": "102058406430119355136",
                  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
                  "token_uri": "https://oauth2.googleapis.com/token",
                  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
                  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/image-puller%40asset-management-ci-cd.iam.gserviceaccount.com",
                  "universe_domain": "googleapis.com"
                }'
            EOT
        }
    }

  # module.kube-hetzner.module.agents["2-3-bots-large"].null_resource.zram will be created
  + resource "null_resource" "zram" {
      + id       = (known after apply)
      + triggers = {
          + "zram_size" = ""
        }
    }

  # module.kube-hetzner.module.agents["2-3-bots-large"].random_string.identity_file will be created
  + resource "random_string" "identity_file" {
      + id          = (known after apply)
      + length      = 20
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + numeric     = true
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

  # module.kube-hetzner.module.agents["2-3-bots-large"].random_string.server will be created
  + resource "random_string" "server" {
      + id          = (known after apply)
      + keepers     = {
          + "name" = "h-k3s-test-bots-large"
        }
      + length      = 3
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = false
      + numeric     = false
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

  # module.kube-hetzner.module.agents["2-4-bots-large"].data.cloudinit_config.config will be read during apply
  # (config refers to values not yet known)
 <= data "cloudinit_config" "config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = (known after apply)
          + content_type = "text/cloud-config"
          + filename     = "init.cfg"
        }
    }

  # module.kube-hetzner.module.agents["2-4-bots-large"].hcloud_server.server will be created
  + resource "hcloud_server" "server" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = [
          + 1016130,
        ]
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "143418034"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"     = "h-k3s-test"
          + "engine"      = "k3s"
          + "provisioner" = "terraform"
          + "role"        = "agent_node"
        }
      + location                   = "fsn1"
      + name                       = (known after apply)
      + placement_group_id         = 192472
      + rebuild_protection         = false
      + server_type                = "ccx23"
      + shutdown_before_deletion   = false
      + ssh_keys                   = [
          + "14381223",
        ]
      + status                     = (known after apply)
      + user_data                  = (known after apply)
    }

  # module.kube-hetzner.module.agents["2-4-bots-large"].hcloud_server_network.server will be created
  + resource "hcloud_server_network" "server" {
      + id          = (known after apply)
      + ip          = "10.2.0.105"
      + mac_address = (known after apply)
      + server_id   = (known after apply)
      + subnet_id   = "3236890-10.2.0.0/16"
    }

  # module.kube-hetzner.module.agents["2-4-bots-large"].null_resource.registries will be created
  + resource "null_resource" "registries" {
      + id       = (known after apply)
      + triggers = {
          + "registries" = <<-EOT
                mirrors:
                      eu.gcr.io:
                        endpoint:
                          - "https://eu.gcr.io"
                    configs:
                      eu.gcr.io:
                        auth:
                          username: _json_key
                          password: '{
                  "type": "service_account",
                  "project_id": "asset-management-ci-cd",
                  "private_key_id": "a4ccbc8eddbaea86d207ca85bc6482a288035c6d",
                  "private_key": "-----BEGIN PRIVATE KEY-----\n-----END PRIVATE KEY-----\n",
                  "client_email": "image-puller@asset-management-ci-cd.iam.gserviceaccount.com",
                  "client_id": "102058406430119355136",
                  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
                  "token_uri": "https://oauth2.googleapis.com/token",
                  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
                  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/image-puller%40asset-management-ci-cd.iam.gserviceaccount.com",
                  "universe_domain": "googleapis.com"
                }'
            EOT
        }
    }

  # module.kube-hetzner.module.agents["2-4-bots-large"].null_resource.zram will be created
  + resource "null_resource" "zram" {
      + id       = (known after apply)
      + triggers = {
          + "zram_size" = ""
        }
    }

  # module.kube-hetzner.module.agents["2-4-bots-large"].random_string.identity_file will be created
  + resource "random_string" "identity_file" {
      + id          = (known after apply)
      + length      = 20
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + numeric     = true
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

  # module.kube-hetzner.module.agents["2-4-bots-large"].random_string.server will be created
  + resource "random_string" "server" {
      + id          = (known after apply)
      + keepers     = {
          + "name" = "h-k3s-test-bots-large"
        }
      + length      = 3
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = false
      + numeric     = false
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

Plan: 49 to add, 0 to change, 0 to destroy.
CroutonDigital commented 10 months ago

I tried again, but I get the same error: placement group 211529 contains already 10 servers (service_error)

mysticaltech commented 10 months ago

@CroutonDigital Thank you for trying. @mnencia cornered the issue; I will work on a fix ASAP. I'll keep you and @maximen39 posted, give me 48h tops.

CroutonDigital commented 10 months ago

@mysticaltech when can I try testing your fix?

mysticaltech commented 10 months ago

@CroutonDigital I will see if I can finish it this weekend, sorry for the delay 🤞

CroutonDigital commented 10 months ago

Hi, @mysticaltech

Today I tried a fix on the branch fix/placement-group-logic, file locals.tf, lines 159 and 160:

agent_nodes_indices         = { for node_name, node_details in local.agent_nodes : node_name => ceil(node_details.index / 10) }
control_plane_nodes_indices = { for node_name, node_details in local.control_plane_nodes : node_name => ceil(node_details.index / 10) }

I changed the rounding function from floor to ceil and applied; now I have 2 placement groups with the servers distributed between them.
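
For reference, here is a minimal sketch (plain Terraform locals, not the module's actual code) of how the two rounding functions bucket contiguous indices 0 through 11:

# Sketch only: compares floor vs ceil bucketing for contiguous indices.
# The module's real node indices are not guaranteed to be contiguous.
locals {
  indices = range(0, 12) # 0 .. 11

  floor_buckets = { for i in local.indices : tostring(i) => floor(i / 10) } # 0-9 -> 0, 10-11 -> 1
  ceil_buckets  = { for i in local.indices : tostring(i) => ceil(i / 10) }  # 0 -> 0, 1-10 -> 1, 11 -> 2
}

output "floor_buckets" {
  value = local.floor_buckets
}

output "ceil_buckets" {
  value = local.ceil_buckets
}

Both functions put at most ten contiguous indices into a bucket, just with shifted boundaries; as the next comment notes, though, the real indices are not sequential, so neither rounding alone guarantees at most 10 servers per group.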

(Screenshot attached: 2024-01-16 at 14:38:56)

CroutonDigital commented 10 months ago

This is not a correct fix, though, because the index is generated randomly.

mysticaltech commented 10 months ago

@CroutonDigital Yes. If you want to try to fix it, please go to the PR and look for @mnencia's explanations. He cornered the issue.

Sorry for the delay on my part; I did not find the time. If you push a PR, please point it at the open PR branch. Otherwise, I will try to address the issue this week.

valkenburg-prevue-ch commented 9 months ago

Hey all, I'm coming here through this comment: https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/pull/1185#issuecomment-1908443148.

I've been reading this issue (and the closed PR #1161), and if I understand correctly, the problem is that nodes are packed into a single placement group, which Hetzner caps at 10 servers.

Would it make sense to scale the number of placement groups with the number of nodepools? Let each nodepool have its own placement groups, where the number of placement groups == ceil(number of nodes in the pool / 10).

I don't know the limit on the number of placement groups, or whether this is a bad idea from a cluster-design perspective.
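
For illustration, here is a minimal standalone sketch of that idea in plain Terraform; the pools map, node counts, and resource names are hypothetical, not the module's actual variables:

# Sketch only: one set of placement groups per nodepool,
# ceil(node count / 10) groups each. Provider authentication is omitted.
terraform {
  required_providers {
    hcloud = { source = "hetznercloud/hcloud" }
  }
}

locals {
  # Hypothetical nodepool => node count map.
  pools = { "agent-large" = 4, "bots-large" = 12 }

  # One key per (pool, group index) pair, e.g. "bots-large-0", "bots-large-1".
  pool_groups = merge([
    for pool, n in local.pools : {
      for g in range(ceil(n / 10)) : "${pool}-${g}" => pool
    }
  ]...)
}

resource "hcloud_placement_group" "per_pool" {
  for_each = local.pool_groups
  name     = "pg-${each.key}"
  type     = "spread"
}

# A node with index j inside pool p would then reference:
#   placement_group_id = hcloud_placement_group.per_pool["${p}-${floor(j / 10)}"].id

Whether this stays within Hetzner's project limits is exactly the question answered in the next comment.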

valkenburg-prevue-ch commented 9 months ago

OK, I found the limits (https://docs.hetzner.com/cloud/placement-groups/overview#limits): 50 placement groups per project and 10 servers per placement group. So what I wrote above could work within those limits.

mysticaltech commented 9 months ago

@CroutonDigital As the old saying goes, better late than never. Thanks to @valkenburg-prevue-ch for his invaluable help, there is now a way to make this work for you: please see the new placement group customization options in kube.tf.example. This has been released in v2.12.

Please let us know!
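
For anyone landing here later, here is a hedged example of what the per-nodepool customization looks like; the placement_group attribute name is taken from kube.tf.example and should be double-checked against the version you run (v2.12+):

# Assumed syntax, based on kube.tf.example in v2.12+; verify before use.
agent_nodepools = [
  {
    name            = "example-pool",
    server_type     = "cpx21",
    location        = "fsn1",
    labels          = [],
    taints          = [],
    count           = 6,
    placement_group = "example-pool"
  },
]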

mysticaltech commented 3 months ago

@CroutonDigital GitHub's security scan detected a possible Google key leak above. I edited out the value I thought was sensitive, but please check just in case; if it really was a leak, you may want to revoke the credential.