kube-hetzner / terraform-hcloud-kube-hetzner

Optimized and Maintenance-free Kubernetes on Hetzner Cloud in one command!
MIT License

Calico is broken #574

Closed donydonald1 closed 1 year ago

donydonald1 commented 1 year ago

Four days ago everything was working fine, but today I am getting this error and I'm not sure why.

module.kube-hetzner.null_resource.kustomization (remote-exec): error: accumulating resources: accumulation err='accumulating resources from 'https://projectcalico.docs.tigera.io/manifests/calico.yaml': URL is a git repository': no 'git' program on path: exec: "git": executable file not found in $PATH
╷
│ Error: remote-exec provisioner error
│ 
│   with module.kube-hetzner.null_resource.kustomization,
│   on .terraform/modules/kube-hetzner/init.tf line 247, in resource "null_resource" "kustomization":
│  247:   provisioner "remote-exec" {
│ 
│ error executing "/tmp/terraform_966719265.sh": Process exited with status 1
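
For context, the kustomization step that fails pulls the Calico manifest straight from that URL. Below is a minimal sketch (not the module's exact file) of a kustomization that references it the same way; when the URL stops serving raw YAML, kustomize appears to fall back to treating it as a remote git repository, which is why it then needs git on the PATH.

# kustomization.yaml (hypothetical sketch, for illustration only)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # If this URL no longer returns plain YAML, kustomize tries to resolve it
  # as a git repository instead, and the clone fails when git is not installed.
  - https://projectcalico.docs.tigera.io/manifests/calico.yaml
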
aleksasiriski commented 1 year ago

no 'git' program on path: exec: "git": executable file not found in $PATH

Do you have git installed?

donydonald1 commented 1 year ago

This is the first time I am seeing this issue. Aren't the nodes supposed to have git installed, or doesn't the module ensure it's installed? This error only appears when using Calico as the CNI; Cilium and Flannel work just fine with a successful apply.

aleksasiriski commented 1 year ago

I thought it was related to your OS, but if other CNIs work then that's not the case. Calico is out of my scope, so someone else will have to help you. In the meantime, please send your kube.tf without sensitive info.

@mysticaltech

donydonald1 commented 1 year ago
module "kube-hetzner" {
  providers = {
    hcloud = hcloud
  }
  hcloud_token = var.hcloud_token

  # * For local dev, path to the git repo
  # source = "../../kube-hetzner/"
  # If you want to use the latest master branch
  # source = "github.com/kube-hetzner/terraform-hcloud-kube-hetzner"
  # For normal use, this is the path to the terraform registry
  source  = "kube-hetzner/kube-hetzner/hcloud"
  version = "1.9.0"

  # Note that some values, notably "location" and "public_key" have no effect after initializing the cluster.
  # This is to keep Terraform from re-provisioning all nodes at once, which would lose data. If you want to update
  # those, you should instead change the value here and manually re-provision each node. Grep for "lifecycle".

  # Customize the SSH port (by default 22)
  # ssh_port = 2222

  # * Your ssh public key
  ssh_public_key = file("~/.ssh/id_rsa.pub")
  # * Your private key. It must be set to "ssh_private_key = null" when you want to use ssh-agent, e.g. for a Yubikey-like device authentication or an SSH key-pair with a passphrase.
  # For more details on SSH see https://github.com/kube-hetzner/kube-hetzner/blob/master/docs/ssh.md
  ssh_private_key = file("~/.ssh/id_rsa")
  # You can add additional SSH public Keys to grant other team members root access to your cluster nodes.
  # ssh_additional_public_keys = []

  # You can also add additional SSH public Keys which are saved in the hetzner cloud by a label.
  # See https://docs.hetzner.cloud/#label-selector
  # ssh_hcloud_key_label = "role=admin"

  # If you want to use an ssh key that is already registered within hetzner cloud, you can pass its id.
  # If no id is passed, a new ssh key will be registered within hetzner cloud.
  # It is important that exactly this key is passed via `ssh_public_key` & `ssh_private_key` vars.
  # hcloud_ssh_key_id = ""

  # These can be customized, or left with the default values
  # * For Hetzner locations see https://docs.hetzner.com/general/others/data-centers-and-connection/
  network_region = var.network_region # change to `us-east` if location is ash

  control_plane_nodepools = [
    {
      name        = "control-plane-ash",
      server_type = var.control_plane_server_type,
      location    = var.location,
      labels      = [],
      taints      = [],
      count       = var.control_plane_server_count,
    }
  ]

  agent_nodepools = [
    {
      name        = "storage",
      server_type = var.node_type,
      location    = var.location,
      # Fully optional, just a demo.
      labels = [
        "node.kubernetes.io/server-usage=storage"
      ],
      taints               = [],
      count                = var.node_count
      longhorn_volume_size = 120
    }
  ]
  # Add custom control plane configuration options here.
  # E.g to enable monitoring for etcd, proxy etc:
  /* control_planes_custom_config = {
    etcd-expose-metrics = true,
    kube-controller-manager-arg = "bind-address=0.0.0.0",
    kube-proxy-arg ="metrics-bind-address=0.0.0.0",
    kube-scheduler-arg = "bind-address=0.0.0.0",
   } */

  enable_wireguard = false
  load_balancer_type     = "lb11"
  load_balancer_location = var.location
  base_domain = "techsecom-dev-k8s.techsecom.tech"
  autoscaler_nodepools = [
    {
      name        = "autoscaler"
      server_type = var.node_type # must be same or better than the control_plane server type (regarding disk size)!
      location    = var.location
      min_nodes   = 1
      max_nodes   = 5
    }
  ]
  etcd_s3_backup = {
    etcd-s3-endpoint   = var.cloudflare_etcd-s3-endpoint
    etcd-s3-access-key = var.cloudflare_etcd-s3-access-key
    etcd-s3-secret-key = var.cloudflare_etcd-s3-secret-key
    etcd-s3-bucket     = var.cloudflare_etcd-s3-bucket
  }

  # To use local storage on the nodes, you can enable Longhorn, default is "false".
  # See a full recap on how to configure agent nodepools for longhorn here https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/discussions/373#discussioncomment-3983159
  enable_longhorn = true

  # By default, longhorn is pulled from https://charts.longhorn.io.
  # If you need a version of longhorn which assures compatibility with rancher you can set this variable to https://charts.rancher.io. 
  # longhorn_repository = "https://charts.rancher.io"

  # The namespace for longhorn deployment, default is "longhorn-system".
  longhorn_namespace = "longhorn-system"

  # The file system type for Longhorn, if enabled (ext4 is the default, otherwise you can choose xfs).
  # longhorn_fstype = "xfs"

  longhorn_replica_count = 3

  disable_hetzner_csi = true

  ingress_controller    = "nginx"
  ingress_replica_count = 3

  # Use klipperLB (similar to MetalLB) instead of the default Hetzner one; it has the advantage of lowering the cost of the setup.
  # Automatically "true" in the case of single node cluster (as it does not make sense to use the Hetzner LB in that situation).
  # It can work with any ingress controller that you choose to deploy.
  # Please note that because the klipperLB points to all nodes, we automatically allow scheduling on the control plane when it is active.
  # enable_klipper_metal_lb = "true"

  # If you want to configure additional arguments for traefik, enter them here as a list and in the form of traefik CLI arguments; see https://doc.traefik.io/traefik/reference/static-configuration/cli/
  # They are the options that go into the additionalArguments section of the Traefik helm values file.
  # Example: traefik_additional_options = ["--log.level=DEBUG", "--tracing=true"]
  # traefik_additional_options = []

  # By default traefik is configured to redirect http traffic to https, you can set this to "false" to disable the redirection.
  # traefik_redirect_to_https = false

  # If you want to disable the metric server set this to "false". Default is "true".
  # enable_metrics_server = false
  placement_group_disable = true
  # If you want to allow non-control-plane workloads to run on the control-plane nodes, set this to "true". The default is "false".
  # True by default for single node clusters, and when enable_klipper_metal_lb is true. In those cases, the value below will be ignored.
  # allow_scheduling_on_control_plane = true

  automatically_upgrade_k3s = false
  automatically_upgrade_os  = true
  kured_options = {
    "reboot-days" : "su"
    "start-time" : "3am"
    "end-time" : "8am"
  }

  initial_k3s_channel = "v1.24"

  # The cluster name, by default "k3s"
  cluster_name = var.cluster_name

  # Whether to use the cluster name in the node name, in the form of {cluster_name}-{nodepool_name}, the default is "true".
  # use_cluster_name_in_node_name = false

  # Extra k3s registries. This is useful if you have private registries and you want to pull images without additional secrets.
  # Or if you want to proxy registries for various reasons like rate-limiting.
  # It will create the registries.yaml file, more info here https://docs.k3s.io/installation/private-registry.
  # Note that you do not need to get this right from the first time, you can update it when you want during the life of your cluster.
  # The default is blank.
  /* k3s_registries = <<-EOT
    mirrors:
      hub.my_registry.com:
        endpoint:
          - "hub.my_registry.com"
    configs:
      hub.my_registry.com:
        auth:
          username: username
          password: password
  EOT */

  # Additional environment variables for the host OS on which k3s runs. See for example https://docs.k3s.io/advanced#configuring-an-http-proxy . 
  # additional_k3s_environment = {
  #   "CONTAINERD_HTTP_PROXY" : "http://your.proxy:port",
  #   "CONTAINERD_HTTPS_PROXY" : "http://your.proxy:port",
  #   "NO_PROXY" : "127.0.0.0/8,10.0.0.0/8,",
  # }

  # Additional commands to execute on the host OS before the k3s install, for example fetching and installing certs.
  # preinstall_exec = [
  #   "curl https://somewhere.over.the.rainbow/ca.crt > /root/ca.crt",
  #   "trust anchor --store /root/ca.crt",
  # ]

  # If you want to allow all outbound traffic you can set this to "false". Default is "true".
  restrict_outbound_traffic = false

  # Adding extra firewall rules, like opening a port
  # More info on the format here https://registry.terraform.io/providers/hetznercloud/hcloud/latest/docs/resources/firewall
  /* extra_firewall_rules = [
  #   # For Postgres
     {
       direction       = "in"
       protocol        = "tcp"
       port            = "5432"
       source_ips      = ["0.0.0.0/0", "::/0"]
       destination_ips = [] # Won't be used for this rule
     },
  #   # To Allow ArgoCD access to resources via SSH
     {
       direction       = "out"
       protocol        = "tcp"
       port            = "22"
       source_ips      = [] # Won't be used for this rule
       destination_ips = ["0.0.0.0/0", "::/0"]
     }
   ] */

  # If you want to configure a different CNI for k3s, use this flag
  # possible values: flannel (Default), calico, and cilium
  # As for Cilium, we allow infinite configurations via helm values, please check the CNI section of the readme over at https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/#cni.
  # Also, see the cilium_values variable towards the end of this file, in the advanced section.
  cni_plugin = "calico"

  # If you want to disable the k3s default network policy controller, use this flag!
  # Both the Calico and Cilium cni_plugin values override this value to true automatically; the default is "false".
  # disable_network_policy = true

  # If you want to disable the automatic use of placement group "spread". See https://docs.hetzner.com/cloud/placement-groups/overview/
  # That may be useful if you need to deploy more than 500 nodes! The default is "false".
  # placement_group_disable = true

  # By default, we allow ICMP ping in to the nodes, to check for liveness for instance. If you do not want to allow that, just set this flag to true (false by default).
  # block_icmp_ping_in = true

  # You can enable cert-manager (installed by Helm behind the scenes) with the following flag, the default is "true".
  enable_cert_manager = true

  # We download OpenSUSE MicroOS from a mirror. In case it somehow does not work for you (you get a 403), you can try other mirrors.
  # You can find a working mirror at https://download.opensuse.org/tumbleweed/appliances/openSUSE-MicroOS.x86_64-OpenStack-Cloud.qcow2.mirrorlist,
  opensuse_microos_mirror_link = "https://provo-mirror.opensuse.org/tumbleweed/appliances/openSUSE-MicroOS.x86_64-16.0.0-OpenStack-Cloud-Snapshot20230206.qcow2"

  # IP Addresses to use for the DNS Servers, set to an empty list to use the ones provided by Hetzner, defaults to ["1.1.1.1", "1.0.0.1", "8.8.8.8"].
  # For rancher installs, best to leave it as default.
  # dns_servers = []

  # When this is enabled, rather than the first node, all external traffic will be routed via a control-plane loadbalancer, allowing for high availability.
  # The default is false.
  use_control_plane_lb = true

  # Let's say you are not using the control plane LB solution above, and still want to have one hostname point to all your control-plane nodes.
  # You could create multiple A records of to let's say cp.cluster.my.org pointing to all of your control-plane nodes ips.
  # In which case, you need to define that hostname in the k3s TLS-SANs config to allow connection through it. It can be hostnames or IP addresses.
  # additional_tls_sans = ["cp.cluster.my.org"]

  # Oftentimes, you need to communicate to the cluster from inside the cluster itself, in which case it is important to set this value, as it will configure the hostname
  # at the load balancer level, and will save you from many slowdowns when initiating communications from inside. Later on, you can point your DNS to the IP given
  # to the LB. And if you have other services pointing to it, you are also free to create CNAMEs to point to it, or whatever you see fit.
  # If set, it will apply to either ingress controllers, Traefik or Ingress-Nginx.
  # lb_hostname = ""

  # You can enable Rancher (installed by Helm behind the scenes) with the following flag, the default is "false".
  # When Rancher is enabled, it automatically installs cert-manager too, and it uses rancher's own self-signed certificates.
  # See for options https://rancher.com/docs/rancher/v2.0-v2.4/en/installation/resources/advanced/helm2/helm-rancher/#choose-your-ssl-configuration
  # The easiest thing is to leave everything as is (using the default rancher self-signed certificate) and put Cloudflare in front of it.
  # As for the number of replicas, by default it is set to the number of control plane nodes.
  # You can customize all of the above by adding a rancher_values variable; see the end of this file in the advanced section.
  # After the cluster is deployed, you can always use a HelmChartConfig definition to tweak the configuration.
  # IMPORTANT: Rancher's install is quite memory intensive, you will require at least 4GB of RAM, meaning cx21 server type (for your control plane).
  # ALSO, in order for Rancher to successfully deploy, you have to set the "rancher_hostname".
  enable_rancher = true

  # If using Rancher you can set the Rancher hostname; it must be a unique hostname even if you do not use it.
  # If not pointing the DNS, you can just port-forward locally via kubectl to get access to the dashboard.
  # If you already set the lb_hostname above and are using a Hetzner LB, you do not need to set this one, as it will be used by default.
  # But if you set this one explicitly, it will have preference over the lb_hostname in rancher settings.
  rancher_hostname = ""

  # When Rancher is deployed, by default it uses the "latest" channel. But this can be customized.
  # The allowed values are "stable" or "latest".
  rancher_install_channel = "latest"

  # Finally, you can specify a bootstrap-password for your rancher instance. Minimum 48 characters long!
  # If you leave empty, one will be generated for you.
  # (Can be used by another rancher2 provider to continue setup of rancher outside this module.)
  rancher_bootstrap_password = ""

  # Separate from the above Rancher config (only use one or the other). You can import this cluster directly into an
  # already active Rancher install by clicking "import cluster", choosing "generic", giving it a name and pasting
  # the cluster registration url below. However, you can also ignore that and apply the url via kubectl as instructed
  # by Rancher in the wizard, and that would register your cluster too.
  # More information about the registration can be found here https://rancher.com/docs/rancher/v2.6/en/cluster-provisioning/registered-clusters/
  # rancher_registration_manifest_url = "https://rancher.xyz.dev/v3/import/xxxxxxxxxxxxxxxxxxYYYYYYYYYYYYYYYYYYYzzzzzzzzzzzzzzzzzzzzz.yaml"

  # Extra values that will be passed to the `extra-manifests/kustomization.yaml.tpl` if its present.
  extra_kustomize_parameters = {}

  # It is best practice to turn this off, but for backwards compatibility it is set to "true" by default.
  # See https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/issues/349
  # When "false". The kubeconfig file can instead be created by executing: "terraform output --raw kubeconfig > cluster_kubeconfig.yaml"
  # Always be careful to not commit this file!
  # create_kubeconfig = false

  # Don't create the kustomize backup. This can be helpful for automation.
  # create_kustomization = false

  ### ADVANCED - Custom helm values for packages above (search for _values if you want to locate where those are mentioned earlier in this file)
  # ⚠️ Inside the _values variable below are examples, up to you to find out the best helm values possible, we do not provide support for customized helm values.
  # Please understand that the indentation is very important, inside the EOTs, as those are proper yaml helm values.
  # We advise you to use the default values, and only change them if you know what you are doing!

  # Cilium, all Cilium helm values can be found at https://github.com/cilium/cilium/blob/master/install/kubernetes/cilium/values.yaml
  # The following is an example, please note that the current indentation inside the EOT is important.
  /*   cilium_values = <<EOT
ipam:
  mode: kubernetes
devices: "eth1"
k8s:
  requireIPv4PodCIDR: true
kubeProxyReplacement: strict
l7Proxy: false
encryption:
  enabled: true
  type: wireguard
  EOT */

  # Cert manager, all cert-manager helm values can be found at https://github.com/cert-manager/cert-manager/blob/master/deploy/charts/cert-manager/values.yaml
  # The following is an example, please note that the current indentation inside the EOT is important.
  cert_manager_values = <<EOT
installCRDs: true
namespace: cert-manager
serviceAccount:
  create: true
  automountServiceAccountToken: true
replicaCount: 3
webhook:
  replicaCount: 3
cainjector:
  replicaCount: 3
EOT

  # Longhorn, all Longhorn helm values can be found at https://github.com/longhorn/longhorn/blob/master/chart/values.yaml
  # The following is an example, please note that the current indentation inside the EOT is important.
  longhorn_values = <<EOT
defaultSettings:
  defaultDataPath: /var/longhorn
persistence:
  defaultFsType: ext4
  defaultClassReplicaCount: 3
  defaultClass: true
ingress:
  enabled: true
  host: ""
  tls: true
  tlsSecret: longhorn-ingress-tls
  path: /
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
EOT

  # Traefik, all Traefik helm values can be found at https://github.com/traefik/traefik-helm-chart/blob/master/traefik/values.yaml
  # The following is an example, please note that the current indentation inside the EOT is important.
   traefik_values = <<EOT
deployment:
  replicas: 1
globalArguments: []
service:
  enabled: true
  type: LoadBalancer
  annotations:
    "load-balancer.hetzner.cloud/name": "k3s"
    "load-balancer.hetzner.cloud/use-private-ip": "true"
    "load-balancer.hetzner.cloud/disable-private-ingress": "true"
    "load-balancer.hetzner.cloud/location": "nbg1"
    "load-balancer.hetzner.cloud/type": "lb11"
    "load-balancer.hetzner.cloud/uses-proxyprotocol": "true"

ports:
  web:
    redirectTo: websecure

    proxyProtocol:
      trustedIPs:
        - 127.0.0.1/32
        - 10.0.0.0/8
    forwardedHeaders:
      trustedIPs:
        - 127.0.0.1/32
        - 10.0.0.0/8
  websecure:
    proxyProtocol:
      trustedIPs:
        - 127.0.0.1/32
        - 10.0.0.0/8
    forwardedHeaders:
      trustedIPs:
        - 127.0.0.1/32
        - 10.0.0.0/8
  EOT 

  # Nginx, all Nginx helm values can be found at https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml
  # You can also have a look at https://kubernetes.github.io/ingress-nginx/, to understand how it works, and all the options at your disposal.
  # The following is an example, please note that the current indentation inside the EOT is important.
  nginx_values = <<EOT
controller:
  watchIngressWithoutClass: "true"
  kind: "DaemonSet"
  config:
    "use-forwarded-headers": "true"
    "compute-full-forwarded-for": "true"
    "use-proxy-protocol": "true"
  service:
    annotations:
      "load-balancer.hetzner.cloud/name": "k3s"
      "load-balancer.hetzner.cloud/use-private-ip": "true"
      "load-balancer.hetzner.cloud/disable-private-ingress": "true"
      "load-balancer.hetzner.cloud/location": "ash"
      "load-balancer.hetzner.cloud/type": "lb11"
      "load-balancer.hetzner.cloud/uses-proxyprotocol": "true"
  EOT

  # Rancher, all Rancher helm values can be found at https://rancher.com/docs/rancher/v2.5/en/installation/install-rancher-on-k8s/chart-options/
  # The following is an example, please note that the current indentation inside the EOT is important.
  rancher_values = <<EOT
ingress:
  enabled: true
  tls:
    source: rancher
    secretName: tls-rancher-ingress
hostname: ""
letsEncrypt:
  # email: none@example.com
  environment: production
  ingress:
    class: nginx
privateCA: false
replicas: 3
tls: ingress
bootstrapPassword: ""
EOT 

}
aleksasiriski commented 1 year ago

Uhm, this probably isn't related to your error but a few tips:

1) If you're using Rancher, use their chart for Longhorn (it's commented in your config).
2) Don't uncomment values of things that you don't change, since that breaks the use of the variables you have set up (for example, the ingress_replica_count var won't be used if you have nginx_values uncommented; same thing with Longhorn).

@mysticaltech I don't know why nginx is a DaemonSet in kube.tf.example since it's a Deployment in locals.tf.

I'll make it so that if Rancher is enabled, it auto-selects their Longhorn chart.

donydonald1 commented 1 year ago

Uhm, this probably isn't related to your error but a few tips:

  1. If you're using Rancher, use their chart for Longhorn (it's commented in your config).
  2. Don't uncomment values of things that you don't change, since that breaks the use of the variables you have set up (for example, the ingress_replica_count var won't be used if you have nginx_values uncommented; same thing with Longhorn).

@mysticaltech I don't know why nginx is a DaemonSet in kube.tf.example since it's a Deployment in locals.tf.

I'll make it so that if Rancher is enabled, it auto-selects their Longhorn chart.

@aleksasiriski

This is my successful output with the application running (using Flannel):

+ kubectl get pods --all-namespaces
NAMESPACE                   NAME                                                  READY   STATUS              RESTARTS        AGE
cattle-fleet-local-system   fleet-agent-86bcc7466d-q8trg                          1/1     Running             0               3m39s
cattle-fleet-system         fleet-controller-7bbd96b579-c7jpp                     1/1     Running             0               3m54s
cattle-fleet-system         gitjob-5bd78d7cd9-5q2sm                               1/1     Running             0               3m54s
cattle-system               helm-operation-mzcgg                                  0/2     Completed           0               4m10s
cattle-system               helm-operation-nlm6r                                  0/2     Completed           0               3m39s
cattle-system               helm-operation-np4t6                                  0/2     Completed           0               3m45s
cattle-system               helm-operation-q9lpj                                  0/2     Completed           0               4m1s
cattle-system               rancher-7c5dbf46fc-cp2p8                              1/1     Running             0               4m37s
cattle-system               rancher-7c5dbf46fc-kgpjq                              1/1     Running             0               4m37s
cattle-system               rancher-7c5dbf46fc-mtb7r                              1/1     Running             0               4m37s
cattle-system               rancher-webhook-577b778f8f-fcxkb                      1/1     Running             0               3m37s
cert-manager                cert-manager-85945b75d4-7vl88                         1/1     Running             0               5m5s
cert-manager                cert-manager-85945b75d4-jq6tr                         1/1     Running             0               5m6s
cert-manager                cert-manager-85945b75d4-pj4pr                         1/1     Running             0               5m5s
cert-manager                cert-manager-cainjector-7f694c4c58-t2256              1/1     Running             0               5m6s
cert-manager                cert-manager-cainjector-7f694c4c58-vfgth              1/1     Running             0               5m6s
cert-manager                cert-manager-cainjector-7f694c4c58-vkrbh              1/1     Running             0               5m5s
cert-manager                cert-manager-webhook-7cd8c769bb-2q2fp                 1/1     Running             0               5m6s
cert-manager                cert-manager-webhook-7cd8c769bb-9h5mz                 1/1     Running             0               5m5s
cert-manager                cert-manager-webhook-7cd8c769bb-jjbfj                 1/1     Running             0               5m5s
elk-stack                   apm-server-apm-server-679fff84dc-gxp4n                0/1     ContainerCreating   0               5s
elk-stack                   apm-server-apm-server-679fff84dc-mvktr                0/1     ContainerCreating   0               5s
elk-stack                   apm-server-apm-server-679fff84dc-x9cqq                0/1     ContainerCreating   0               5s
elk-stack                   elasticsearch-master-0                                0/1     Pending             0               4s
elk-stack                   elasticsearch-master-1                                0/1     Pending             0               4s
elk-stack                   elasticsearch-master-2                                0/1     Pending             0               4s
elk-stack                   filebeat-filebeat-k6p4r                               0/1     ContainerCreating   0               5s
elk-stack                   filebeat-filebeat-tgnsb                               0/1     ContainerCreating   0               5s
elk-stack                   filebeat-filebeat-whnkf                               0/1     ContainerCreating   0               5s
elk-stack                   fluentd-0                                             0/1     Pending             0               3s
elk-stack                   fluentd-9jgjz                                         0/1     ContainerCreating   0               3s
elk-stack                   fluentd-rgf6b                                         0/1     ContainerCreating   0               3s
elk-stack                   fluentd-xgrpg                                         0/1     ContainerCreating   0               3s
elk-stack                   helm-install-apm-server-k2h8x                         0/1     Completed           0               8s
elk-stack                   helm-install-elasticsearch-82h4b                      0/1     Completed           0               8s
elk-stack                   helm-install-filebeat-24894                           0/1     Completed           0               8s
elk-stack                   helm-install-fluentd-rgwb2                            1/1     Running             0               8s
elk-stack                   helm-install-kibana-pm5bs                             1/1     Running             0               7s
elk-stack                   helm-install-logstash-dw5k7                           1/1     Running             0               5s
elk-stack                   helm-install-metricbeat-49qkp                         1/1     Running             0               5s
elk-stack                   kibana-kibana-95dc995b9-dx7pm                         0/1     Pending             0               3s
elk-stack                   kibana-kibana-95dc995b9-pwtrp                         0/1     ContainerCreating   0               3s
elk-stack                   logstash-logstash-0                                   0/1     ContainerCreating   0               4s
elk-stack                   logstash-logstash-1                                   0/1     ContainerCreating   0               4s
elk-stack                   logstash-logstash-2                                   0/1     ContainerCreating   0               4s
elk-stack                   metricbeat-kube-state-metrics-665d8f4966-2rzgz        0/1     Pending             0               2s
elk-stack                   metricbeat-metricbeat-9qx75                           0/1     ContainerCreating   0               2s
elk-stack                   metricbeat-metricbeat-ck8d9                           0/1     Pending             0               2s
elk-stack                   metricbeat-metricbeat-lsp5f                           0/1     Pending             0               2s
elk-stack                   metricbeat-metricbeat-metrics-c7d47f999-47ct9         0/1     Pending             0               2s
elk-stack                   metricbeat-metricbeat-metrics-c7d47f999-r4wfr         0/1     Pending             0               2s
elk-stack                   metricbeat-metricbeat-metrics-c7d47f999-xcfsk         0/1     Pending             0               2s
external-dns                external-dns-5d97b5857f-79cn9                         0/1     Pending             0               1s
external-dns                external-dns-5d97b5857f-dkzks                         0/1     Pending             0               1s
external-dns                external-dns-5d97b5857f-q8mlt                         0/1     Pending             0               1s
external-dns                helm-install-external-dns-mmptt                       1/1     Running             0               5s
kube-system                 cluster-autoscaler-7f7ffd47f8-5hshk                   1/1     Running             0               4m53s
kube-system                 coredns-7b5bbc6644-9ldp9                              1/1     Running             0               5m46s
kube-system                 hcloud-cloud-controller-manager-5dc7ff59d6-xhvmh      1/1     Running             0               5m41s
kube-system                 helm-install-cert-manager-zpnwb                       0/1     Completed           0               5m41s
kube-system                 helm-install-longhorn-twpm8                           0/1     Completed           0               5m41s
kube-system                 helm-install-nginx-mcjjz                              0/1     Completed           1               5m41s
kube-system                 helm-install-rancher-bhpj9                            0/1     Completed           2               5m41s
kube-system                 kured-7mnnl                                           1/1     Running             0               5m10s
kube-system                 kured-kmfqv                                           1/1     Running             0               5m12s
kube-system                 kured-ks4qd                                           1/1     Running             0               5m11s
kube-system                 kured-t7kkh                                           1/1     Running             0               4m51s
kube-system                 kured-x5p4g                                           1/1     Running             0               5m32s
kube-system                 kured-zn4w7                                           1/1     Running             0               4m35s
kube-system                 metrics-server-667586758d-64xp2                       1/1     Running             0               5m46s
longhorn-system             csi-attacher-dcb85d774-26g8z                          1/1     Running             0               4m30s
longhorn-system             csi-attacher-dcb85d774-42rzj                          1/1     Running             0               4m30s
longhorn-system             csi-attacher-dcb85d774-87v2b                          1/1     Running             0               4m30s
longhorn-system             csi-provisioner-5d8dd96b57-hwpns                      1/1     Running             0               4m30s
longhorn-system             csi-provisioner-5d8dd96b57-kdkrg                      1/1     Running             0               4m30s
longhorn-system             csi-provisioner-5d8dd96b57-wsd26                      1/1     Running             0               4m30s
longhorn-system             csi-resizer-6bf6f6f584-bjfv9                          1/1     Running             0               4m29s
longhorn-system             csi-resizer-6bf6f6f584-gxpjh                          1/1     Running             0               4m29s
longhorn-system             csi-resizer-6bf6f6f584-qshrg                          1/1     Running             0               4m29s
longhorn-system             csi-snapshotter-7cb6bf8447-55z5h                      1/1     Running             0               4m29s
longhorn-system             csi-snapshotter-7cb6bf8447-tf4fc                      1/1     Running             0               4m29s
longhorn-system             csi-snapshotter-7cb6bf8447-wkts9                      1/1     Running             0               4m29s
longhorn-system             engine-image-ei-fc06c6fb-lvpqm                        1/1     Running             0               4m41s
longhorn-system             engine-image-ei-fc06c6fb-t7mj7                        1/1     Running             0               4m41s
longhorn-system             engine-image-ei-fc06c6fb-zlkpj                        1/1     Running             0               4m41s
longhorn-system             instance-manager-e-a1d70e505dca1a93e089c572df709b69   1/1     Running             0               4m41s
longhorn-system             instance-manager-e-c3da277e05683e1dc356fba9f99ab9a6   1/1     Running             0               4m41s
longhorn-system             instance-manager-e-eed11eafe51520083404eb0052ac1f42   1/1     Running             0               4m41s
longhorn-system             instance-manager-r-a1d70e505dca1a93e089c572df709b69   1/1     Running             0               4m41s
longhorn-system             instance-manager-r-c3da277e05683e1dc356fba9f99ab9a6   1/1     Running             0               4m41s
longhorn-system             instance-manager-r-eed11eafe51520083404eb0052ac1f42   1/1     Running             0               4m40s
longhorn-system             longhorn-admission-webhook-5bcd999944-2pqcg           1/1     Running             0               5m6s
longhorn-system             longhorn-admission-webhook-5bcd999944-pg4rl           1/1     Running             0               5m6s
longhorn-system             longhorn-conversion-webhook-84fc65c775-5t49c          1/1     Running             0               5m6s
longhorn-system             longhorn-conversion-webhook-84fc65c775-cxhnm          1/1     Running             0               5m6s
longhorn-system             longhorn-csi-plugin-m2cbg                             3/3     Running             0               4m29s
longhorn-system             longhorn-csi-plugin-vqjhf                             3/3     Running             0               4m29s
longhorn-system             longhorn-csi-plugin-w8vlr                             3/3     Running             0               4m29s
longhorn-system             longhorn-driver-deployer-58bc998c6-dzqnh              1/1     Running             0               5m6s
longhorn-system             longhorn-manager-hplt9                                1/1     Running             0               5m6s
longhorn-system             longhorn-manager-rswps                                1/1     Running             0               5m6s
longhorn-system             longhorn-manager-vjzfm                                1/1     Running             1 (4m41s ago)   5m6s
longhorn-system             longhorn-recovery-backend-7cfbbd9864-fsbk4            1/1     Running             0               5m6s
longhorn-system             longhorn-recovery-backend-7cfbbd9864-v9vz2            1/1     Running             0               5m6s
longhorn-system             longhorn-ui-67867696bd-lbd2g                          1/1     Running             0               5m6s
longhorn-system             longhorn-ui-67867696bd-ss7p8                          1/1     Running             0               5m6s
nginx                       nginx-ingress-nginx-controller-b65jl                  1/1     Running             0               29s
nginx                       nginx-ingress-nginx-controller-dxzrp                  1/1     Running             0               29s
nginx                       nginx-ingress-nginx-controller-zp8wt                  0/1     Running             0               29s
system-upgrade              system-upgrade-controller-5d4ff9f49-lkgwb             1/1     Running             0               5 /0.4s

The above result is from the same kube.tf sent above, which appears to be what I wanted. Are you saying that setting ingress_replica_count or nginx_values would do the same thing?

aleksasiriski commented 1 year ago

Using `something_values` overrides the defaults as well as all of the corresponding `something_option` variables.

So if you set nginx_values, for example, then ingress_replica_count is ignored and only the values are used.
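
In other words, once nginx_values is set, the replica count has to live inside those Helm values. A rough sketch, assuming the upstream ingress-nginx chart key controller.replicaCount:

# Hypothetical nginx_values content (the YAML inside the heredoc), for illustration only
controller:
  replicaCount: 3   # takes the place of the module's ingress_replica_count variable
  config:
    "use-proxy-protocol": "true"

Note that with kind: "DaemonSet" (as in the config above), a replica count would not apply anyway, since a DaemonSet runs one pod per node.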

skipworkgh commented 1 year ago

The configured calico.yaml is MIA.

I've manually replaced the locals.tf values with https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml and it's working again.
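
Concretely, the change amounts to pointing the kustomization resource at a pinned, versioned GitHub raw URL instead of the dead docs URL. Roughly (a sketch, not the exact locals.tf diff):

# Sketch of the swapped resource reference, for illustration only
resources:
  # old, no longer serving the raw manifest:
  # - https://projectcalico.docs.tigera.io/manifests/calico.yaml
  # new, pinned to a released Calico version:
  - https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml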

aleksasiriski commented 1 year ago

The configured calico.yaml is MIA.

I've manually replaced the locals.tf values with https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml and it's working again.

Nice catch. I guess we will have to switch to the poorly documented Helm chart now @mysticaltech

mysticaltech commented 1 year ago

Good catch @skipworkgh.

@aleksasiriski About the daemonset in the example, I do not know how it got there, but it's an example (and probably functional) so people can customize it.

Now as you said, best we move to the helm chart ASAP.

mysticaltech commented 1 year ago

@aleksasiriski Helm is cumbersome for Calico; for now, let's just stick with the manifest. I am working on a fix.