aws-samples / eks-blueprints-add-ons


ArgoCD App of Apps pattern not applying annotations #48

Closed ptravishill closed 2 years ago

ptravishill commented 2 years ago

I am using eks-blueprints-add-ons along with the ArgoCD App of Apps pattern via eks-blueprints and Terraform. Below are my configs. I cannot get the service account created for the cluster-autoscaler add-on to populate its annotations, so IRSA does not work correctly.

ArgoCD terraform config:

module "eks_blueprints_kubernetes_addons" {
  source                            = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.0.4"
  eks_cluster_id                    = module.eks.cluster_id
  enable_argocd                     = true
  argocd_manage_add_ons             = true
  argocd_admin_password_secret_name = data.aws_secretsmanager_secret.argocd_password.arn

  argocd_helm_config = {
    name             = "argo-cd"
    chart            = "argo-cd"
    repository       = "https://argoproj.github.io/argo-helm"
    version          = "4.6.0"
    namespace        = "argocd"
    timeout          = "1200"
    create_namespace = true
    values           = [templatefile("${path.module}/argocd-values.yaml", {})]
  }

  argocd_applications = {
    workloads = {
      path                = "envs/dev"
      repo_url            = "<my private repo>"
      ssh_key_secret_name = data.aws_secretsmanager_secret.argocd_sshkey.arn
      values              = {}
    }
    addons = {
      path                = "chart"
      repo_url            = "<my private repo>"
      add_on_application  = true
      ssh_key_secret_name = data.aws_secretsmanager_secret.argocd_sshkey.arn
      values              = {}
    }
  }
  depends_on = [module.eks.eks_managed_node_groups]
}

ArgoCD Helm values.yaml:

redis-ha:
  enabled: true

controller:
  enableStatefulSet: true

server:
  autoscaling:
    enabled: true
    minReplicas: 2

repoServer:
  autoscaling:
    enabled: true
    minReplicas: 2

In /add-ons/cluster-autoscaler/values.yaml:

rbac:
  create: true
  serviceAccount:
    create: true
    annotations:
      eks.amazonaws.com/role-arn: "arn:aws:iam::<my-account>:role/service-role/<my-role>"

In /add-ons/cluster-autoscaler/Chart.yaml:

apiVersion: v2
name: cluster-autoscaler
description: A Helm chart for installing cluster-autoscaler
type: application

# The chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.1.0

# Version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: "1.0"

dependencies:
  - name: cluster-autoscaler
    version: 9.15.0
    repository: https://kubernetes.github.io/autoscaler
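
One detail worth noting with this layout: because cluster-autoscaler is pulled in as a dependency of the wrapper chart, Helm only forwards values to the subchart when they are nested under a key matching the dependency name declared in Chart.yaml. A minimal sketch of what that nesting would look like for the values.yaml above (role ARN placeholder unchanged):

```yaml
# Subchart values must sit under a key matching the dependency
# name from Chart.yaml ("cluster-autoscaler"); top-level keys
# are not passed through to the dependency.
cluster-autoscaler:
  rbac:
    create: true
    serviceAccount:
      create: true
      annotations:
        eks.amazonaws.com/role-arn: "arn:aws:iam::<my-account>:role/service-role/<my-role>"
```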

In /chart/values.yaml:

# Cluster Autoscaler Values
clusterAutoscaler:
  enable: true
  serviceAccountName: clusterautoscaler

ArgoCD sees the values (see attached screenshot).

However, the serviceaccount created in the cluster does not have the annotation:

kubectl describe serviceaccount clusterautoscaler -n kube-system

Name:                clusterautoscaler
Namespace:           kube-system
Labels:              app.kubernetes.io/instance=cluster-autoscaler
                     app.kubernetes.io/managed-by=Helm
                     app.kubernetes.io/name=aws-cluster-autoscaler
                     argocd.argoproj.io/instance=cluster-autoscaler
                     helm.sh/chart=cluster-autoscaler-9.15.0
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   clusterautoscaler-token-lrh5q
Tokens:              clusterautoscaler-token-lrh5q
Events:              <none>

Am I doing something wrong here? I appreciate the help.

EDIT: To add to this, I ended up modifying the values.yaml as follows:

cluster-autoscaler:
  extraArgs:
    v: 4
    stderrthreshold: info
    cloud-provider: aws
    logtostderr: true
    skip-nodes-with-local-storage: false
    expander: least-waste
    balance-similar-node-groups: true
    skip-nodes-with-system-pods: false

  deployment:
    annotations:
      cluster-autoscaler.kubernetes.io/safe-to-evict: "false"

  rbac:
    create: true
    serviceAccount:
      annotations:
        eks.amazonaws.com/role-arn: "arn:aws:iam::<my-account>:role/service-role/<my-role>"
      create: true
      name: clusterautoscaler

  resources:
    limits:
      cpu: 200m
      memory: 512Mi
    requests:
      cpu: 200m
      memory: 512Mi

It still doesn't work via ArgoCD. However, if I run helm template against the values.yaml, the annotation is rendered correctly:

# Source: cluster-autoscaler/charts/cluster-autoscaler/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/instance: "test"
    app.kubernetes.io/name: "aws-cluster-autoscaler"
    app.kubernetes.io/managed-by: "Helm"
    helm.sh/chart: "cluster-autoscaler-9.15.0"
  name: clusterautoscaler
  annotations: 
    eks.amazonaws.com/role-arn: arn:aws:iam::<my-account>:role/service-role/<my-role>
automountServiceAccountToken: true
ptravishill commented 2 years ago

I was able to resolve this. Something was wrong with my values.yaml file and version 9.15.0. After taking the values from the latest chart, updating the Chart.yaml, and refreshing, it started working.
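
For reference, the resulting Chart.yaml change presumably looks something like the sketch below: the dependency bumped to a newer chart release, with values.yaml regenerated from that chart's defaults (the version number shown is illustrative, not necessarily the exact release used):

```yaml
# Chart.yaml — dependency bumped from 9.15.0 to a newer
# chart release (version here is illustrative)
dependencies:
  - name: cluster-autoscaler
    version: 9.19.1
    repository: https://kubernetes.github.io/autoscaler
```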