terraform-aws-modules / terraform-aws-iam

Terraform module to create AWS IAM resources 🇺🇦
https://registry.terraform.io/modules/terraform-aws-modules/iam/aws
Apache License 2.0

VPC CNI Policy is missing CloudWatch Logs permissions if you enable Network Policy logs #482

Closed jmgalvez closed 2 months ago

jmgalvez commented 4 months ago

Description

Submodule: iam-role-for-service-accounts-eks

The VPC CNI policy in https://github.com/terraform-aws-modules/terraform-aws-iam/blob/v5.39.0/modules/iam-role-for-service-accounts-eks/policies.tf is missing the CloudWatch Logs permissions required when AWS VPC CNI Network Policy logs are enabled.

When network policy is enabled on the VPC CNI add-on, a second container, a node agent, is added to the aws-node pod. This node agent can send the network policy logs to CloudWatch Logs.

With the current configuration, aws-node goes into a CrashLoopBackOff state because that container does not have the CloudWatch Logs permissions it needs.

Versions

Reproduction Code

I am creating the EKS cluster with the AWS EKS Terraform module, version 20.8.5. When setting up the cluster add-ons, I enable Network Policy and Network Policy logs, as shown below:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.8.5"

  cluster_addons = {
    coredns = {
      addon_version = var.cluster_addons_versions.coredns
    }
    kube-proxy = { # Upgrade of this component usually makes sense after the control plane upgrade
      addon_version = var.cluster_addons_versions.kube_proxy
    }
    vpc-cni = {
      before_compute           = true
      addon_version            = var.cluster_addons_versions.vpc_cni
      service_account_role_arn = module.vpc_cni_irsa.iam_role_arn

      configuration_values = jsonencode({
        "enableNetworkPolicy" : "true",   <==== here
        "nodeAgent" : {                  <===== here 
          "enablePolicyEventLogs" : "true",
          "enableCloudWatchLogs" : "true"
        }
      })
    }
  }
}

The IRSA role is created with the iam-role-for-service-accounts-eks submodule from this repo:

module "vpc_cni_irsa" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "5.39.0"

  role_name             = "${local.resource_prefix}-vpc-cni-irsa-role"
  attach_vpc_cni_policy = true # <==== attaching the VPC CNI policy
  vpc_cni_enable_ipv4   = true

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = var.irsa_service_accounts.namespace_service_account_vpc_cni
    }
  }
}

Expected behavior

It would be nice to add those permissions to the policy file.

Based on https://docs.aws.amazon.com/eks/latest/userguide/cni-network-policy.html#cni-network-policy-setup, the following permissions should be added:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
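
Until the module ships these permissions, one possible workaround is to attach them to the IRSA role as a supplementary policy. Below is a minimal sketch, assuming the vpc_cni_irsa module shown above and its iam_role_name output; the resource names and the reuse of local.resource_prefix are illustrative.

resource "aws_iam_policy" "vpc_cni_cloudwatch_logs" {
  # Supplementary policy carrying the CloudWatch Logs permissions the node agent needs
  name = "${local.resource_prefix}-vpc-cni-cloudwatch-logs"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "logs:DescribeLogGroups",
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents",
        ]
        Resource = "*"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "vpc_cni_cloudwatch_logs" {
  # Attach the supplementary policy to the IRSA role created by the submodule
  role       = module.vpc_cni_irsa.iam_role_name
  policy_arn = aws_iam_policy.vpc_cni_cloudwatch_logs.arn
}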

Actual behavior

aws-node is in a CrashLoopBackOff state because the VPC CNI policy does not grant the CloudWatch Logs permissions the node agent needs.

github-actions[bot] commented 3 months ago

This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

github-actions[bot] commented 2 months ago

This issue was automatically closed because it had been stale for 10 days.

github-actions[bot] commented 1 month ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

antonbabenko commented 1 month ago

This issue has been resolved in version 5.42.0 :tada:
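
For anyone who hit this, picking up the fix should just be a matter of bumping the submodule version in the configuration above; a minimal sketch (check the v5.42.0 release notes in case any new inputs are needed):

module "vpc_cni_irsa" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "5.42.0" # release that resolves this issue, per the comment above

  # ... existing inputs unchanged ...
}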