terraform-aws-modules / terraform-aws-efs

Terraform module to create AWS EFS resources πŸ‡ΊπŸ‡¦
https://registry.terraform.io/modules/terraform-aws-modules/efs/aws
Apache License 2.0

EBS CSI driver: unauthorized (when deploying in CI/CD) #14

Closed mark-ship-it closed 1 year ago

mark-ship-it commented 1 year ago

Description

Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.

If your request is for a new feature, please use the Feature request template.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if state is stored remotely, which is hopefully the best practice you are following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

└── module.eks
    β”œβ”€β”€ provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
    β”œβ”€β”€ provider[registry.terraform.io/hashicorp/tls] ~> 3.0
    β”œβ”€β”€ provider[registry.terraform.io/hashicorp/kubernetes] >= 2.10.0
    β”œβ”€β”€ module.self_managed_node_group
    β”‚   β”œβ”€β”€ provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
    β”‚   └── module.user_data
    β”‚       └── provider[registry.terraform.io/hashicorp/cloudinit] >= 2.0.0
    β”œβ”€β”€ module.eks_managed_node_group
    β”‚   β”œβ”€β”€ provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
    β”‚   └── module.user_data
    β”‚       └── provider[registry.terraform.io/hashicorp/cloudinit] >= 2.0.0
    β”œβ”€β”€ module.fargate_profile
    β”‚   └── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
    └── module.kms
        └── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0

Reproduction Code [Required]

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.16.0"

  cluster_name    = var.eks_cluster_name
  cluster_version = var.eks_cluster_version

  kms_key_administrators = var.kms_key_administrators

  vpc_id     = var.vpc_id
  subnet_ids = local.eks_subnet_ids

  cluster_endpoint_private_access = var.eks_endpoint_private_access
  cluster_endpoint_public_access  = var.eks_endpoint_public_access

  # Temp workaround for bug: double "owned" tag
  # https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1810
  node_security_group_tags = {
    "kubernetes.io/cluster/${var.eks_cluster_name}" = null
  }

  eks_managed_node_group_defaults = {
    ami_type                              = "AL2_x86_64"
    key_name                              = var.aws_keypair_name
    attach_cluster_primary_security_group = true

    # Disabling and using externally provided security groups
    create_security_group  = false
    vpc_security_group_ids = var.eks_vpc_security_groups
    iam_role_name          = "${var.eks_cluster_name}_ng"

    block_device_mappings = {
      xvda = {
        device_name = "/dev/xvda"
        ebs = {
          volume_size = var.eks_node_disk_size
          volume_type = "gp3"
        }
      }
    }
  }

  eks_managed_node_groups = local.node_groups

  tags = merge({ Name = var.eks_cluster_name }, var.tags)
}

Steps to reproduce the behavior:

There is no local cache involved: this is a problem in CI/CD (GitHub Actions).

Deploying from my laptop works fine, but when I introduce another principal to deploy, I run into a problem. I can reproduce it by assuming the CI/CD role and running an apply.

Expected behavior

After the plan summary, I expect terraform to exit with code 0 and terminate.

Actual behavior

terraform plan returns an error:

terraform plan
... all output is as expected ...
Plan: 25 to add, 74 to change, 19 to destroy.
Releasing state lock. This may take a few moments...
Error: Process completed with exit code 1.

Error: Unauthorized

with module.dockyard.kubernetes_storage_class.gp3,

on .terraform/modules/dockyard/terraform/eks-addon.tf line 37, in resource "kubernetes_storage_class" "gp3":

37: resource "kubernetes_storage_class" "gp3" {
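For context, the resource the error points at is a gp3 StorageClass managed by the Kubernetes provider. The dockyard module's source isn't shown here, so this is only a hypothetical sketch of what such a resource typically looks like:

```hcl
# Hypothetical reconstruction of the failing resource; the actual
# definition lives in .terraform/modules/dockyard/terraform/eks-addon.tf
resource "kubernetes_storage_class" "gp3" {
  metadata {
    name = "gp3"
  }

  storage_provisioner = "ebs.csi.aws.com" # EBS CSI driver
  reclaim_policy      = "Delete"
  volume_binding_mode = "WaitForFirstConsumer"

  parameters = {
    type = "gp3"
  }
}
```

Any resource from the kubernetes provider requires the calling principal to be authorized against the cluster API, which is why the error surfaces here rather than in the AWS resources.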

This seems to be a permissions issue caused by having multiple principals deploy the EBS CSI driver. I filed a ticket with Amazon Support, and the IAM roles appear to be set up properly.
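Note that in EKS, only the principal that created the cluster is granted Kubernetes API access by default; any additional principal (such as a CI/CD role) gets `Unauthorized` from the kubernetes provider until it is mapped into the cluster's aws-auth ConfigMap, even if its IAM permissions are correct. A minimal sketch of that mapping using the terraform-aws-eks v19 inputs, assuming a hypothetical CI/CD role ARN:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.16.0"

  # ... existing configuration ...

  manage_aws_auth_configmap = true

  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::111122223333:role/ci-cd-deploy" # hypothetical CI/CD role
      username = "ci-cd-deploy"
      groups   = ["system:masters"]
    }
  ]
}
```

This is a sketch, not a confirmed fix for this report; the account ID, role name, and group binding are placeholders to be replaced with the real CI/CD principal.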


bryantbiggs commented 1 year ago

this project is not related to the EBS CSI driver, nor EKS

mark-ship-it commented 1 year ago

whoops. my mistake. thanks

github-actions[bot] commented 12 months ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.