Closed by mark-ship-it 1 year ago
this project is not related to the EBS CSI driver, nor EKS
whoops. my mistake. thanks
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the `examples/*` directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running `terraform init && terraform apply` without any further changes.

If your request is for a new feature, please use the `Feature request` template.

⚠️ Note

Before you submit an issue, please perform the following first:

1. Remove the local `.terraform` directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): `rm -rf .terraform/`
2. Re-initialize the project root: `terraform init`
Versions
- Module version [Required]: 18.26.6 and 19.16.0
- Terraform version: 1.4.5
- Provider version(s) (the relevant subset):
```
├── module.eks
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
│   ├── provider[registry.terraform.io/hashicorp/tls] ~> 3.0
│   ├── provider[registry.terraform.io/hashicorp/kubernetes] >= 2.10.0
│   ├── module.self_managed_node_group
│   │   ├── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
│   │   └── module.user_data
│   │       └── provider[registry.terraform.io/hashicorp/cloudinit] >= 2.0.0
│   ├── module.eks_managed_node_group
│   │   ├── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
│   │   └── module.user_data
│   │       └── provider[registry.terraform.io/hashicorp/cloudinit] >= 2.0.0
│   ├── module.fargate_profile
│   │   └── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
│   └── module.kms
│       └── provider[registry.terraform.io/hashicorp/aws] >= 3.72.0
```
Reproduction Code [Required]
```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.16.0"

  cluster_name    = var.eks_cluster_name
  cluster_version = var.eks_cluster_version

  kms_key_administrators = var.kms_key_administrators

  vpc_id     = var.vpc_id
  subnet_ids = local.eks_subnet_ids

  cluster_endpoint_private_access = var.eks_endpoint_private_access
  cluster_endpoint_public_access  = var.eks_endpoint_public_access

  # Temp workaround for bug: double owned tag
  # https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1810
  node_security_group_tags = {
    "kubernetes.io/cluster/${var.eks_cluster_name}" = null
  }

  eks_managed_node_group_defaults = {
    ami_type                              = "AL2_x86_64"
    key_name                              = var.aws_keypair_name
    attach_cluster_primary_security_group = true
    # Disabling and using externally provided security groups
  }

  eks_managed_node_groups = local.node_groups

  tags = merge({ Name = var.eks_cluster_name }, var.tags)
}
```
Steps to reproduce the behavior:
- no
- This is a problem in CI/CD (GitHub Actions), so there is no local cache.
- Deployed via laptop (works fine), but when I introduce another principal to deploy it, I run into a problem. I can reproduce it when I assume the CI/CD role and run an apply.
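For reference, this is roughly how the second principal can be exercised from a laptop: point the AWS provider at the CI/CD role. The role ARN below is a placeholder, not taken from the report:

```hcl
provider "aws" {
  region = var.region

  # Assume the same IAM role the CI/CD pipeline uses, so a local
  # plan/apply runs as that principal rather than the laptop identity.
  assume_role {
    role_arn = "arn:aws:iam::111122223333:role/cicd-deploy" # hypothetical ARN
  }
}
```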
Expected behavior
After the summary of the plan, I expect Terraform to return exit code 0 and terminate.
Actual behavior
`terraform plan` returns an error:

```
terraform plan
... all output is as expected ...
Plan: 25 to add, 74 to change, 19 to destroy.
Releasing state lock. This may take a few moments...
Error: Process completed with exit code 1.

Error: Unauthorized

  with module.dockyard.kubernetes_storage_class.gp3,
  on .terraform/modules/dockyard/terraform/eks-addon.tf line 37, in resource "kubernetes_storage_class" "gp3":
  37: resource "kubernetes_storage_class" "gp3" {
```
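Note that the `Unauthorized` error comes from the `kubernetes` provider, which authenticates to the cluster as whatever AWS principal is running Terraform. A typical provider setup with this module looks like the following (a sketch of an assumed configuration, not taken from the report):

```hcl
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  # Fetch a cluster token for the caller's current AWS identity; if that
  # identity is not mapped inside the cluster, API calls fail as Unauthorized.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}
```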
This seems to be a permissions issue with having multiple principals deploy the EBS CSI driver. I filed a ticket with Amazon support, and the IAM roles appear to be set up properly.
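For context, on clusters created with v19 of this module, a second IAM principal usually has to be mapped into the cluster's `aws-auth` ConfigMap before it can manage Kubernetes resources; IAM permissions alone are not enough. A minimal sketch using the module's own inputs (the role ARN, username, and group are placeholders):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.16.0"

  # ... existing configuration from the reproduction above ...

  # Let the module manage the aws-auth ConfigMap
  manage_aws_auth_configmap = true

  # Map the CI/CD role into the cluster so its plans/applies are authorized
  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::111122223333:role/cicd-deploy" # hypothetical CI/CD role
      username = "cicd-deploy"
      groups   = ["system:masters"]
    },
  ]
}
```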
Terminal Output Screenshot(s)
Additional context