Closed JPurcellHCP closed 1 year ago
As a note, you also can't destroy this once the apply has failed. The EKS module has partially built, but line 242 expects `module.ebs_kms_key.key_arn`, and since the KMS key failed to build and didn't produce any outputs, the `terraform init` step of the destroy command errors out with:
```
module.ebs_kms_key is object with 5 attributes
This object does not have an attribute named "key_arn"
```
Commenting out line 242 allows you to destroy this; alternatively, adding a `depends_on` to the module could circumvent this.
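A minimal sketch of the `depends_on` workaround mentioned above (the module labels and source path are assumptions based on the example's layout, not the actual file):

```hcl
# Hypothetical sketch: make the EKS module wait on the KMS key module,
# so a failed KMS key creation doesn't leave the configuration in a
# state where later references to its outputs break plan/destroy.
module "eks" {
  source = "../.."

  # ... existing arguments ...

  depends_on = [module.ebs_kms_key]
}
```

Note that `depends_on` on module blocks requires Terraform 0.13 or newer, which this example already satisfies.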
Hello,
If this is a fresh new AWS account that never had an ASG, you might be missing the role:
AWSServiceRoleForAutoScaling
Try to run this command before doing the apply:
aws iam create-service-linked-role --aws-service-name autoscaling.amazonaws.com
The KMS key policy contains this role as a principal (the ASG needs to use this KMS key to decrypt the EBS volumes of the managed node group), but the role might not exist in the account if you have never created it or created an ASG before, so creating it manually should solve the issue.
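If you would rather manage the role in Terraform than run the one-off CLI command, the AWS provider offers the `aws_iam_service_linked_role` resource. A minimal sketch, assuming the role does not already exist in the account (the resource errors if it does):

```hcl
# Creates the AWSServiceRoleForAutoScaling service-linked role that the
# KMS key policy references as a principal. Apply this before the KMS
# key (or wire it in via depends_on) so the principal resolves.
resource "aws_iam_service_linked_role" "autoscaling" {
  aws_service_name = "autoscaling.amazonaws.com"
}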
This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove stale label or comment or this issue will be closed in 10 days
This issue was automatically closed after being stale for 10 days
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
Using the /examples/eks_managed_node_group example, cloned as of today. Running terraform init and plan is fine and shows the expected output, but terraform apply fails at the stage of creating the KMS key.
The error code provided is
```
╷
│ Error: creating KMS Key: MalformedPolicyDocumentException: Policy contains a statement with one or more invalid principals.
│
│   with module.ebs_kms_key.aws_kms_key.this[0],
│   on .terraform/modules/ebs_kms_key/main.tf line 8, in resource "aws_kms_key" "this":
│    8: resource "aws_kms_key" "this" {
│
╵
Operation failed: failed running terraform apply (exit 1)
```
Versions
Module version [Required]: Latest
Terraform version: v1.4.2
Provider version(s): AWS >= 4.47 Kubernetes >= 2.10 Terraform >=1.0
Reproduction Code [Required]
Steps to reproduce the behavior:
1. Git clone the repository and cd to /examples/eks_managed_node_group/
2. Create a CLI-driven workspace in TFCB and copy & paste the workspace cloud stanza
3. Push AWS credentials to the workspace
4. Run terraform init, terraform plan, terraform apply
Expected behavior
The workspace successfully finishes applying and an EKS cluster is created
Actual behavior
After attempting to build module.ebs_kms_key.aws_kms_key.this[0] for roughly 1m50s, it returns the following error:
```
╷
│ Error: creating KMS Key: MalformedPolicyDocumentException: Policy contains a statement with one or more invalid principals.
│
│   with module.ebs_kms_key.aws_kms_key.this[0],
│   on .terraform/modules/ebs_kms_key/main.tf line 8, in resource "aws_kms_key" "this":
│    8: resource "aws_kms_key" "this" {
│
╵
Operation failed: failed running terraform apply (exit 1)
```
This stops the apply, and any resources that depend on module.ebs_kms_key are also not created.
Additional context
The run finishes after 73 resources have been created; the key pair, VPC, and some parts of the EKS module are all built, but the KMS key and all of its dependencies are blocked by this failure.
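To illustrate why the principal becomes invalid: an illustrative fragment (resource names and action list are assumptions, not the example's actual policy) of a KMS key policy statement that grants the ASG service-linked role. If the ARN below does not resolve to an existing role in the account, AWS rejects the whole policy with `MalformedPolicyDocumentException`:

```hcl
data "aws_caller_identity" "current" {}

# Illustrative sketch: grants the autoscaling service-linked role use of
# the key. KMS validates every principal at key-creation time, so a
# not-yet-created AWSServiceRoleForAutoScaling fails the entire statement.
data "aws_iam_policy_document" "ebs_kms" {
  statement {
    sid       = "AllowASGServiceLinkedRole"
    actions   = ["kms:Decrypt", "kms:GenerateDataKey*", "kms:CreateGrant"]
    resources = ["*"]

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling",
      ]
    }
  }
}
```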