This is a tad more important now considering the upcoming changes enforcing service role utilisation on March 26th, especially where external CMKs are in use. Attaching a policy to a key that exists outside of Terraform's state will become less of an edge case.
I'd be more than happy to take this on, should have a free Sunday afternoon.
Any word on this, or potential workarounds? We're trying to create a key policy that allows access to our Kinesis resources. Unfortunately, the Kinesis resources need the ARN of the KMS key, the KMS key policy now needs the ARNs of the Kinesis resources, and Terraform errors out with a cycle detection issue.
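For illustration, a minimal sketch of the cycle with hypothetical names (the encryption-context condition key follows the Kinesis SSE documentation, but treat the policy content as illustrative):

```hcl
# Hypothetical sketch of the dependency cycle described above.
data "aws_iam_policy_document" "key_policy" {
  statement {
    sid       = "AllowKinesisToUseTheKey"
    effect    = "Allow"
    actions   = ["kms:GenerateDataKey", "kms:Decrypt"]
    resources = ["*"]

    principals {
      type        = "Service"
      identifiers = ["kinesis.amazonaws.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "kms:EncryptionContext:aws:kinesis:arn"
      values   = [aws_kinesis_stream.example.arn] # the policy needs the stream ARN...
    }
  }
}

resource "aws_kms_key" "example" {
  policy = data.aws_iam_policy_document.key_policy.json
}

resource "aws_kinesis_stream" "example" {
  name            = "example"
  shard_count     = 1
  encryption_type = "KMS"
  kms_key_id      = aws_kms_key.example.arn # ...while the stream needs the key ARN: cycle
}
```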
Does anyone have a workaround for assigning a policy that allows a service to use a KMS key? The current aws_kms_key resource only allows assigning a policy on itself, so the policy cannot look up the key's own ARN. If I could attach the policy after key creation, I could look up the key ARN and create the policy to assign.
aws_kms_key_policy_attachment is definitely what I want and covers this scenario.
```json
{
  "Id": "key-consolepolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Allow GuardDuty to use the key",
      "Effect": "Allow",
      "Principal": {"Service": "guardduty.amazonaws.com"},
      "Action": "kms:GenerateDataKey",
      "Resource": "arn:aws:kms:[region]:111122223333:key/[KMSKeyId]"
    }
  ]
}
```
@guitarrapc you can use an aws_iam_policy_document. See my doc PR.
UPDATE: Ah, just realized everyone is talking about updating the key policy after the key has been created. Does updating the policy argument not allow that?
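For reference, a minimal sketch of the aws_iam_policy_document approach (names are placeholders):

```hcl
# Build the key policy with the data source and pass it to the key at
# creation time via the policy argument. The account ID comes from the
# caller identity, so nothing needs to reference the key itself.
data "aws_caller_identity" "current" {}

data "aws_iam_policy_document" "key_policy" {
  statement {
    sid       = "Enable IAM User Permissions"
    effect    = "Allow"
    actions   = ["kms:*"]
    resources = ["*"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"]
    }
  }
}

resource "aws_kms_key" "example" {
  policy = data.aws_iam_policy_document.key_policy.json
}
```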
This is an important feature for e.g. CloudWatch log groups:
- I need my KMS key policy to know the CloudWatch log group ARN (doc)
- I need my CloudWatch log group to know my KMS key ARN (terraform resource)
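One way to break this particular cycle, sketched below under the assumption that the log group name is known up front, is to assemble the log group ARN from its parts rather than referencing the resource:

```hcl
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

locals {
  # Assumption: the log group name is fixed ahead of time, so its ARN can
  # be constructed without referencing aws_cloudwatch_log_group.
  log_group_name = "my-log-group"
  log_group_arn  = "arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:log-group:${local.log_group_name}"
}

# The key policy can now use local.log_group_arn (no dependency on the
# log group), while the log group still references the key's ARN.
```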
Agreed with all the above. I was trying to follow the policy recommendations for setting up a secure bucket for CloudTrail logging, but they want the key_id for some of the rules. So I can't create the policy doc without the key, or the key without the policy doc. Having a separate aws_kms_key_policy resource (it's not really an attachment, because we're supplying a policy document and not a policy resource) would break that dependency cycle and allow us to create these resources the way we would with the CLI or console.
I have another use-case that requires this. Some keys must have custom policies due to the way some AWS services work (notably CloudWatch Logs and AWS AutoScaling). If your TF code is structured something like this:
a) Create KMS keys.
b) Create IAM user policies (that refer to those keys).
...
x) Create additional IAM principals as part of your module, for which there must be custom key policies.
then you'd naturally want to have the following continuation:
y) Create KMS custom key policies that reference IAM principals from x).
z) Attach custom key policies to keys created in a).
As the feature is missing, you have a problem: you are tied to creating KMS keys together with their policies in a). From here you have two options:
1) Create your additional IAM principals from x) also in a), which messes up your (linear) project structure.
2) Run your TF code in two stages: comment out the key policies in a) (because the additional principals to which they refer do not exist yet), run the project, then remove the comments in a) and run the project again.
For the time being I implemented a workaround using the null_resource and AWS CLI (the usual approach to circumventing gaps in TF):
resource "null_resource" "kms_key_policy_assoc" {
provisioner "local-exec" {
command = <<DOC
aws kms put-key-policy --key-id ${var.my_kms_key_id} --policy-name default --policy ${replace(jsonencode(data.aws_iam_policy_document.my_custom_key_policy.json), "\\n", "")}
DOC
on_failure = fail
}
triggers = {
rerun_upon_change_of = sha1(data.aws_iam_policy_document.my_custom_key_policy.json)
}
}
I am encountering the same circular dependency issue explained within this thread: the KMS policy requires entries from another resource, while the latter needs the KMS ARN to be created.
Being able to leverage the PutKeyPolicy endpoint of the AWS KMS API would be great to break the circle, something like:
- aws_kms_key doesn't have any policy definition
- aws_kms_key_policy_attachment can then be used to attach a policy, including ARN references from other resources (see the sketch below)

I wonder how we can raise attention on this issue with the right owners: from this document, AWS seems to have a dedicated channel to escalate issues and prioritize them.
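A sketch of that proposed split (aws_kms_key_policy_attachment does not exist; the shape below is purely illustrative):

```hcl
resource "aws_kms_key" "example" {
  # No policy argument: AWS assigns the default key policy at creation.
}

resource "aws_kms_key_policy_attachment" "example" {
  key_id = aws_kms_key.example.key_id
  # The document can reference ARNs of resources that themselves
  # reference aws_kms_key.example.arn, without creating a cycle.
  policy = data.aws_iam_policy_document.custom_key_policy.json
}
```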
Hi everyone, was just looking at this and wanted to share my initial thoughts -
Unlike other resources, such as S3 buckets, where a bucket policy does not exist unless you define one, KMS keys automatically generate and assign a default policy even if you do not specify one during creation.
Creating a resource such as aws_kms_key_policy_attachment would imply not attaching a policy, but replacing the existing policy for the key. In cases where it's still the default, that may be OK, but in some other cases where that key may have been modified elsewhere, that can be a bit dangerous.
It's also worth noting that the CloudFormation implementation of this is the same as Terraform today.
I was puzzled as to why I haven't run into this issue myself in the past, and it seems I have been creating KMS key policies as templates and filling them in with the templatefile function. This can also be accomplished in a similar manner using aws_iam_policy_document. For example,
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${data_aws_caller_identity_current_account_id}:root"
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.${data_aws_region_current_name}.amazonaws.com"
      },
      "Action": [
        "kms:Encrypt*",
        "kms:Decrypt*",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:Describe*"
      ],
      "Resource": "*",
      "Condition": {
        "ArnLike": {
          "kms:EncryptionContext:aws:logs:arn": "arn:aws:logs:${data_aws_region_current_name}:${data_aws_caller_identity_current_account_id}:log-group:${log_group_name}:*"
        }
      }
    }
  ]
}
```
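Assuming the template above is saved as key_policy.json.tftpl, the wiring might look like this (file and variable names are illustrative):

```hcl
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

resource "aws_kms_key" "logs" {
  # Render the JSON template, filling in the placeholders it expects.
  policy = templatefile("${path.module}/key_policy.json.tftpl", {
    data_aws_caller_identity_current_account_id = data.aws_caller_identity.current.account_id
    data_aws_region_current_name                = data.aws_region.current.name
    log_group_name                              = "my-log-group"
  })
}
```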
I'm wondering what the community's thoughts are on the problem statement above and the proposed solution.
> Creating a resource such as aws_kms_key_policy_attachment would imply not attaching a policy, but replacing the existing policy for the key. In cases where it's still the default, that may be OK, but in some other cases where that key may have been modified elsewhere, that can be a bit dangerous.
Good point. On top of that, if aws_kms_key has a policy defined, there would be a race condition, given that both resources would trigger an update.
A possible solution could be using aws_kms_key without a policy defined and then replacing the default with a complete definition applied through aws_kms_key_policy_attachment.
By doing so, we would break the circular dependency and cover an additional provisioning scenario through Terraform as well.
This functionality has been released in v4.59.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
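For anyone arriving here later, the feature shipped as the aws_kms_key_policy resource; a minimal sketch based on its documentation:

```hcl
resource "aws_kms_key" "example" {}

# Manage the key policy standalone, after the key exists, so the policy
# may reference ARNs of resources that depend on the key.
resource "aws_kms_key_policy" "example" {
  key_id = aws_kms_key.example.id
  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "key-default"
    Statement = [{
      Sid       = "Enable IAM User Permissions"
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::111122223333:root" }
      Action    = "kms:*"
      Resource  = "*"
    }]
  })
}
```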
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
This issue was originally opened by @kojiromike as hashicorp/terraform#11137. It was migrated here as part of the provider split. The original body of the issue is below.
Terraform Version
0.8.3
Affected Resource(s)
aws_kms_key
Terraform Configuration Files
Perhaps something like
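A hypothetical sketch (the resource name and arguments are illustrative only; no such resource existed at the time):

```hcl
# Hypothetical: attach/replace the policy on an existing key, which may
# be unmanaged by Terraform or created elsewhere.
resource "aws_kms_key_policy" "example" {
  key_id = "1234abcd-12ab-34cd-56ef-1234567890ab" # an existing, possibly unmanaged key
  policy = data.aws_iam_policy_document.example.json
}
```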
Important Factoids
The AWS API supports modifying the policy for a key after it's created. Sometimes it's important to do this to get ordering right in Terraform, particularly if the key itself is unmanaged or in a migration.