Open t0yv0 opened 1 week ago
Hey @t0yv0 👋 Thank you for taking the time to raise this! In this case, the resource is behaving as I would expect it to, and as mentioned in the warning in the resource documentation:
> To prevent persistent drift, ensure any `aws_iam_role_policy` resources managed alongside this resource are included in the `policy_names` argument.
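For reference, here is a minimal sketch of that pattern. The role and policy names below are hypothetical and not taken from your configuration:

```hcl
resource "aws_iam_role" "example" {
  name = "example-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "policy1" {
  name = "policy1"
  role = aws_iam_role.example.name
  policy = jsonencode({
    Version   = "2012-10-17"
    Statement = [{ Effect = "Allow", Action = "s3:ListAllMyBuckets", Resource = "*" }]
  })
}

resource "aws_iam_role_policy" "policy2" {
  name = "policy2"
  role = aws_iam_role.example.name
  policy = jsonencode({
    Version   = "2012-10-17"
    Statement = [{ Effect = "Allow", Action = "ec2:DescribeInstances", Resource = "*" }]
  })
}

# Referencing the managed aws_iam_role_policy resources by attribute keeps
# policy_names in sync and creates the implicit dependency described below.
resource "aws_iam_role_policies_exclusive" "example_policies" {
  role_name = aws_iam_role.example.name
  policy_names = [
    aws_iam_role_policy.policy1.name,
    aws_iam_role_policy.policy2.name,
  ]
}
```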
The reason you're seeing the specific behavior you outlined can be explained as follows:
During the initial apply, `aws_iam_role_policy.policy1` and `aws_iam_role_policy.policy2` are applied to the role. Because `aws_iam_role_policies_exclusive.example_policies` has an implicit dependency on both of these `aws_iam_role_policy` resources, Terraform is able to correctly order the creation of the resources, resulting in a stable configuration that matches reality. Further `terraform apply` runs will result in "no configuration changes" (barring any policies being applied outside of Terraform).
This is the point where the following note from the documentation becomes relevant:
> This will not prevent inline policies from being assigned to a role via Terraform (or any other interface). This resource enables bringing inline policy assignments into a configured state, however, this reconciliation happens only when `apply` is proactively run.
Since `aws_iam_role_policies_exclusive.example_policies` has no dependency on `aws_iam_role_policy.policy3`, when Terraform is run again, during the `plan` phase, `aws_iam_role.example` still only has `policy1` and `policy2` attached. With that in mind, Terraform detects no changes to `aws_iam_role_policies_exclusive.example_policies`, and thus determines that no action needs to be taken on it during the `apply`. During the apply, `aws_iam_role_policy.policy3` is added to `aws_iam_role.example`, as requested.
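Concretely, the situation being described looks roughly like this (continuing the hypothetical names from the sketch above): `policy3` is attached to the role directly, but nothing ties it to the exclusive resource.

```hcl
# Added later, without updating policy_names on the exclusive resource,
# so aws_iam_role_policies_exclusive has no dependency on it.
resource "aws_iam_role_policy" "policy3" {
  name = "policy3"
  role = aws_iam_role.example.name
  policy = jsonencode({
    Version   = "2012-10-17"
    Statement = [{ Effect = "Allow", Action = "sqs:ListQueues", Resource = "*" }]
  })
}
```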
During the `plan` phase of the third `apply`, Terraform will detect that all three `aws_iam_role_policy` resources have no changes; however, `aws_iam_role_policies_exclusive.example_policies` will now detect the addition of `policy3`, which it will then attempt to remove. The `aws_iam_role_policies_exclusive.example_policies` and `aws_iam_role_policy.policy3` resources will go back and forth adding and subsequently removing the policy on each `terraform apply`.
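The way out of that cycle is the guidance from the warning quoted above: list `policy3` alongside the others, so the exclusive resource and the inline policies describe the same set. A sketch, again with the hypothetical names, updating the earlier exclusive resource:

```hcl
resource "aws_iam_role_policies_exclusive" "example_policies" {
  role_name = aws_iam_role.example.name
  policy_names = [
    aws_iam_role_policy.policy1.name,
    aws_iam_role_policy.policy2.name,
    aws_iam_role_policy.policy3.name,
  ]
}
```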
The reason for the lack of an error message is twofold, the first part being the options offered upstream. As far as I'm aware, there's no upstream API that would allow locking a role to a specific set of policies. Instead, during the `plan` phase of a Terraform run, the resource reads the policies that are attached to the role, compares that to the configuration, and makes whatever adjustments are necessary to ensure the policies provided in `policy_names` are the only ones attached. Put another way, the "exclusivity" is reactively managed by Terraform. The second part of the story is that there's no mechanism for telling Terraform that if resource `x` exists in a configuration, then `y` other resources should not be present, or must be configured in a specific way. Since that's not possible, there's no way for `aws_iam_role_policies_exclusive.example_policies` to proactively detect `aws_iam_role_policy` resources in the configuration that add policies to the role that aren't already in the `policy_names` configuration, and raise an error.
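Given that, one way to reduce the chance of the two drifting apart is to derive both the inline policies and `policy_names` from a single definition. This is only a sketch of a possible pattern (not an official recommendation), reusing the hypothetical role from above:

```hcl
locals {
  # Single source of truth for the role's inline policies (hypothetical contents).
  inline_policies = {
    policy1 = "s3:ListAllMyBuckets"
    policy2 = "ec2:DescribeInstances"
  }
}

resource "aws_iam_role_policy" "managed" {
  for_each = local.inline_policies

  name = each.key
  role = aws_iam_role.example.name
  policy = jsonencode({
    Version   = "2012-10-17"
    Statement = [{ Effect = "Allow", Action = each.value, Resource = "*" }]
  })
}

# policy_names is computed from the same resources, so adding an entry to
# local.inline_policies updates both sides in one place.
resource "aws_iam_role_policies_exclusive" "example_policies" {
  role_name    = aws_iam_role.example.name
  policy_names = [for p in aws_iam_role_policy.managed : p.name]
}
```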
I hope that information helps. If any of that is unclear, or if you have any follow up questions, let me know. Otherwise, we'll get this one closed out. As always, we appreciate your feedback!
@justinretzolk thank you! Feel free to close the issue if there is no intent to change this. Indeed this summarizes it nicely:
> Put another way, the "exclusivity" is reactively managed by Terraform.

I saw the docs, and I think "this reconciliation happens only when `apply` is proactively run" covers this case.
You might consider adding a warning that this resource does not manage exclusivity when run with `-refresh=false` and is effectively incompatible with that flag, something else I discovered. Without a refresh, new inline policies are accepted silently. This is probably not a deal breaker, since most users do not set this flag.
Glad that helped @t0yv0! One quick point of clarification before we get this closed out:
> Without refresh, new inline policies are accepted silently.

In your experience, was this only for the run that included `-refresh=false`, or did you notice functionality breaking on subsequent runs without the flag too?
Terraform Core Version
1.8.3
AWS Provider Version
5.75.1
Affected Resource(s)
aws_iam_role_policies_exclusive
aws_iam_role_policy
Expected Behavior
After I locked the set of policy names in an `aws_iam_role_policies_exclusive` resource, I would expect that adding additional `aws_iam_role_policy` resources that are not allow-listed in the exclusive resource would be rejected with an error, or at least not applied to the actual cloud.
Actual Behavior
Terraform adds the additional role policies even if they are not listed in the `aws_iam_role_policies_exclusive` resource.
Relevant Error/Panic Output Snippet
Terraform Configuration Files
Steps to Reproduce
Debug Output
N/A
Panic Output
N/A
Important Factoids
A subsequent `terraform plan` detects that something is wrong, and running `terraform apply` again gets to the desired state. However, it is unfortunate that two `terraform apply` invocations are required to reach the desired state.
References
N/A
Would you like to implement a fix?
No