Closed Fydon closed 5 months ago
Hi @Fydon, thanks for the issue. I've raised this with the team so it can be prioritized appropriately and added to our work stack.
Hello @roothorp, is there some news regarding this issue?
Unfortunately this is still an issue with the latest:
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
const lambdaRolePolicy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Principal": {
                "Service": "lambda.amazonaws.com",
            },
            "Effect": "Allow",
            "Sid": "",
        },
    ],
};
const role = new aws.iam.Role("my-role", {
    assumeRolePolicy: JSON.stringify(lambdaRolePolicy),
});
const attachment = new aws.iam.RolePolicyAttachment("my-rpa", {
    role: role,
    policyArn: aws.iam.ManagedPolicy.LambdaFullAccess,
});
CLI
Version 3.97.0
Go Version go1.21.4
Go Compiler gc
Plugins
NAME VERSION
aws 6.17.0
awsx 2.4.0
docker 4.5.0
docker 3.6.1
nodejs unknown
Host
OS darwin
Version 14.1.1
Arch x86_64
This project is written in nodejs: executable='/Users/t0yv0/bin/node' version='v18.18.2'
Current Stack: t0yv0/aws-2246/dev
TYPE URN
pulumi:pulumi:Stack urn:pulumi:dev::aws-2246::pulumi:pulumi:Stack::aws-2246-dev
pulumi:providers:aws urn:pulumi:dev::aws-2246::pulumi:providers:aws::default_6_17_0
aws:iam/role:Role urn:pulumi:dev::aws-2246::aws:iam/role:Role::my-role
aws:iam/rolePolicyAttachment:RolePolicyAttachment urn:pulumi:dev::aws-2246::aws:iam/rolePolicyAttachment:RolePolicyAttachment::my-rpa
Found no pending operations associated with dev
Backend
Name pulumi.com
URL https://app.pulumi.com/t0yv0
User t0yv0
Organizations t0yv0, pulumi
Token type personal
Dependencies:
NAME VERSION
@pulumi/pulumi 3.99.0
@types/node 18.19.3
@pulumi/aws 6.17.0
@pulumi/awsx 2.4.0
Pulumi locates its logs in /var/folders/gk/cchgxh512m72f_dmkcc3d09h0000gp/T/com.apple.shortcuts.mac-helper// by default
After calling pulumi up, pulumi refresh shows a non-empty diff and wants to add managedPolicyArns to the outputs of aws.iam.Role. If I let it proceed, then subsequent pulumi up invocations work as expected, but this behavior is confusing.
"managedPolicyArns": [
"arn:aws:iam::aws:policy/AWSLambda_FullAccess"
],
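To make the confusing diff concrete, here is a minimal hypothetical sketch (plain TypeScript, no Pulumi APIs; the object shapes and the `addedKeys` helper are invented for illustration) of why refresh reports a change: the state recorded by `pulumi up` never mentions `managedPolicyArns`, while the cloud read does.

```typescript
// Hypothetical illustration (not a Pulumi API) of why refresh shows a diff:
// the state recorded by `pulumi up` omits managedPolicyArns, while the
// read performed by `pulumi refresh` returns it from the cloud.
const stateAfterUp: Record<string, unknown> = {
  name: "my-role",
  assumeRolePolicy: "{...}",
};

const cloudRead: Record<string, unknown> = {
  name: "my-role",
  assumeRolePolicy: "{...}",
  managedPolicyArns: ["arn:aws:iam::aws:policy/AWSLambda_FullAccess"],
};

// Keys present in the cloud read but absent from the recorded state.
function addedKeys(
  state: Record<string, unknown>,
  read: Record<string, unknown>
): string[] {
  return Object.keys(read).filter((k) => !(k in state));
}

console.log(addedKeys(stateAfterUp, cloudRead)); // ["managedPolicyArns"]
```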
Noting that this issue also seems to affect Node overlays such as bucket.onObjectCreated, and most of the examples in the pulumi-aws repo, such as this one:
https://github.com/pulumi/pulumi-aws/blob/master/examples/bucket/index.ts#L36
Also seeing this, but if you run the CLI up or refresh commands enough times, the RolePolicyAttachment resource will get deleted and then recreated again.
So with the following code:
import * as aws from "@pulumi/aws";
const ec2Role = new aws.iam.Role("piers-role", {
    assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal(aws.iam.Principals.Ec2Principal),
    managedPolicyArns: [
        "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    ],
});
const rpa = new aws.iam.RolePolicyAttachment("piers-rpa", {
    policyArn: "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    role: ec2Role.name,
});
I get the following results in the output:
Running `pulumi up -f` (with an empty stack), then `pulumi refresh`, and accepting the refresh changes causes this:
pulumi up -f
Updating (dev)
View in Browser (Ctrl+O): https://app.pulumi.com/pierskarsenbarg/iam-role-attachments/dev/updates/12
Type Name Status Info
pulumi:pulumi:Stack iam-role-attachments-dev
~ └─ aws:iam:Role piers-role updated (1s) [diff: ~managedPolicyArns]
Resources:
~ 1 updated
2 unchanged
Duration: 6s
Then `pulumi refresh -f` followed by `pulumi up -f`, and we've gone full circle. There were no code changes between the first `pulumi up` and the last one.
I believe the Pulumi provider inherits this behavior from Terraform in https://github.com/hashicorp/terraform-provider-aws/blob/master/internal/service/iam/role.go#L640
This is a somewhat tough problem. I understand that the TF provider introduced these "sidecar" resources as a recommended alternative to resource arguments like managed_policy_arns. In your use cases, do you always use sidecar resources, or do you use a mix?
Ideally we would fix it at the provider level, but the provider only has per-resource visibility in the current lifecycle, so the Role resource cannot accurately tell whether a RolePolicyAttachment managing the role is present in the stack. However, we could perhaps add a provider-level configuration option that forces Role and other resources to ignore properties managed by sidecar resources. Users that prefer sidecar resources in their stacks could then opt into the desired behavior.
I believe that the example @pierskarsenbarg provides is expected behaviour called out in the documentation:
NOTE: If you use this resource’s managed_policy_arns argument or inline_policy configuration blocks, this resource will take over exclusive management of the role’s respective policy types (e.g., both policy types if both arguments are used). These arguments are incompatible with other ways of managing a role’s policies, such as aws.iam.PolicyAttachment, aws.iam.RolePolicyAttachment, and aws.iam.RolePolicy. If you attempt to manage a role’s policies by multiple means, you will get resource cycling and/or errors.
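The "resource cycling" the note warns about can be sketched as two reconcilers fighting over the same attachment set. This is a hypothetical simulation, not provider code; `reconcileRole` and `reconcileAttachment` are invented names for illustration.

```typescript
// Hypothetical simulation of the documented "resource cycling": a Role that
// declares managedPolicyArns takes exclusive ownership of attachments, while
// a RolePolicyAttachment manages one attachment independently.
type Attachments = Set<string>;

// Exclusive management: enforce exactly the declared ARNs, detaching the rest.
function reconcileRole(_actual: Attachments, declared: string[]): Attachments {
  return new Set(declared);
}

// The sidecar resource only ensures its own ARN is attached.
function reconcileAttachment(actual: Attachments, arn: string): Attachments {
  return new Set([...actual, arn]);
}

let attached: Attachments = new Set();
// Role declares managedPolicyArns: [] — exclusive management of an empty set.
attached = reconcileRole(attached, []);
// The sidecar attaches its policy.
attached = reconcileAttachment(attached, "arn:aws:iam::aws:policy/ReadOnlyAccess");
// The next Role reconcile detaches it again: the two resources cycle forever.
attached = reconcileRole(attached, []);

console.log(attached.size); // 0 — the sidecar's attachment was removed
```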
The issue I'm raising is that `pulumi refresh` causes `managed_policy_arns` to be set even if you are only using `aws.iam.PolicyAttachment`, `aws.iam.RolePolicyAttachment`, and `aws.iam.RolePolicy`, which then results in resources being deleted and then only potentially recreated. This means that I need to run `pulumi refresh` multiple times for the environment to actually have the policy attachment recorded in the Pulumi state.
@Fydon good catch on that documentation
I believe I understand your issue @Fydon - the problem is that this behavior is also how Terraform works and we are by default inheriting this behavior.
resource "aws_iam_role" "test_role" {
  name = "test_role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      },
    ]
  })
}

data "aws_iam_policy_document" "policy" {
  statement {
    effect    = "Allow"
    actions   = ["ec2:Describe*"]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "policy" {
  name        = "test-policy-anton"
  description = "A test policy"
  policy      = data.aws_iam_policy_document.policy.json
}

resource "aws_iam_role_policy_attachment" "test-attach" {
  role       = aws_iam_role.test_role.name
  policy_arn = aws_iam_policy.policy.arn
}
Then after `terraform apply`, the state of the role is:
resource "aws_iam_role" "test_role" {
  arn = "arn:aws:iam::616138583583:role/test_role"
  assume_role_policy = jsonencode(
    {
      Statement = [
        {
          Action = "sts:AssumeRole"
          Effect = "Allow"
          Principal = {
            Service = "lambda.amazonaws.com"
          }
          Sid = ""
        },
      ]
      Version = "2012-10-17"
    }
  )
  create_date           = "2024-04-03T17:11:02Z"
  force_detach_policies = false
  id                    = "test_role"
  managed_policy_arns   = []
  max_session_duration  = 3600
  name                  = "test_role"
  path                  = "/"
  tags                  = {}
  tags_all              = {}
  unique_id             = "AROAY65FYVYP4VQ7PMG6T"
}
However, if you do a `terraform refresh` after this, you get:
# aws_iam_role.test_role:
resource "aws_iam_role" "test_role" {
  arn = "arn:aws:iam::616138583583:role/test_role"
  assume_role_policy = jsonencode(
    {
      Statement = [
        {
          Action = "sts:AssumeRole"
          Effect = "Allow"
          Principal = {
            Service = "lambda.amazonaws.com"
          }
          Sid = ""
        },
      ]
      Version = "2012-10-17"
    }
  )
  create_date           = "2024-04-03T17:11:02Z"
  force_detach_policies = false
  id                    = "test_role"
  managed_policy_arns = [
    "arn:aws:iam::616138583583:policy/test-policy-anton",
  ]
  max_session_duration  = 3600
  name                  = "test_role"
  path                  = "/"
  tags                  = {}
  tags_all              = {}
  unique_id             = "AROAY65FYVYP4VQ7PMG6T"
}
We're actually considering fixing this, but fixing it either upstream or in the bridged provider is not straightforward, since aws_iam_role cannot easily access information on whether an aws_iam_role_policy_attachment exists for the role. Since this is difficult, I'm looking for feedback on whether changing the default to not import managed_policy_arns might break users that are explicitly using it instead of the sidecar resource.
Venelin suggested an ignoreChanges workaround; let me try it here to see what happens.
ignoreChanges unfortunately is not sufficient to work around the problem here, but https://github.com/pulumi/pulumi/pull/16015 should be.
Looking at this in some more detail to see if we could fix anything here provider side. For the variation of the program provided by @pierskarsenbarg :
import * as aws from "@pulumi/aws";
const ec2Role = new aws.iam.Role("piers-role", {
    assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal(aws.iam.Principals.Ec2Principal),
});
const rpa = new aws.iam.RolePolicyAttachment("piers-rpa", {
    policyArn: "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    role: ec2Role.name,
}, {
    ignoreChanges: ["managedPolicyArns"],
});
We do a pulumi up followed by pulumi refresh. The gRPC log of the refresh has these two Read calls, which do not appear to be guaranteed to be ordered:
{
"method": "/pulumirpc.ResourceProvider/Read",
"request": {
"id": "piers-role-ccc6989-20240422173755107000000001",
"urn": "urn:pulumi:dev::aws-2246::aws:iam/rolePolicyAttachment:RolePolicyAttachment::piers-rpa",
"properties": {
"id": "piers-role-ccc6989-20240422173755107000000001",
"policyArn": "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
"role": "piers-role-ccc6989"
},
"inputs": {
"__defaults": [],
"policyArn": "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
"role": "piers-role-ccc6989"
}
},
"response": {
"id": "piers-role-ccc6989-20240422173755107000000001",
"properties": {
"id": "piers-role-ccc6989-20240422173755107000000001",
"policyArn": "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
"role": "piers-role-ccc6989"
},
"inputs": {
"__defaults": [],
"policyArn": "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
"role": "piers-role-ccc6989"
}
}
}
{
"method": "/pulumirpc.ResourceProvider/Read",
"request": {
"id": "piers-role-ccc6989",
"urn": "urn:pulumi:dev::aws-2246::aws:iam/role:Role::piers-role",
"properties": {
"arn": "arn:aws:iam::616138583583:role/piers-role-ccc6989",
"assumeRolePolicy": "{\"Statement\":[{\"Action\":\"sts:AssumeRole\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"ec2.amazonaws.com\"},\"Sid\":\"AllowAssumeRole\"}],\"Version\":\"2012-10-17\"}",
"createDate": "2024-04-22T17:37:54Z",
"description": "",
"forceDetachPolicies": false,
"id": "piers-role-ccc6989",
"inlinePolicies": [],
"managedPolicyArns": [],
"maxSessionDuration": 3600,
"name": "piers-role-ccc6989",
"namePrefix": "",
"path": "/",
"permissionsBoundary": "",
"tags": {},
"tagsAll": {},
"uniqueId": "AROAY65FYVYP6QOMBQ7ZK"
},
"inputs": {
"__defaults": [
"forceDetachPolicies",
"maxSessionDuration",
"name",
"path"
],
"assumeRolePolicy": "{\"Statement\":[{\"Action\":\"sts:AssumeRole\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"ec2.amazonaws.com\"},\"Sid\":\"AllowAssumeRole\"}],\"Version\":\"2012-10-17\"}",
"forceDetachPolicies": false,
"managedPolicyArns": [],
"maxSessionDuration": 3600,
"name": "piers-role-ccc6989",
"path": "/"
}
},
"response": {
"id": "piers-role-ccc6989",
"properties": {
"arn": "arn:aws:iam::616138583583:role/piers-role-ccc6989",
"assumeRolePolicy": "{\"Statement\":[{\"Action\":\"sts:AssumeRole\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"ec2.amazonaws.com\"},\"Sid\":\"AllowAssumeRole\"}],\"Version\":\"2012-10-17\"}",
"createDate": "2024-04-22T17:37:54Z",
"description": "",
"forceDetachPolicies": false,
"id": "piers-role-ccc6989",
"inlinePolicies": [],
"managedPolicyArns": [
"arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
],
"maxSessionDuration": 3600,
"name": "piers-role-ccc6989",
"namePrefix": "",
"path": "/",
"permissionsBoundary": "",
"tags": {},
"tagsAll": {},
"uniqueId": "AROAY65FYVYP6QOMBQ7ZK"
},
"inputs": {
"__defaults": [
"forceDetachPolicies",
"maxSessionDuration",
"name",
"path"
],
"assumeRolePolicy": "{\"Statement\":[{\"Action\":\"sts:AssumeRole\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"ec2.amazonaws.com\"},\"Sid\":\"AllowAssumeRole\"}],\"Version\":\"2012-10-17\"}",
"forceDetachPolicies": false,
"managedPolicyArns": [],
"maxSessionDuration": 3600,
"name": "piers-role-ccc6989",
"path": "/"
}
}
}
If they were ordered we could maybe observe during Role read that it's not using managedPolicyArns and then exclude that from the Read result in RolePolicyAttachment.
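A rough sketch of that idea, under the assumption that such a filter existed (this is not provider code; `stripUnmanaged` and the `ReadResult` shape are invented for illustration): if the recorded inputs never declared a non-empty `managedPolicyArns`, drop the property from the read result so refresh sees no diff.

```typescript
// Hypothetical sketch (not provider code) of the idea above: when the
// recorded inputs never declared a non-empty managedPolicyArns, strip it
// from the freshly read properties so refresh produces no diff.
interface ReadResult {
  inputs: Record<string, unknown>;
  properties: Record<string, unknown>;
}

function stripUnmanaged(read: ReadResult, key: string): ReadResult {
  const declared = read.inputs[key];
  // Treat a missing or empty declaration as "not managed inline on the Role".
  if (Array.isArray(declared) && declared.length > 0) {
    return read;
  }
  const { [key]: _dropped, ...rest } = read.properties;
  return { inputs: read.inputs, properties: rest };
}

// Mirrors the shape of the Role Read response in the log above.
const roleRead: ReadResult = {
  inputs: { name: "piers-role-ccc6989", managedPolicyArns: [] },
  properties: {
    name: "piers-role-ccc6989",
    managedPolicyArns: ["arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"],
  },
};

const filtered = stripUnmanaged(roleRead, "managedPolicyArns");
console.log("managedPolicyArns" in filtered.properties); // false
```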
Thank you for your work on this. My only concern with ignoreChanges and ignoreRefreshChanges is that it would be harder to track if someone had added policies to the role in the AWS Console, but this is probably the best we can get for now. For my use case it is unlikely to happen.
Closing as won't fix.
The recommended workaround is https://github.com/pulumi/pulumi/pull/16015 once it becomes available. In the meantime, using ignoreChanges is also helpful here, even though it does not apply to refresh changes, as it prevents subsequent Role updates.
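To illustrate why ignoreChanges helps with updates but not with refresh, here is a hypothetical model of its semantics (not the engine's actual diff logic; `diffForUpdate` is an invented name): during an update, diffs on ignored keys are suppressed, but a refresh rewrites state from the cloud read regardless.

```typescript
// Hypothetical model (not the engine's code) of ignoreChanges semantics:
// keys whose values changed, minus any the user asked to ignore. A refresh
// bypasses this filter entirely, which is why the diff still reappears there.
function diffForUpdate(
  oldState: Record<string, unknown>,
  newInputs: Record<string, unknown>,
  ignore: string[]
): string[] {
  return Object.keys(newInputs).filter(
    (k) =>
      !ignore.includes(k) &&
      JSON.stringify(oldState[k]) !== JSON.stringify(newInputs[k])
  );
}

// After refresh, the state contains the ARN the sidecar attached...
const refreshedState = { managedPolicyArns: ["arn:aws:iam::aws:policy/ReadOnlyAccess"] };
// ...while the program never sets managedPolicyArns (effectively empty).
const programInputs = { managedPolicyArns: [] as string[] };

// With ignoreChanges, the would-be Role update is suppressed...
console.log(diffForUpdate(refreshedState, programInputs, ["managedPolicyArns"])); // []
// ...without it, the update (and the resulting policy detachment) proceeds.
console.log(diffForUpdate(refreshedState, programInputs, [])); // ["managedPolicyArns"]
```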
We are also looking at extending AWS Guard (aka pulumi-policy-aws) to add a rule that detects incompatible use of managedPolicyArns and RolePolicyAttachment in https://github.com/pulumi/pulumi-policy-aws/pull/107. It cannot fully solve the problem, as there are example programs that pass the check yet still have a refresh problem, but it can save significant time for programs where it does detect a violation.
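The kind of rule described above could look roughly like this. This is a hypothetical sketch, not the actual pulumi-policy-aws implementation; `findConflicts` and the `ResourceDecl` shape are invented for illustration.

```typescript
// Hypothetical sketch of a rule flagging programs that both set
// managedPolicyArns on a Role and attach policies to the same role
// via RolePolicyAttachment (not the actual pulumi-policy-aws code).
interface ResourceDecl {
  type: string;
  name: string;
  props: Record<string, unknown>;
}

function findConflicts(resources: ResourceDecl[]): string[] {
  // Roles referenced by at least one RolePolicyAttachment.
  const attachedRoles = new Set(
    resources
      .filter((r) => r.type === "aws:iam/rolePolicyAttachment:RolePolicyAttachment")
      .map((r) => String(r.props.role))
  );
  // Roles that also declare managedPolicyArns inline are in conflict.
  return resources
    .filter(
      (r) =>
        r.type === "aws:iam/role:Role" &&
        r.props.managedPolicyArns !== undefined &&
        attachedRoles.has(r.name)
    )
    .map((r) => r.name);
}

const decls: ResourceDecl[] = [
  {
    type: "aws:iam/role:Role",
    name: "piers-role",
    props: { managedPolicyArns: ["arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"] },
  },
  {
    type: "aws:iam/rolePolicyAttachment:RolePolicyAttachment",
    name: "piers-rpa",
    props: { role: "piers-role" },
  },
];

console.log(findConflicts(decls)); // ["piers-role"]
```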
Ultimately with https://github.com/pulumi/pulumi-aws/issues/3806 we hope to one day have a recommended way to work with these resources (either inline attributes or sidecar resources), and deprecate the other usage, which will eventually allow us to fix this problem.
What happened?
When I import an `aws.iam.Role` or run `pulumi up --refresh`, Pulumi will populate `managedPolicyArns` even if you use `aws.iam.PolicyAttachment`, `aws.iam.RolePolicyAttachment` or `aws.iam.RolePolicy` to attach the same set of policies. Pulumi warns that you should not use both.
It is possible to use a resource transform or to set `managedPolicyArns: []` to remove `managedPolicyArns`, but then running `pulumi up --refresh` results in policies being detached and then only maybe being reattached.
I asked about this problem on Slack, but the only responses I received were from someone who says they are only a user, so I still feel that this is a bug. They say that the notes about `managedPolicyArns` are about a user setting it, and that the conflict does not exist if Pulumi sets it. I have never encountered any other application where a property value that would cause a problem when set by the user is somehow not a problem when the tool sets it itself.
Steps to reproduce
1. Create an `aws.iam.Role` setting `managedPolicyArns: []`, and use `aws.iam.PolicyAttachment`, `aws.iam.RolePolicyAttachment` or `aws.iam.RolePolicy` to attach the same set of policies.
2. Run `pulumi up --refresh`.
Expected Behavior
After performing the steps, the Pulumi state continues to have `managedPolicyArns` set to the empty set.
Actual Behavior
After performing the steps, the Pulumi state has `managedPolicyArns` set to contain any policies already attached via `aws.iam.PolicyAttachment`, `aws.iam.RolePolicyAttachment` or `aws.iam.RolePolicy`.
Output of `pulumi about`
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).