Also just had the same issue with creating Diagnostic Settings for Azure AD, which seems to be the problem addressed in https://github.com/Azure/azure-rest-api-specs/issues/11085.
Hi @johhess40, thanks for reporting this issue. Could you provide a minimal, but complete, configuration that will help us to reproduce the error you are seeing? I believe I understand what is being done in the sample you already provided but it would be helpful to know which role you are trying to assign, the target resource in Terraform that it's being assigned for (presumably a management group?) and the roles assigned to the calling service principal - we can use that as a starting point to try and pinpoint the source of the error.
There were some notable changes to role assignments in v2.62.0 (#11848), I believe to support cross-tenant delegation, which could be related, but it will be much easier to track down with a reproducible config. Thanks!
@manicminer no problem at all. The role assignment that the provider is failing to read is the one granted to the managed identity of an Azure Policy assignment with the DeployIfNotExists effect. The policy ensures that if a particular resource is deployed, its diagnostic settings are automatically sent to a Log Analytics resource. The policy is assigned at the management group level, and the assignment creates a managed identity which is then used to perform the template deployment. The service principal used for this has Management Group Contributor, Resource Policy Contributor, and Security Admin at the tenant root level, and it has all necessary permissions on the underlying management groups as well. These permissions were sufficient in v2.61.0, so my suspicion is that a change introduced in v2.62.0 or later created the bug.
locals {
  # Load every custom policy definition JSON under ./customPolicies
  policyFiles = fileset(path.module, "./customPolicies/*.json")
  policyData  = [for f in local.policyFiles : jsondecode(file("${path.module}/${f}"))]
}

resource "azurerm_policy_definition" "diagnosticcustompolicy" {
  # Key each definition by its display name so it can be referenced predictably
  for_each = { for f in local.policyData : f.properties["displayName"] => f }

  name                  = each.value["name"]
  policy_type           = each.value.properties["policyType"]
  mode                  = each.value.properties["mode"]
  display_name          = each.value.properties["displayName"]
  management_group_name = var.policyDefManagementGroupName

  metadata = jsonencode({
    category = each.value.properties["metadata"].category
  })

  policy_rule = jsonencode(each.value.properties["policyRule"])
  parameters  = jsonencode(each.value.properties["parameters"])
}

resource "azurerm_policy_set_definition" "diagnosticcustompolicyset" {
  for_each = var.custompolicySetDefs

  name                  = each.value["policySetDefName"]
  policy_type           = each.value["policyType"]
  display_name          = each.value["policySetDisplayName"]
  description           = each.value["policySetDescription"]
  management_group_name = each.value["managementGroupName"]

  metadata = jsonencode({
    category = each.value["policySetCategory"],
    version  = each.value["policySetVersion"],
    source   = each.value["policySetSource"]
  })

  dynamic "policy_definition_reference" {
    for_each = each.value["policyDefRefs"]
    content {
      policy_definition_id = policy_definition_reference.value["policyId"]
      reference_id         = policy_definition_reference.value["displayName"]
      parameter_values     = <<-VALUE
        {
          "diagnosticsSettingNameToUse": {
            "value": "${policy_definition_reference.value["diagnosticsSettingNameToUse"]}"
          },
          "logAnalytics": {
            "value": "${policy_definition_reference.value["logAnalytics"]}"
          }
        }
      VALUE
    }
  }

  depends_on = [
    azurerm_policy_definition.diagnosticcustompolicy
  ]
}

resource "azurerm_policy_assignment" "diagnosticcustomassignment" {
  for_each = var.custompolicyAssignments

  name                 = each.value["assignmentName"]
  scope                = each.value["assignmentScope"]
  policy_definition_id = each.value["policySetDefId"]
  description          = each.value["assignmentDescription"]
  display_name         = each.value["assignmentDisplayName"]
  location             = each.value["assignmentLocation"]

  # DeployIfNotExists policies need a managed identity to run the remediation deployments
  identity {
    type = "SystemAssigned"
  }

  enforcement_mode = true

  metadata = jsonencode({
    category = each.value["policyAssignmentCategory"],
    version  = each.value["policyAssignmentVersion"],
    source   = each.value["policyAssignmentSource"]
  })

  depends_on = [
    azurerm_policy_set_definition.diagnosticcustompolicyset
  ]
}

resource "azurerm_role_assignment" "policyrole" {
  for_each = var.policyRoleAssignments

  scope                = each.value["roleScope"]
  role_definition_name = each.value["roleDefinitionName"]
  # Insert whatever you name the object for the assignment in the square brackets below
  principal_id = azurerm_policy_assignment.diagnosticcustomassignment[each.value["assignmentReference"]].identity[0].principal_id

  depends_on = [
    azurerm_policy_assignment.diagnosticcustomassignment
  ]
}
This module creates policy definitions, adds them to a policy set definition, and then assigns that set at the management group level. The role assignment is needed because otherwise Azure complains that the managed identity doesn't have the right permissions (a known issue when assigning DeployIfNotExists policies through code rather than the portal, which grants the role automatically).
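For reference, here is a minimal sketch of the shapes the input variables take. The attribute names come from the config above; the types and the example comments are my own illustration, not the exact definitions:

variable "policyDefManagementGroupName" {
  type = string
}

variable "custompolicySetDefs" {
  type = map(object({
    policySetDefName     = string
    policyType           = string
    policySetDisplayName = string
    policySetDescription = string
    managementGroupName  = string
    policySetCategory    = string
    policySetVersion     = string
    policySetSource      = string
    # each entry carries policyId, displayName, diagnosticsSettingNameToUse, logAnalytics
    policyDefRefs        = list(map(string))
  }))
}

variable "custompolicyAssignments" {
  type = map(object({
    assignmentName           = string
    assignmentScope          = string # e.g. a management group resource ID
    policySetDefId           = string
    assignmentDescription    = string
    assignmentDisplayName    = string
    assignmentLocation       = string
    policyAssignmentCategory = string
    policyAssignmentVersion  = string
    policyAssignmentSource   = string
  }))
}

variable "policyRoleAssignments" {
  type = map(object({
    roleScope           = string # management group resource ID
    roleDefinitionName  = string # e.g. "Log Analytics Contributor"
    assignmentReference = string # key into var.custompolicyAssignments
  }))
}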
I've noticed that in the provider versions after v2.61.0, an extra copy of the management group prefix is appended to the scope of the role assignment ID:

/providers/Microsoft.Management/managementGroups/providers/Microsoft.Management/managementgroups/dd3becbb-0539-4e49-b2a6-eb8e06100771/providers/Microsoft.Authorization/roleAssignments
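For comparison, a correctly scoped role assignment ID at management group scope contains the management group prefix exactly once (the GUID is the one from above; the role assignment name placeholder is illustrative):

/providers/Microsoft.Management/managementGroups/dd3becbb-0539-4e49-b2a6-eb8e06100771/providers/Microsoft.Authorization/roleAssignments/<roleAssignmentGuid>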
@manicminer any update on this? I still can't use current versions of the provider because of this issue. I'd be happy to help where I can; I've been diving deep into learning Golang this year and would love to pitch in!
Thanks for opening this issue. This was a problem in the 2.x version of the provider which is no longer actively maintained. If this is still an issue with the 3.x version of the provider please do let us know by opening a new issue, thanks!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform Version
Terraform Configuration Files
Debug Output
Crash Output
Expected Behavior
The plan should have executed without error; this works fine with the 2.61.0 azurerm provider.
Actual Behavior
Status=403 Code="AuthorizationFailed" Message="The client '' with object id '' does not have authorization to perform action 'Microsoft.Management/managementGroups/Microsoft.Management/**/Microsoft.Authorization/6e890381-2066-2ae7-a053-7ac4cd0c8722/read' over scope '/providers/Microsoft.Management/managementGroups/providers/Microsoft.Management/managementgroups//providers/Microsoft.Authorization/roleAssignments' or the scope is invalid. If access was recently granted, please refresh your credentials."
Steps to Reproduce
Please list the full steps required to reproduce the issue, for example:
1. terraform init
2. terraform apply
Additional Context
The azurerm provider 2.61.0 has no issues with my service principal, but since upgrading to 2.64.0 I get the above error. I also get this error with 2.62.0 and 2.63.0. The error comes from a role assignment that is created for a DeployIfNotExists Azure Policy. Update: I can confirm that even with the service principal assigned the 'Owner' role (not a best practice, but done merely for testing) the planned deployment still fails. This is definitely not a permissions issue for the SP.
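For anyone else hitting this, the only workaround I have for now is pinning the provider back to the last working release (standard required_providers syntax, nothing specific to this bug):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.61.0" # last release without the duplicated management group prefix in the role assignment scope
    }
  }
}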