tspearconquest opened this issue 2 years ago
Hi @tspearconquest, the Disk Access feature requires updating the Managed Disk `network_access_policy` to `AllowPrivate` as well; when destroying an `azurerm_disk_access` resource with `managed_disk_ids`, Terraform is not able to decide whether to update `network_access_policy` back to `AllowAll` or `DenyAll`.
I can think of a workaround; maybe you can check whether this helps in your case. (It requires the latest az package with the `--disk-access` fix.)
```hcl
provider "null" {}

resource "azurerm_linux_virtual_machine" "example" {
  # ...
}

resource "azurerm_disk_access" "example" {
  name                = "yicma-disk-access-0"
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_resource_group.test.location
}

# Use a null_resource to call the az CLI to set the disk access on the OS disk;
# only triggered when the VM id changes.
resource "null_resource" "update_disk_access" {
  triggers = {
    id = azurerm_linux_virtual_machine.example.id
  }

  provisioner "local-exec" {
    command = "az disk update --resource-group ${azurerm_linux_virtual_machine.example.resource_group_name} --name ${azurerm_linux_virtual_machine.example.os_disk[0].name} --network-access-policy AllowPrivate --disk-access ${azurerm_disk_access.example.id}"
  }
}
```
Thank you for the workaround. I actually don't have a VM I can test this against at the moment, but it seems like this would work fine.
I'm not sure I understand why it can't be done by Terraform, though. When a disk access is removed, the network policy should no longer apply, so it should revert to the default `AllowAll`, right? If you set it to `DenyAll` and have no Disk Access, is it still possible to mount the disk in the VM?
What about creating a separate network access policy resource for disks, similar to what `azurerm_storage_account_network_rules` offers for storage accounts? Something like `azurerm_managed_disk_network_rules`, which could take an argument to decide how to set the network rule on destroy. Just spitballing ideas.
Hi @tspearconquest, as far as I know, disk access controls only import/export and does not affect attaching to a VM; please correct me if I'm wrong.
However, after looking at `azurerm_storage_account_network_rules`, I think it would be helpful to have a similar resource for managed disks, since there is no way to configure the Disk Access for the OS disk of `azurerm_linux_virtual_machine` or `azurerm_windows_virtual_machine`. Below is the potential configuration I've written based on this idea.
@tombuildsstuff Do you think this can fit into our current design of the virtual machine/disk resources?
```hcl
resource "azurerm_linux_virtual_machine" "example" {
  # ...
}

resource "azurerm_disk_access" "example" {
  # ...
}

// New resource type to configure the disk access for the OS disk of azurerm_linux_virtual_machine
resource "azurerm_managed_disk_network_access_policy" "example" {
  managed_disk_id       = azurerm_linux_virtual_machine.example.os_disk[0].id // Requires adding the OS disk id as a VM attribute
  network_access_policy = "AllowPrivate"
  disk_access_id        = azurerm_disk_access.example.id
}
```
Thanks, I think you are right about disk access affecting import and export only. However, it's not clear from the Azure documentation I've seen whether the network access policy is the same, or whether it also impacts a VM trying to access the disk.
I will try to test this and get back to you.
@myc2h6o this doesn't make sense as a separate resource - it'd need to either be supported by the `azurerm_virtual_machine_data_disk_attachment` resource or as a part of #6117
Presumably this can only be configured for Data Disks and not the OS Disk, since sharing an OS Disk likely would cause other issues?
@tombuildsstuff Data disks should be fine in this case, as `network_access_policy` can be configured on the separate disk resource. However, for the OS disk, since we don't support creating a VM from an existing disk id (only `FromImage` is supported), the OS disk of the VM gets the default network access policy of `AllowAll`, which allows sharing the OS disk.
From the comments below, it seems creating from an existing OS disk is no longer recommended. However, to let a user restrict the network access policy for the OS disk at creation time, do you think we can add the above resource? Or do you think it's worth a new feature request on the API side to make this configurable in `compute.OSDisk`?
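For the data-disk case mentioned above, this should already be expressible today, since `azurerm_managed_disk` exposes `network_access_policy` and `disk_access_id` directly (a sketch; resource names and sizes are placeholders):

```hcl
# Data disks don't need a new resource type: the managed disk resource
# already accepts the network access policy and a disk access id.
resource "azurerm_managed_disk" "data" {
  name                 = "example-data-disk"
  location             = azurerm_resource_group.example.location
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 32

  # Restrict import/export to the private endpoints of the disk access.
  network_access_policy = "AllowPrivate"
  disk_access_id        = azurerm_disk_access.example.id
}
```

The gap in the provider is only the OS disk, which is created implicitly by the VM resource.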
Found another issue #8195 for using an existing OS disk with `managed_disk_id`. That should make it possible to specify the disk access on an existing disk and then use it to create a VM.
Hi,
I'm facing exactly the same issue as everyone. From my perspective, adding a specific resource like `azurerm_managed_disk_network_access_policy`, or adding options to the current `azurerm_windows_virtual_machine` `os_disk` settings, seems the easiest way to write the configuration.
Did we find a simple solution to set the `network_access_policy` to either `AllowAll`, `AllowPrivate`, or `DenyAll` for the OS disk of a VM?
Would it be possible to get an update on when this can be added, please? I just got pinged on an audit for having the OS disk default to `AllowAll`. Thanks.
@segraef @davepattie another commenter posted in https://github.com/Azure/azure-rest-api-specs/issues/21325#issuecomment-1818836432 with a workaround using the AzApi provider instead of the AzureRM provider. Give the code in that comment a try.
Thanks @tspearconquest, we use AzApi (since there is no other way), but it would be good to understand whether this is being worked on by the provider team.
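For anyone who lands here, the AzApi approach is roughly along these lines (a sketch, not the exact code from the linked comment; the API version and the `azurerm_managed_disk` data-source lookup are assumptions, and the OS disk is assumed to land in the VM's resource group, which is the default):

```hcl
# Look up the implicitly created OS disk so it can be patched.
data "azurerm_managed_disk" "os_disk" {
  name                = azurerm_linux_virtual_machine.example.os_disk[0].name
  resource_group_name = azurerm_linux_virtual_machine.example.resource_group_name
}

# Patch properties the azurerm provider does not expose for the OS disk.
resource "azapi_update_resource" "os_disk_network_access" {
  type        = "Microsoft.Compute/disks@2022-07-02"
  resource_id = data.azurerm_managed_disk.os_disk.id

  body = jsonencode({
    properties = {
      networkAccessPolicy = "AllowPrivate"
      diskAccessId        = azurerm_disk_access.example.id
      publicNetworkAccess = "Disabled"
    }
  })
}
```

Note that, like the `null_resource` workaround earlier in the thread, this only papers over the destroy-time ambiguity: removing the `azapi_update_resource` does not revert the policy to `AllowAll` or `DenyAll` for you.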
Description
We would like to request support in `azurerm_disk_access` for specifying an existing `azurerm_managed_disk` resource ID.

This would allow attaching a newly created disk access resource to an existing Managed Disk which may have been created by an `azurerm_linux_virtual_machine` or `azurerm_windows_virtual_machine` resource.

The `azurerm_linux_virtual_machine` resource would need to export the disk IDs in Terraform for this to work, but then we could simply create a disk access resource and pass it the disk IDs exported from the VM resource.

It might also be nice to have support for creating a disk access in Terraform inside either of the above-mentioned virtual machine blocks, so that one can be created during the VM's creation and deleted when the VM is deleted.
New or Affected Resource(s)
Potential Terraform Configuration
The below example `azurerm_linux_virtual_machine` resource would create a disk access as part of the VM creation, and connect it to all disks created for and attached to that VM.
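A hedged sketch of what such a configuration might look like (the `disk_access` block inside `os_disk` is hypothetical and does not exist in the provider today; all other values are placeholders):

```hcl
resource "azurerm_linux_virtual_machine" "example" {
  name                = "example-vm"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  size                = "Standard_B2s"
  admin_username      = "adminuser"

  admin_ssh_key {
    username   = "adminuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"

    # Hypothetical block: create a disk access alongside the VM and apply
    # it (with the given policy) to the OS disk and any VM-created disks.
    disk_access {
      name                  = "example-disk-access"
      network_access_policy = "AllowPrivate"
    }
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}
```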