phil-bevan opened this issue 5 years ago (status: Open)
This also reproduces with v1.22.1.
The reason is most likely that listing the existing containers in a storage account goes directly against the storage account's data-plane REST API. That call fails if firewall rules are in place that do not include the IP of the host Terraform runs on, and it works if that IP is added. However, finding that IP is a challenge when Terraform is run from Azure DevOps, as we do. This might not be easy to fix. Maybe storage account firewall rules should be their own resource that is added last in a deployment? Or creating a storage container resource could first disable the firewall on the storage account and re-enable it afterwards?
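The "firewall rules as their own resource" idea can be sketched with the standalone network-rules resource that later provider versions added. This is only an illustrative sketch (resource and variable names are made up), and it only helps on the first apply — subsequent refreshes of the container still hit the firewalled data-plane API:

```hcl
# Sketch only: create the account and container first, then apply the
# firewall rules last via the standalone network-rules resource.
resource "azurerm_storage_account" "example" {
  name                     = "examplestorage"
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "example" {
  name                  = "example"
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}

resource "azurerm_storage_account_network_rules" "example" {
  # ordered after the container, so the data-plane call is not yet firewalled
  depends_on = [azurerm_storage_container.example]

  storage_account_id = azurerm_storage_account.example.id
  default_action     = "Deny"
  ip_rules           = var.firewall_allow_ips
}
```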
We just ran into this ourselves. Nice to see someone else has already raised the issue with excellent documentation.
The workaround we are testing is to call out to an ARM template for creating the containers. This is not ideal for several reasons, but it's what we've got, and it could be a workaround for you if you need this.
I'm using two parts: a JSON file with the ARM template, and a Terraform `azurerm_template_deployment`.

`storage-containers.json`:
```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string"
    },
    "location": {
      "type": "string"
    }
  },
  "resources": [
    {
      "name": "[parameters('storageAccountName')]",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2018-07-01",
      "location": "[parameters('location')]",
      "resources": [
        {
          "name": "default/images",
          "type": "blobServices/containers",
          "apiVersion": "2018-07-01",
          "dependsOn": [
            "[parameters('storageAccountName')]"
          ]
        },
        {
          "name": "default/backups",
          "type": "blobServices/containers",
          "apiVersion": "2018-07-01",
          "dependsOn": [
            "[parameters('storageAccountName')]"
          ]
        }
      ]
    }
  ]
}
```
`main.tf`:

```hcl
resource "azurerm_storage_account" "standard-storage" {
  name                      = "stdstorage"
  location                  = "${var.location}"
  resource_group_name       = "${var.resource_group_name}"
  account_tier              = "Standard"
  account_replication_type  = "${var.standard_replication_type}"
  enable_blob_encryption    = "${var.standard_enable_blob_encryption}"
  enable_https_traffic_only = true

  network_rules {
    ip_rules                   = "${var.firewall_allow_ips}"
    virtual_network_subnet_ids = ["${var.vm_subnet_id}"]
  }
}

resource "azurerm_template_deployment" "stdstorage-containers" {
  name                = "stdstorage-containers"
  resource_group_name = "${var.resource_group_name}"
  deployment_mode     = "Incremental"

  depends_on = [
    "azurerm_storage_account.standard-storage",
  ]

  parameters {
    location           = "${var.location}"
    storageAccountName = "${azurerm_storage_account.standard-storage.name}"
  }

  template_body = "${file("${path.module}/storage-containers.json")}"
}
```
Hi @tombuildsstuff, has this issue been resolved? Is it related to PR #416?
@ranokarno not yet; this issue is open and tracking the bug.
I'm hitting a very similar issue except that I'm trying to create a Storage Queue. Otherwise it's very similar from a technical standpoint (using ADO for deployment too).
I hit this bug using terraform 0.12.17 with AzureRM provider 1.37.0 and 1.38.0.
@sschu I am also deploying from Azure DevOps hosted machines. The workaround I created was to allow the agent's public IP in the storage account firewall for the duration of the deployment. It's workable, but still a pain.
I believe I just encountered this with v2.3.0 of the Azure provider. Given that this is still open, I'm assuming it hasn't been fixed?
Hello, it seems related to this azure-cli issue: https://github.com/Azure/azure-cli/issues/10190

Currently, the creation of a storage container resource (blob, share) seems to use the storage container API, which is behind the firewall. Instead, it should use the Resource Manager provider. In the issue mentioned above, I just discovered that az cli has an `az storage share-rm create` in addition to the existing `az storage share create`. I don't know if there is an equivalent for blob, or whether this exists in the Azure REST API or in Terraform :)
Experiencing the same error with Terraform v0.12.25 and azurerm v2.9.0:
```hcl
resource "azurerm_storage_account" "sa_balancesheet_upload" {
  name                      = var.name
  resource_group_name       = var.resource_group_name
  location                  = var.location
  account_kind              = "StorageV2"
  account_tier              = "Standard"
  account_replication_type  = var.account_replication_type
  enable_https_traffic_only = var.enable_https_traffic_only

  network_rules {
    default_action = "Deny"
    ip_rules       = var.ip_rules
  }

  tags = {
    environment = var.environment
  }
}

resource "azurerm_storage_container" "sc_balancesheets" {
  name                  = "balancesheets"
  storage_account_name  = azurerm_storage_account.sa_balancesheet_upload.name
  container_access_type = "private"
}
```
When using Azure DevOps hosted agents to deploy, I ended up writing this piece of PowerShell that invokes Azure CLI to allow that specific agent's public IP address into the storage account that had IP restrictions enabled, like @jeffadavidson. It's a script you can call as part of your deployments that toggles the public IP of that agent either on or off (the `-mode` switch). As mentioned, I use it for Azure DevOps pipeline deployments, but it could be used anywhere else by other deployment tools.
```powershell
<#
.SYNOPSIS
    Set (by mode: on/off) the storage account firewall rules by public IP address.
    Used by Azure DevOps build/release agents.
    See here: https://github.com/terraform-providers/terraform-provider-azurerm/issues/2977
.DESCRIPTION
    Uses Azure CLI.
.EXAMPLE
    .\SetMode_PublicIPAddress_SA.ps1 -storageaccount sa12345random -resourcegroup RG-NDM-TEST -mode on
.NOTES
    Written by Neil McAlister - March 2020
#>
param (
    [Parameter(Mandatory=$true)]
    [string]$storageaccount,
    [Parameter(Mandatory=$true)]
    [string]$resourcegroup,
    [Parameter(Mandatory=$true)]
    [string]$mode
)

# Determine this agent's public IP address
$ip = Invoke-RestMethod http://ipinfo.io/json | Select -exp ip
Write-Host $ip

if ($mode -eq 'on') {
    az storage account network-rule add --resource-group $resourcegroup --account-name $storageaccount --ip-address $ip
}

if ($mode -eq 'off') {
    az storage account network-rule remove --resource-group $resourcegroup --account-name $storageaccount --ip-address $ip
}
```
I have this as a step in my deployments with `-mode on`, which allows access to the SA, and another step at the end with `-mode off`. Note that you should run the `-mode off` step even if your deployment fails or crashes out, otherwise your SA firewall rules are going to get messy with lots of orphaned IP addresses in them.
If you are using YAML-based pipelines, that setting is `condition: always()`; if using GUI-based releases, it is a setting under Advanced options.
I have the same problem. I cannot create a container from my workstation. I enabled `virtual_network_subnet_ids` (for the application backend) and `ip_rules` for my workstation (which runs the TF scripts). I'm getting `Status=403 Code="AuthorizationFailure"`.
Same error in terraform v0.2.12
Hi, is there any plan to fix this?
Reiterating @boillodmanuel's comment about using the Resource Manager API instead of the storage account API, which is behind the firewall. This isn't just for the create request, either: Terraform refreshing the storage container properties also fails if the network rules prevent it. There is a Resource Manager API available that can be used instead, e.g. via the Azure CLI tool:
```
$ az storage container-rm list --storage-account myStorageAccount
Command group 'storage container-rm' is in preview. It may be changed/removed in a future release.
[
  {
    "defaultEncryptionScope": "$account-encryption-key",
    "deleted": false,
    "deletedTime": null,
    "denyEncryptionScopeOverride": false,
    "etag": "\"0x8D8720723A5BBDF\"",
    "hasImmutabilityPolicy": false,
    "hasLegalHold": false,
    "id": "/subscriptions/<subscription id>/resourceGroups/my-resource-group/providers/Microsoft.Storage/storageAccounts/myStorageAccount/blobServices/default/containers/myContainer",
    "immutabilityPolicy": null,
    "lastModifiedTime": "2020-10-16T19:17:37+00:00",
    "leaseDuration": null,
    "leaseState": "Available",
    "leaseStatus": "Unlocked",
    "legalHold": null,
    "metadata": null,
    "name": "myStorageContainer",
    "publicAccess": "None",
    "remainingRetentionDays": 0,
    "resourceGroup": "my-resource-group",
    "type": "Microsoft.Storage/storageAccounts/blobServices/containers",
    "version": null
  }
]
```
This was run on a storage account/container that had network rules preventing me from accessing the storage container through the storage account API.
Experiencing the same issue doing the following:

```shell
export TF_VAR_ip_rules='["<MY_IP>"]'
```

```
Error: Error retrieving Azure Storage Account "myst": storage.AccountsClient#GetProperties: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="StorageAccountNotFound" Message="The storage account myst was not found."
```
```hcl
terraform {
  required_version = "= 0.14.9"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.54"
    }
  }
}

resource "azurerm_storage_account" "st" {
  name                     = "${replace(var.name_prefix, "-", "")}${var.short_storage_name}st"
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "GRS"
  min_tls_version          = "TLS1_2"
  is_hns_enabled           = var.is_hns_enabled

  network_rules {
    bypass                     = ["AzureServices"]
    default_action             = "Deny"
    virtual_network_subnet_ids = var.virtual_network_subnet_ids
    ip_rules                   = var.ip_rules
  }

  tags = var.tags
}
```
Hi, I've had a read through https://github.com/terraform-providers/terraform-provider-azurerm/pull/9314 and noted there was a dependency on an upstream Storage API change before this behaviour can be improved in the azurerm Terraform provider. Is there an update on how far those changes have progressed and when we expect the Terraform provider to be able to make use of them?
Does anyone know if there is an open TF bug for deploying a Premium-tier storage account with network rules? Whenever I try to deploy a Premium-tier SA using a TF module, it gives a 403 error.
This issue caught my eye while looking through SA-related issues for a different problem. I've encountered similar issues to the OP. My troubleshooting revealed that MS-hosted agents in the same Azure region as the storage account reach it over a private Azure IP, so public-IP firewall rules never match. The MS workaround for this is to make sure you are using self-hosted Azure DevOps agents. A better fix imho would be to make the MS-hosted agents ALWAYS go via their public IP, even when the SA is in the same region as the agent. A bonus fix from MS would be to allow Azure firewalls to recognise Azure DevOps agents as another trusted Azure service.
In my case, when I run the pipeline a second time I get the 403 error. It only works if I change the firewall rule to allow all networks.
Since this is only a problem for the container/filesystem resources, I am using an ARM template as a replacement for those. The code is quite simple:
```hcl
resource "random_id" "randomId" {
  byte_length = 6
}

resource "azurerm_template_deployment" "container" {
  # one deployment per configured file system (count must be a number)
  count      = length(var.account.file_systems)
  depends_on = [azurerm_storage_account.account]

  name                = "${azurerm_storage_account.account.name}-container-${random_id.randomId.hex}"
  resource_group_name = var.resource_group_name
  deployment_mode     = "Incremental"
  template_body       = file("${path.module}/container.json")

  parameters = {
    storage_account_name = azurerm_storage_account.account.name
    container_name       = var.account.file_systems[count.index].container_name
  }
}
```
With a `container.json` file in the same folder:
```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storage_account_name": {
      "type": "string"
    },
    "container_name": {
      "type": "string"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts/blobServices/containers",
      "name": "[concat(parameters('storage_account_name'), '/default/', parameters('container_name'))]",
      "apiVersion": "2021-02-01",
      "properties": {}
    }
  ]
}
```
@bergmeister I also converted to an ARM deployment and that resolved the issue for me. But I hated this approach, to be honest.
It's not great, I admit, but for most people it's good enough. The question is more how often container names really get renamed in real life, and features like ACLs aren't working that well in Terraform yet anyway; so apart from having to clean up containers on renames, it's not too bad or too complex to maintain.
I had this same issue where I created a Premium SKU file share with Terraform 1.0.2 on Azure, but when I locked it down to a VNET and public IPs, my build agents got 403 Not Authorized. If I built locally from my workstation it would work, but even with the public IP of my self-hosted Azure DevOps build agents allowed, it still failed.
Then I found this mentioned in the resource documentation: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_account_network_rules#ip_rules

> IP network rules have no effect on requests originating from the same Azure region as the storage account. Use Virtual network rules to allow same-region requests. Services deployed in the same region as the storage account use private Azure IP addresses for communication. Thus, you cannot restrict access to specific Azure services based on their public outbound IP address range.
So that means, for me anyway, that as my build agents were in the same Azure region as the file share, they were getting the internal IP, not the public one. To fix it I added the build VM's vnet to the allowed virtual networks on the file share, and now it works fine.
> Is this still reproducible?

Most likely it is. The fundamental reason is that the Terraform provider uses a mixture of the control plane and the data plane to set things up, and operations on the data plane are affected by network rules set up through the control plane.
> Is this still reproducible?

I just ran into this yesterday.
> Is this still reproducible?

Yes, also just ran into this today.
The whole point of having an API to spin up cloud resources is to be able to do so from anywhere while the resources themselves are restricted. I am bewildered that the Azure API for interacting with storage shares is subject to the network restrictions of the storage account.
```hcl
resource "azurerm_storage_account" "example" {
  resource_group_name       = azurerm_resource_group.example.name
  location                  = azurerm_resource_group.example.location
  name                      = "example"
  account_kind              = "FileStorage"
  account_tier              = "Premium"
  account_replication_type  = "LRS"
  enable_https_traffic_only = false
}

resource "azurerm_storage_account_network_rules" "example" {
  depends_on = [azurerm_storage_share.example]

  storage_account_id         = azurerm_storage_account.example.id
  default_action             = "Deny"
  virtual_network_subnet_ids = [azurerm_subnet.example.id]

  # AZURE LIMITATION:
  # Interactions with storage shares inside a storage account through the
  # Azure API are subject to these restrictions, so all future executions of
  # Terraform break unless one pokes a hole for wherever Terraform runs from.
  // ip_rules = [chomp(data.http.myip.body)]
}

// data "http" "myip" {
//   url = "http://icanhazip.com"
// }

resource "azurerm_storage_share" "example" {
  name                 = "example-storage-share"
  storage_account_name = azurerm_storage_account.example.name
  enabled_protocol     = "NFS"
}
```
Otherwise:

```
╷
│ Error: shares.Client#GetProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailure" Message="This request is not authorized to perform this operation.\nRequestId:XXXXXXX-YYYY-YYYY-YYYY-ZZZZZZZZZZZZ\nTime:2021-11-10T18:07:08.8135873Z"
│
│   with azurerm_storage_share.example,
│   on storage.tf line 47, in resource "azurerm_storage_share" "example":
│   47: resource "azurerm_storage_share" "example" {
╵
```
Resolution of PR #14220 will fix this.
@andyr8939's comment is the correct fix until #14220 is implemented.
This solution ("add the vnet to the network rules") worked for us
While adding the vnet to the network rules is a solution, it routes your traffic over the public internet, which is not as good as having completely private storage.

An alternative solution that I am investigating is private endpoints.

When you create a private endpoint for a storage container, a private DNS zone for the storage is created and associated with the vnet where you put the endpoint. This allows the resources in that vnet to resolve the Azure storage hostname to a private IP, so connections from that vnet traverse the Microsoft backbone properly instead of going over the public internet.

This bypasses any network rules you put on the storage, because network rules only apply to the public endpoint. So you would create private endpoints for a vnet that needs access, and then either peer other vnets to that one, or create new endpoints for the other vnets to avoid sharing resources between them.
In Terraform, this does require that you initially create an `azurerm_storage_account` resource without a `network_rules` block, then create an `azurerm_private_dns_zone` resource, an `azurerm_private_dns_zone_virtual_network_link` resource, and an `azurerm_private_endpoint` resource, and then apply the network rules using the `azurerm_storage_account_network_rules` resource.
I went through our terraform storage code and refactored it to leverage private endpoints, and I removed the vnet from the network rules in the process of doing that in order to confirm it is really using the private endpoint.
It works beautifully but there are some caveats.
- You must set `enforce_private_link_endpoint_network_policies` in the `azurerm_subnet` resource. Despite the name of the argument, it must be set to `true` in order to allow private endpoints to be created. There is also a similarly named `enforce_private_link_service_network_policies`, which you do not need to change for this; ensure you set the one with "endpoint" in the argument name if you are trying to create private endpoints for storage, event hubs, etc.
- You must create the `azurerm_private_endpoint` resource with the resource ID of your `azurerm_storage_account`. This means you CANNOT define your `network_rules` block inside the `azurerm_storage_account` resource, but instead must create the storage account without network rules, then create a Private DNS Zone, followed by 1-2 private endpoints, followed by applying network rules via the `azurerm_storage_account_network_rules` resource, and finally creating your `azurerm_storage_container`.
I found some sample code here https://www.debugaftercoffee.com/blog/using-terraform-with-private-link-enabled-services and adapted it to my needs. Additional samples are found here: https://github.com/hashicorp/terraform-provider-azurerm/tree/main/examples/private-endpoint; I found the example in the `private-dns-group` subdirectory most helpful in getting the DNS zone group configured properly for my storage resources.
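The ordering described above might look roughly like this in HCL. This is a hedged sketch, not the commenter's actual code: zone, subnet, and variable names are illustrative, and only the blob sub-resource is shown.

```hcl
# Sketch: storage account without network_rules, then DNS zone, link,
# private endpoint, and only then the deny rule and the container.
resource "azurerm_storage_account" "example" {
  name                     = "examplestorage"
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  # deliberately no network_rules block here
}

resource "azurerm_private_dns_zone" "blob" {
  name                = "privatelink.blob.core.windows.net"
  resource_group_name = var.resource_group_name
}

resource "azurerm_private_dns_zone_virtual_network_link" "blob" {
  name                  = "blob-link"
  resource_group_name   = var.resource_group_name
  private_dns_zone_name = azurerm_private_dns_zone.blob.name
  virtual_network_id    = var.virtual_network_id
}

resource "azurerm_private_endpoint" "blob" {
  name                = "example-blob-pe"
  location            = var.location
  resource_group_name = var.resource_group_name
  subnet_id           = var.subnet_id

  private_service_connection {
    name                           = "example-blob-psc"
    private_connection_resource_id = azurerm_storage_account.example.id
    is_manual_connection           = false
    subresource_names              = ["blob"]
  }

  private_dns_zone_group {
    name                 = "default"
    private_dns_zone_ids = [azurerm_private_dns_zone.blob.id]
  }
}

resource "azurerm_storage_account_network_rules" "example" {
  depends_on         = [azurerm_private_endpoint.blob]
  storage_account_id = azurerm_storage_account.example.id
  default_action     = "Deny"
}

resource "azurerm_storage_container" "example" {
  depends_on            = [azurerm_storage_account_network_rules.example]
  name                  = "example"
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}
```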
I hope this helps. Let me know if anyone has questions.
@tspearconquest thank you for the explanation. I wonder if you defined `azurerm_private_endpoint`, `azurerm_storage_account` (without network rules) and `azurerm_storage_account_network_rules` in one Terraform run? And does Terraform build its dependencies correctly in this case, or do you have to add `depends_on` statements manually?
Hi @TheKangaroo, yes, they can all be defined in a single .tf file and created in a single run, but the network rules must be defined as a separate resource from the storage account, meaning you can't include the network rules block in the storage account resource.
It should build the dependencies correctly based on the resource IDs being included, however I chose in my code to explicitly define them in order to make certain that diagnostic settings for my AKS cluster are not created until the storage container is created.
My diagnostic settings don't explicitly depend upon the storage, but rather we use a tool running in the cluster to extract the audit logs from an event hub and that tool itself is what requires the storage. So the implicit dependency is not known to Terraform and for that reason is why I chose to define the dependencies explicitly.
The management-plane APIs are documented here: https://docs.microsoft.com/en-us/python/api/azure-mgmt-storage/azure.mgmt.storage.storagemanagementclient?view=azure-python
Providing another workaround based on the azapi provider:
```hcl
resource "azapi_resource" "test" {
  name      = "acctestmgd"
  parent_id = "${azurerm_storage_account.test.id}/blobServices/default"
  type      = "Microsoft.Storage/storageAccounts/blobServices/containers@2021-04-01"
  body      = "{}"
}
```
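The same azapi approach should also work for file shares, since they sit under `fileServices/default` in the ARM resource tree. This is a sketch based on that assumption, not code from the thread; the share name is illustrative:

```hcl
# Sketch: create a file share through the Resource Manager API via azapi,
# bypassing the firewalled data plane. "exampleshare" is a made-up name.
resource "azapi_resource" "share" {
  name      = "exampleshare"
  parent_id = "${azurerm_storage_account.test.id}/fileServices/default"
  type      = "Microsoft.Storage/storageAccounts/fileServices/shares@2021-04-01"
  body      = "{}"
}
```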
> It should build the dependencies correctly based on the resource IDs being included, however I chose in my code to explicitly define them in order to make certain that diagnostic settings for my AKS cluster are not created until the storage container is created.
>
> My diagnostic settings don't explicitly depend upon the storage, but rather we use a tool running in the cluster to extract the audit logs from an event hub and that tool itself is what requires the storage. So the implicit dependency is not known to Terraform and for that reason is why I chose to define the dependencies explicitly.
Hello @tspearconquest, I tried the approach you recommended, but I am running into issues: the private endpoint is not being utilized. Can you please validate and let me know if I am missing anything? The issue is that on re-running the terraform apply for the data lake, it is unable to access the containers (getting the 403 error), i.e. it does not seem to be using the private endpoint created through the process above.
Hi @VR99
Yes, that sounds like the correct steps. I'm afraid I don't have experience with Data Lake, as we only use Event Hubs and Log Analytics. Can you share some of the non-working code? Maybe I can spot something in there.
I did have an issue with connecting to the storage from my corporate VPN when I was testing this all out. It turned out our VPN was missing a peering, so we got that added and I was able to connect by IP. Unfortunately, the network team was hesitant to set up any private DNS endpoints on the VPN subnet, so even with the peering in place my laptop was still resolving the public DNS. I've resorted to editing my hosts file to let me connect to the storage over the VPN instead of the internet.
My point in mentioning this is that it could simply be a DNS issue where the private IPs are not being resolved, so I would start looking at that angle and the easiest way to try to rule out connectivity is probably by connecting directly to the private IP of the data lake from your build agent.
If you can't connect to the private IP, then it's network/firewall related, and if you can, then it's DNS related. Hope that helps :)
> Hello, it seems related to this azure-cli issue: Azure/azure-cli#10190
>
> Currently, the creation of a storage container resource (blob, share) seems to use the storage container API which is behind the firewall. Instead, it should use the Resource Manager provider. In the issue mentioned above, I just discovered that az cli has an `az storage share-rm create` in addition to the existing `az storage share create`. I don't know if there is an equivalent for blob, and if this exists in the azure rest API or in terraform :)
@boillodmanuel's comment above nails it. I am facing problem #8922 (which is sadly and imho incorrectly closed; can someone please reopen it? It is a severe problem), just like @ahmddp, who comes to the same conclusion as @boillodmanuel above.

Enabling public access to the storage account enables file share creation via TF, which confirms the above analysis; however, that's for sure not an option for a production share. Looks like I need to comment out the share resource for now to enable further apply runs (or rather refreshes).
In comments such as https://github.com/hashicorp/terraform-provider-azurerm/issues/17341#issuecomment-1162907877, Tom references a ticket in the backlog on the Azure side to fix this.
Is there any public visibility of this ticket? It would let us put pressure on Azure via our account rep to fix this.
Thank you!
> It would let us put pressure on Azure via our account rep to fix this.
Point the account rep at this thread. Then send them this link which shows ~133 issues currently blocked by MS upstream.
Reporting in... this is still an issue. Blob containers created with Terraform are not accessible through the portal.
Code I am using to make a storage account, followed by a container inside of it:
```hcl
resource "azurerm_storage_account" "tfstate" {
  name                     = var.storageaccountname
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "deploy" {
  name                  = "deploy"
  storage_account_name  = azurerm_storage_account.tfstate.name
  container_access_type = "blob"
}
```
Hi @ryanberger-az, can you try to add a private endpoint? Details are in my previous comment from Jan 12: https://github.com/hashicorp/terraform-provider-azurerm/issues/2977#issuecomment-1011183736
@tspearconquest Thank you for the suggestion, but that workaround will not meet the requirements for which I am developing this solution. This will be deployed into our customers' cloud tenants, and their networking teams are NOT going to be okay with private endpoints being set up just for the sake of doing this. What is weird is that I tried another route: I used an ARM template deployed from within my Terraform file, and I still get the same results on the containers it makes. I'm sort of at a loss, because I thought this might get around it; I guess since it's using the azurerm provider, it still faces the same issues.
```hcl
resource "azurerm_resource_group_template_deployment" "deploy" {
  deployment_mode     = "Incremental"
  resource_group_name = azurerm_resource_group.rg.name
  name                = "deploy-with-arm"

  depends_on = [
    azurerm_storage_account.tfstate
  ]

  parameters_content = jsonencode({
    "location" = {
      value = "${var.resource_group_location}"
    }
    "storageAccountName" = {
      value = "${azurerm_storage_account.tfstate.name}"
    }
  })

  template_content = <<TEMPLATE
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string"
    },
    "location": {
      "type": "string"
    }
  },
  "resources": [
    {
      "name": "[parameters('storageAccountName')]",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2018-07-01",
      "location": "[parameters('location')]",
      "resources": [
        {
          "name": "default/deploy",
          "type": "blobServices/containers",
          "apiVersion": "2018-07-01",
          "dependsOn": [
            "[parameters('storageAccountName')]"
          ]
        }
      ]
    }
  ]
}
TEMPLATE
}
```
> @tspearconquest Thank you for the suggestion, but that workaround will not meet the requirements for which I am developing this solution. [...] I guess since it's using the azurerm provider, it's still facing the same issues though.
Understood. Yes, that seems to be the case. I noticed your comment a while back about the provider talking to an API that gets firewalled off when network rules are created.

The Terraform provider seems to be behind on this, so while I have heard people say they normally don't recommend it, another option you could try is a local-exec provisioner on a null_resource that calls the Azure CLI to create the resources after the firewall is enabled. Since the CLI has a way to create containers using the Resource Manager API, it should work, though I have not tried it myself.
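A minimal sketch of that null_resource idea, assuming `az login` has already happened on the machine running Terraform (the account reference and container name follow the earlier `tfstate`/`deploy` example and are otherwise illustrative; requires the null provider):

```hcl
# Sketch: create the container via the Resource Manager API (az ... container-rm)
# instead of the firewalled data plane. Untested, as noted above.
resource "null_resource" "container" {
  depends_on = [azurerm_storage_account.tfstate]

  provisioner "local-exec" {
    # the 'storage container-rm' command group is in preview in older az versions
    command = "az storage container-rm create --storage-account ${azurerm_storage_account.tfstate.name} --name deploy"
  }
}
```

Note that local-exec only runs on create, so drift in the container would not be detected by later plans; this trades state tracking for the ability to work behind the firewall.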
@tspearconquest Haha, great minds! I edited out my comment where I talked about going down the route of using the local-exec provisioner to run Azure CLI commands to build this out. The issue is that it looks like you need to run `az login` before you can actually run the commands.
I may just write an Azure CLI script for our customer to run that creates the storage account/container we then use for remote state in the bigger Terraform deployment we'll be rolling into their environments. The whole point of this was using Terraform to build a storage account/container to be used as the remote state backend of a subsequent Terraform deployment. I wanted to build the entire deployment with Terraform, but in this state, that is not possible.
I appreciate all of your effort on this subject. I hope the provider issue can be resolved soon and that Microsoft wakes up to this being a real problem.
Happy to help. It certainly is! `az login` with a service principal or a user-assigned managed identity would be the way I'd go, whether using a script or Terraform. ;)
One aspect that seems to be unaddressed is the preview `nfsv3_enabled` option. It requires the network `default_action` to be `Deny` (i.e. using `network_rules`), and for that to work the `network_rules` must be in the `azurerm_storage_account` block. With those two requirements, you are painted into the same corner, with a little more pain if you are trying to use a private endpoint.
**Community Note**

**Terraform (and AzureRM Provider) Version**

Terraform v0.11.11

**Affected Resource(s)**

- `azurerm_storage_account`
- `azurerm_storage_container`

**Terraform Configuration Files**

**Debug Output**

`terraform apply`