hashicorp / terraform-provider-azurerm

Terraform provider for Azure Resource Manager
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
Mozilla Public License 2.0

Extend SQL Server virtual machine storage pool with additional data disks #13356

Open LaurentLesle opened 3 years ago

LaurentLesle commented 3 years ago


Description / Feedback

azurerm_mssql_virtual_machine

The current Terraform implementation matches the behaviour of the Azure Portal: to create a storage pool on a SQL VM, you first create the data disks, set their LUNs, and then reference those LUNs with the NEW disk type:

resource "azurerm_virtual_machine_data_disk_attachment" "data-disk" {
  managed_disk_id    = azurerm_managed_disk.data_disk.id
  virtual_machine_id = azurerm_windows_virtual_machine.vm_sql.id
  lun                = "2"
  caching            = "ReadWrite"
}

resource "azurerm_mssql_virtual_machine" "sql_setup" {
 *all other parameters, shorted for readability*
   storage_configuration {
    disk_type             = "NEW"
    storage_workload_type = "GENERAL"
    data_settings {
      default_file_path = "F:\\data"
      luns              = [2]
    }
    log_settings {
      default_file_path = "F:\\log"
      luns              = [2]
    } 
}
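
For completeness, here is a hedged sketch of the managed disk referenced above; the resource group reference, disk name, and size are illustrative placeholders, not part of the original report:

resource "azurerm_managed_disk" "data_disk" {
  name                 = "sqlvm-data-disk"                       # illustrative name
  location             = azurerm_resource_group.example.location # hypothetical resource group
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = "Premium_LRS"
  create_option        = "Empty"
  disk_size_gb         = 256                                     # illustrative size
}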

Using the portal to extend the storage pool, the operations to perform are: attach additional data disks with new LUNs, then extend the existing storage pool with those LUNs.

Translated into Terraform, that would be:


resource "azurerm_virtual_machine_data_disk_attachment" "data-disk-3" {
  managed_disk_id    = azurerm_managed_disk.data_disk.id
  virtual_machine_id = azurerm_windows_virtual_machine.vm_sql.id
  lun                = "3"
  caching            = "ReadWrite"
}
resource "azurerm_virtual_machine_data_disk_attachment" "data-disk-4" {
  managed_disk_id    = azurerm_managed_disk.data_disk.id
  virtual_machine_id = azurerm_windows_virtual_machine.vm_sql.id
  lun                = "4"
  caching            = "ReadWrite"
}

resource "azurerm_mssql_virtual_machine" "sql_setup" {
 *all other parameters, shorted for readability*
   storage_configuration {
    disk_type             = "EXTEND"
    storage_workload_type = "GENERAL"
    data_settings {
      default_file_path = "F:\\data"
      luns              = [3,4]
    }
    log_settings {
      default_file_path = "F:\\log"
      luns              = [3,4]
    } 
}

Applying this configuration fails with one of the following errors:

sqlvirtualmachine.SQLVirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="InvalidArgument" Message="Invalid argument 'DefaultFilePath'."

or

Code="InvalidExtendPayload" Message="Invalid Sql Storage Settings Extend Payload. Only support extend one drive at a time."
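
The second error implies the extend operation only accepts one additional drive per call, so a minimal sketch of an extend would look like the block below (LUN 3 only). Even then, the first error suggests the DefaultFilePath argument sent by the provider is rejected for EXTEND, so this is illustrative rather than a known-working configuration:

resource "azurerm_mssql_virtual_machine" "sql_setup" {
  # all other parameters omitted for readability

  storage_configuration {
    disk_type             = "EXTEND"
    storage_workload_type = "GENERAL"

    data_settings {
      # a file path still has to be supplied to satisfy the provider schema,
      # although the extend API appears to reject the DefaultFilePath argument
      default_file_path = "F:\\data"
      # a single LUN, since the API only supports extending one drive at a time
      luns              = [3]
    }
  }
}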

References

There is a bug reported by BassievanAdriaan on Stack Overflow regarding a SQL Server virtual machine whose storage cannot be extended this way:

https://stackoverflow.com/questions/67580303/extend-disk-sql-server-virtual-machine-by-using-terraform

After reviewing it carefully, I can confirm I am experiencing the same issue.

Looking at the test suite, only adding a NEW storage configuration is covered; there is nothing covering the extension of an existing one.

https://github.com/hashicorp/terraform-provider-azurerm/blob/b0c897055329438be6a3a159f6ffac4e1ce958f2/internal/services/mssql/mssql_virtual_machine_resource_test.go#L620

eissko commented 2 years ago

I've run into a similar or the same issue. I inherited the Terraform project, so I don't know its genesis. Anyway, the current Terraform state looks as follows.

The "storage_configuration" attribute formerly looked like this:

"storage_configuration": [
              {
                "data_settings": [
                  {
                    "default_file_path": "F:\\Data",
                    "luns": [
                      1
                    ]
                  }
                ],
                "disk_type": "NEW",
                "log_settings": [
                  {
                    "default_file_path": "G:\\Logs",
                    "luns": [
                      2
                    ]
                  }
                ],
                "storage_workload_type": "OLTP",
                "temp_db_settings": [
                  {
                    "default_file_path": "D:\\Temp",
                    "luns": []
                  }
                ]
              }
            ],

Then somebody manually extended the data storage via the Azure Portal, and the next plan/apply overrode the state with a PUT like so:

            "storage_configuration": [
              {
                "data_settings": [
                  {
                    "default_file_path": "",
                    "luns": [
                      4
                    ]
                  }
                ],
                "disk_type": "EXTEND",
                "log_settings": [],
                "storage_workload_type": "",
                "temp_db_settings": []
              }
            ],

Now I am getting the same error @LaurentLesle described: Original Error: Code="InvalidArgument" Message="Invalid argument 'DefaultFilePath'."
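
One way to stop subsequent plans from sending that overwrite payload at all, assuming the storage pool keeps being managed manually in the Azure Portal, is to ignore drift on the whole block. This is a sketch of a workaround rather than a fix, using the values from the original state above:

resource "azurerm_mssql_virtual_machine" "sql_setup" {
  # all other parameters omitted for readability

  storage_configuration {
    disk_type             = "NEW"
    storage_workload_type = "OLTP"

    data_settings {
      default_file_path = "F:\\Data"
      luns              = [1]
    }

    log_settings {
      default_file_path = "G:\\Logs"
      luns              = [2]
    }
  }

  lifecycle {
    # keep the original NEW configuration in code and ignore the drift that a
    # manual extend in the portal introduces, so plan/apply no longer tries to
    # overwrite the storage configuration
    ignore_changes = [storage_configuration]
  }
}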

jwshive commented 1 year ago

Has there been any movement on this in the last year? I'm attempting to use temp_db_settings and I am running into the invalid 'DefaultFilePath' error.
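
For context, this is roughly the shape of the temp_db_settings block in question; the resource name, path, and LUN list are placeholders, and per the comment above the same 'DefaultFilePath' rejection is hit here as well:

resource "azurerm_mssql_virtual_machine" "example" {
  # all other parameters omitted for readability

  storage_configuration {
    disk_type             = "NEW"
    storage_workload_type = "GENERAL"

    temp_db_settings {
      default_file_path = "D:\\tempDb" # placeholder path for the tempdb files
      luns              = []           # empty when tempdb stays on the local temporary drive
    }
  }
}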

eissko commented 1 year ago

> Has there been any movement on this in the last year? I'm attempting to use temp_db_settings and I am running into the invalid 'DefaultFilePath' error.

I don't think so. I tried to put some effort into this and wrote up in detail what is going on on the Terraform AzureRM provider side and on the Azure API side, but it looks like everybody is happy with it as is.

For me, in its current form, the resource cannot really be managed by Terraform at all. We only deploy it once as the initial version and then manage the LUNs manually via the Azure Portal. :-/

Peter

khaman-pet commented 10 months ago

Is there any update on this issue?