kavinkvb opened 10 months ago
Same here with azurerm provider 3.69.0 and 3.70.0.
It looks like Microsoft moved up the deprecation of retention policies. When trying to set up the diagnostic settings in the portal, I get the following error:
Storage retention via diagnostic settings is being deprecated and new rules can no longer be configured. To maintain your existing retention rules please migrate to Azure Storage Lifecycle Management by September 30th 2025. [What do I need to do?](https://go.microsoft.com/fwlink/?linkid=2243231)
September 30, 2023: You will no longer be able to use the API (CLI, PowerShell, or templates) or the Azure portal to configure retention settings unless you're changing them to 0. Existing retention rules will still be respected.
Created a support case to clarify with MS. I'll keep you posted.
First response from the support team:
The retention days setting below is no longer available. Instead, retention should be configured on the storage account itself (the destination) via the storage account's Lifecycle Management. [...] I would recommend removing the retention policy segment, or at least setting retention to false with a "days" value of 0 in the template, and trying again.
I removed the "days" from the implementation, and it worked. Retention is set on the target resource here, so the result is the same. We also use a Log Analytics workspace, like you do, and I see you have retention days defined on it as well, so that should be the quick fix.
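For anyone wondering what that suggestion looks like in practice, here is a minimal sketch of a diagnostic setting with the retention policy neutralized (enabled = false, days = 0); the resource names, category, and workspace reference are placeholders, not taken from the original config:

```hcl
# Hypothetical example: keep the retention_policy block but disable it,
# as the support team suggested (enabled = false, days = 0).
resource "azurerm_monitor_diagnostic_setting" "example" {
  name                       = "example-diag-setting"
  target_resource_id         = azurerm_key_vault.example.id
  log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id

  log {
    category = "AuditEvent"
    enabled  = true

    retention_policy {
      enabled = false
      days    = 0
    }
  }
}
```

Several commenters below report that simply dropping the retention_policy block entirely also works.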
Thanks, I also got the issue yesterday. Glad that I could find this on the internet.

```hcl
log {
  category = "StorageDelete"
  enabled  = true

  retention_policy {
    enabled = true
    days    = 365 # I will change this to 0
  }
}
```
I found this: https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy
If I set this to 0, that means unlimited retention, and the cost of my Log Analytics workspace will increase, unless the workspace automatically deletes data according to the retention set on the workspace itself.
Same issue started 23rd Aug 2023.

Previous:

```hcl
dynamic "log" {
  for_each = data.azurerm_monitor_diagnostic_categories.aks_diag_cat.log_category_types

  content {
    category = log.value
    enabled  = true

    retention_policy {
      days    = 30
      enabled = true
    }
  }
}
```
New (working):

```hcl
dynamic "log" {
  for_each = data.azurerm_monitor_diagnostic_categories.aks_diag_cat.log_category_types

  content {
    category = log.value
    enabled  = true
  }
}
```
Can someone kindly clarify: if we are targeting a Log Analytics workspace when enabling a diagnostic setting, do we need to specify the "retention_policy" block at all, since it seems to be relevant only when targeting a storage account?
I'm experiencing the same issue when trying to create diagnostic settings for PostgreSQL and Key Vault, whereas previously we had no problem deploying with the retention policy set.
I have started seeing this issue today, while trying to deploy resources to Azure using Terraform.
Error: creating Monitor Diagnostics Setting "xyx-diag-setting" for Resource "xyz-keyvault": diagnosticsettings.DiagnosticSettingsClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Diagnostic settings does not support retention for new diagnostic settings."
The deprecation dates look incorrect.
This is happening to me only when creating new resources. I'm trying to stick with azurerm 3.70.0, which worked before.
I'm seeing a similar issue on 3.67.0, which is completely new. I removed the retention rules and can now apply.
> This is happening to me only when creating new resources. I'm trying to stick with azurerm 3.70.0, which worked before.
Unfortunately, version 3.70.0 produces the same error. It looks like something else changed on the backend.
Same issue here. If anyone has a workaround, please share: it is not possible for us to completely disable the retention rule because of the cost, and we have a lot of resources, logs, and environments, so this is really blocking us. To be honest, I don't understand how these breaking changes are communicated; maybe I missed something, but it's clearly not clear enough.
Same issue here; I'm going to remove it from my code as mentioned above. You can set log retention in the Log Analytics workspace, which is probably what Microsoft is going to recommend people do going forward anyway.
> Same issue here; I'm going to remove it from my code as mentioned above. You can set log retention in the Log Analytics workspace, which is probably what Microsoft is going to recommend people do going forward anyway.
This is exactly what I'm doing right now. The problem on my side is that if you need different retention periods depending on the log category for the same resource, I can't find a way to be as granular as before.
Removing the block seems to re-introduce the perpetual plan changes, as previously posted.
Does someone have the Terraform code for adding a lifecycle management rule to a storage account to implement the retention policy?
> Does someone have the Terraform code for adding a lifecycle management rule to a storage account to implement the retention policy?
Please have a look at https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_management_policy.html.
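Not a drop-in answer, but a minimal sketch of what such a rule can look like with that resource; the storage account reference, rule name, the 30-day value, and the `insights-logs-` prefix are assumptions for illustration, not taken from anyone's config in this thread:

```hcl
# Hypothetical example: delete diagnostic log blobs from the destination
# storage account 30 days after they were last modified.
resource "azurerm_storage_management_policy" "diag_logs" {
  storage_account_id = azurerm_storage_account.logs.id

  rule {
    name    = "delete-old-diagnostic-logs"
    enabled = true

    filters {
      blob_types   = ["appendBlob"]
      prefix_match = ["insights-logs-"]
    }

    actions {
      base_blob {
        delete_after_days_since_modification_greater_than = 30
      }
    }
  }
}
```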
If we have the logs and metrics sent to an event hub, do we need to have retention_policy enabled?
Same issue here when sending diagnostics to a Log Analytics workspace. The documentation only describes the deprecation of the retention policy when logs are sent to a storage account.
@Annesars90 On my side, I now manage the retention policy at the Log Analytics workspace level and removed every retention period at the diagnostic settings level in the code. I deleted every diagnostic setting and recreated them from scratch with the new config, adding the retention period on the Log Analytics workspace. The only thing I don't know yet is how to get the same granularity as before.
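In case it helps others, a minimal sketch of the workspace-level retention approach; the names and the 30-day value are placeholders, not from my actual config:

```hcl
# Hypothetical example: retention handled on the workspace itself,
# with the retention_policy block removed from the diagnostic settings.
resource "azurerm_log_analytics_workspace" "example" {
  name                = "example-law"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  sku                 = "PerGB2018"
  retention_in_days   = 30
}
```

The trade-off is exactly what the next comments describe: this retention applies to the whole workspace rather than per log category.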
> I removed the "days" from the implementation, and it worked. Retention is set on the target resource here, so the result is the same. We also use a Log Analytics workspace, like you do, and I see you have retention days defined on it as well, so that should be the quick fix.
This is actually a problem when you keep all your logs in a centralized Log Analytics workspace. I want different retention values for different environments but still want to keep all my logs in the same workspace.
I just want to share, for anyone who stumbles upon this issue, how to temporarily work around it in case you need to redeploy your environment from scratch.
So we had a pipeline with almost the exact same configuration:
```hcl
# With retention_policy
resource "azurerm_monitor_diagnostic_setting" "aks_logs" {
  count = var.enable_log_analytics_workspace ? 1 : 0

  name                       = var.cluster_name
  target_resource_id         = azurerm_kubernetes_cluster.main.id
  log_analytics_workspace_id = azurerm_log_analytics_workspace.main[0].id

  log {
    category = "kube-apiserver"
    enabled  = true

    retention_policy {
      enabled = true
      days    = var.log_retention_in_days
    }
  }
}
```
The pipeline and the infrastructure were created a year ago. This configuration still deploys because there are no changes, but if you need to recreate this component, it fails with the error above.
To bypass this, all you have to do is remove the `retention_policy` block and deploy. After that, you can add the `retention_policy` block back and redeploy.
```hcl
# Without retention_policy
resource "azurerm_monitor_diagnostic_setting" "aks_logs" {
  count = var.enable_log_analytics_workspace ? 1 : 0

  name                       = var.cluster_name
  target_resource_id         = azurerm_kubernetes_cluster.main.id
  log_analytics_workspace_id = azurerm_log_analytics_workspace.main[0].id

  log {
    category = "kube-apiserver"
    enabled  = true
  }
}
```
It will not fix the issue for good, but at least it postpones it if you need to get past this blocking behavior as soon as possible.
Is there an existing issue for this?
Community Note
Terraform Version
0.13.4
AzureRM Provider Version
3.68.0
Affected Resource(s)/Data Source(s)
azurerm_monitor_diagnostic_setting
Terraform Configuration Files
Debug Output/Panic Output
Expected Behaviour
The diagnostic setting should be created and work as expected.
Actual Behaviour
No response
Steps to Reproduce
No response
Important Factoids
No response
References
No response