hashicorp / terraform-provider-azurerm

Terraform provider for Azure Resource Manager
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
Mozilla Public License 2.0

Support for additionalPortMappings in azurerm_container_app #23442

Open kf6kjg opened 1 year ago

kf6kjg commented 1 year ago

Description

This feature is still in preview, but I figured it was worth adding to the queue so that support can be planned and land in a timely manner.

In API version 2023-05-02-preview, Azure added support for a new block inside the ingress block: additionalPortMappings.

This feature request is to track adding support for that feature once it is released in a stable API version.

New or Affected Resource(s)/Data Source(s)

azurerm_container_app

Potential Terraform Configuration

resource "azurerm_container_app" "example" {
  name                         = "example-app"
  container_app_environment_id = azurerm_container_app_environment.example.id
  resource_group_name          = azurerm_resource_group.example.name
  revision_mode                = "Single"

  ingress {
    target_port = 1234
    # The new item:
    additional_port_mapping {
      external_enabled = true
      target_port      = 4321
    }
    additional_port_mapping {
      target_port = 2345
    }
  }

  template {
    container {
      name   = "examplecontainerapp"
      image  = "mcr.microsoft.com/azuredocs/containerapps-helloworld:latest"
      cpu    = 0.25
      memory = "0.5Gi"
    }
  }
}

References

https://github.com/microsoft/azure-container-apps/issues/763

rcskosir commented 1 year ago

@kf6kjg Thank you for taking the time to open this feature request!

ZuitAMB commented 9 months ago

According to https://azure.microsoft.com/de-de/updates/generally-available-azure-container-apps-supports-additional-tcp-ports/ it should be generally available now. (However, so far it is only supported by the newest CLI extension.)

pabi18 commented 7 months ago

Any updates here?

youyinnn commented 6 months ago

According to https://azure.microsoft.com/de-de/updates/generally-available-azure-container-apps-supports-additional-tcp-ports/ it should be generally available now. (However, so far it is only supported by the newest CLI extension.)

I am sorry, but how? How do you configure it with Terraform?

ZuitAMB commented 6 months ago

@youyinnn As far as I know, it is not possible using Terraform yet. To enable Terraform deployments with this new feature, we probably need a new Azure Container Apps API version from Azure: https://learn.microsoft.com/en-us/azure/templates/microsoft.app/change-log/summary

jsheetzmt commented 6 months ago

additionalPortMappings is available in the latest Azure API. https://learn.microsoft.com/en-us/azure/templates/microsoft.app/containerapps?pivots=deployment-language-terraform#ingress-2

ZuitAMB commented 6 months ago

additionalPortMappings is available in the latest Azure API.

Unfortunately, the latest version is a preview version:

2023-11-02-preview <- latest
2023-08-01-preview
2023-05-02-preview <- introduction of additionalPortMappings
2023-05-01 <- latest non-preview version

Hopefully, we get a 2024-0X-XX non-preview version soon

aellwein commented 4 months ago

I've stumbled upon this issue; unfortunately, we need this urgently. Can someone help with this?

roisanchezriveira commented 4 months ago

I haven't seen any progress on it. I personally worked around it by using the azapi provider to update the resource JSON definition directly for this (and also for the probes, which are defined one way in the portal and differently in the API). And yes, I've been using the preview version of the API.

resource "azurerm_container_app" "container_app" {
  name                         = "ca-example"
  container_app_environment_id = var.container_app_environment_id
  resource_group_name          = var.container_apps_rg
  revision_mode                = "Single"

  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.container_app_user.id]
  }

  registry {
    server   = var.registry
    identity = azurerm_user_assigned_identity.container_app_user.id
  }

  dynamic "secret" {
    for_each = local.ca_secrets
    content {
      identity            = azurerm_user_assigned_identity.container_app_user.id
      key_vault_secret_id = secret.value
      name                = secret.key
      value               = null
    }
  }

  template {
    revision_suffix = null
    container {
      name   = "example"
      image  = "${var.registry}/${var.image}"
      cpu    = var.cpu
      memory = var.memory
      dynamic "volume_mounts" {
        for_each = var.storage_mounts
        content {
          name = volume_mounts.key
          path = volume_mounts.value
        }
      }
      dynamic "env" {
        for_each = local.app_env_variables
        content {
          name        = env.key
          value       = env.value
          secret_name = env.key
        }
      }
      liveness_probe {
        failure_count_threshold = 2
        path                    = var.probes["liveness_probe"].path
        initial_delay           = var.probes["liveness_probe"].initial_delay
        interval_seconds        = var.probes["liveness_probe"].period
        port                    = var.probes["liveness_probe"].port
        timeout                 = 1
        transport               = upper(var.probes["liveness_probe"].transport)
      }
      readiness_probe {
        failure_count_threshold = 2
        success_count_threshold = 3
        path                    = var.probes["readiness_probe"].path
        interval_seconds        = var.probes["readiness_probe"].period
        port                    = var.probes["readiness_probe"].port
        timeout                 = 1
        transport               = upper(var.probes["readiness_probe"].transport)
      }
      startup_probe {
        failure_count_threshold = 2
        path                    = var.probes["startup_probe"].path
        interval_seconds        = var.probes["startup_probe"].period
        port                    = var.probes["startup_probe"].port
        timeout                 = 1
        transport               = upper(var.probes["startup_probe"].transport)
      }
    }
    http_scale_rule {
      name                = "http"
      concurrent_requests = 100
    }
    max_replicas = 2
    min_replicas = 1
  }

  ingress {
    allow_insecure_connections = false
    external_enabled           = true
    target_port                = 8080
    traffic_weight {
      percentage      = 100
      latest_revision = true
    }
  }

  lifecycle {
    ignore_changes = [
      template[0].container[0].liveness_probe,
      template[0].container[0].readiness_probe,
      template[0].container[0].startup_probe,
      template[0].container[0].image
    ]
  }
}

# update the container app with extra additionalPortMappings, as this is not supported by the existing TF provider
resource "azapi_update_resource" "container_app_api" {
  type        = "Microsoft.App/containerApps@2023-11-02-preview"
  resource_id = azurerm_container_app.container_app.id

  body = jsonencode({
    properties = {
      configuration = {
        ingress = {
          clientCertificateMode = "Ignore"
          stickySessions = {
            affinity = "none"
          }
          additionalPortMappings = var.additional_ports
        }
      }
      template = {
        containers = [{
          probes = [
            {
              httpGet = {
                path   = var.probes["liveness_probe"].path
                port   = var.probes["liveness_probe"].port
                scheme = upper(var.probes["liveness_probe"].transport)
              }
              initialDelaySeconds = var.probes["liveness_probe"].initial_delay
              periodSeconds       = var.probes["liveness_probe"].period
              type                = "Liveness"
            },
            {
              httpGet = {
                path   = var.probes["readiness_probe"].path
                port   = var.probes["readiness_probe"].port
                scheme = upper(var.probes["readiness_probe"].transport)
              }
              initialDelaySeconds = var.probes["readiness_probe"].initial_delay
              periodSeconds       = var.probes["readiness_probe"].period
              type                = "Readiness"
            },
            {
              httpGet = {
                path   = var.probes["startup_probe"].path
                port   = var.probes["startup_probe"].port
                scheme = upper(var.probes["startup_probe"].transport)
              }
              initialDelaySeconds = var.probes["startup_probe"].initial_delay
              periodSeconds       = var.probes["startup_probe"].period
              type                = "Startup"
            }
          ]
        }]
      }
    }
  })

  depends_on = [
    azurerm_container_app.container_app,
  ]
  lifecycle {
    replace_triggered_by = [azurerm_container_app.container_app]
  }
}
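
The var.additional_ports referenced above is not defined in the snippet; a minimal sketch of a matching variable, assuming the entries follow the API's TcpIngressPortMapping shape (exposedPort, external, targetPort), could look like this:

variable "additional_ports" {
  description = "Entries passed straight through to ingress.additionalPortMappings."
  # Hypothetical shape; keys are camelCase because the values are fed
  # directly into the jsonencode()'d API body above.
  type = list(object({
    exposedPort = number
    external    = bool
    targetPort  = number
  }))
  default = [
    {
      exposedPort = 4321
      external    = true
      targetPort  = 4321
    }
  ]
}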

aellwein commented 4 months ago

Thanks, @roisanchezriveira, that definitely helps!

fblampe commented 4 months ago

additionalPortMappings is available in the latest Azure API. Hopefully, we get a 2024-0X-XX non-preview version soon

There's a non-preview version 2024-03-01 that supports this feature: https://learn.microsoft.com/en-us/rest/api/containerapps/container-apps/create-or-update?view=rest-containerapps-2024-03-01&tabs=HTTP#create-or-update-container-app

So, is there a chance that this could be added to Terraform?

aellwein commented 4 months ago

I also found another unsupported attribute there: template.volumes[].mountOptions. It can be set via the Portal but not in Terraform.
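
A similar azapi patch should work there as well; a minimal sketch, assuming the azurerm_container_app.container_app resource from the workaround above and a hypothetical Azure Files volume named "data":

resource "azapi_update_resource" "volume_mount_options" {
  type        = "Microsoft.App/containerApps@2023-11-02-preview"
  resource_id = azurerm_container_app.container_app.id

  body = jsonencode({
    properties = {
      template = {
        volumes = [{
          # Volume name, storage account, and mount options are assumptions
          # for illustration; reuse the values from your own volume block.
          name         = "data"
          storageType  = "AzureFile"
          storageName  = "mystorage"
          mountOptions = "dir_mode=0777,file_mode=0777"
        }]
      }
    }
  })
}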

kawahara-titan commented 4 months ago

@roisanchezriveira - I have been doing something similar, except using azapi_resource_action. We were using secrets on our container app, and because azapi_update_resource uses a PUT, it apparently performs a GET to retrieve all of the missing attributes. Because the secrets are not returned by the GET, you end up getting a "ContainerAppSecretInvalid" error.
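
A minimal sketch of that variant, assuming azapi_resource_action is pointed at the same container app and that issuing a PATCH lets the API merge the change without requiring the full PUT body:

resource "azapi_resource_action" "additional_ports" {
  type        = "Microsoft.App/containerApps@2023-11-02-preview"
  resource_id = azurerm_container_app.container_app.id
  method      = "PATCH"

  # Hypothetical: only the ingress ports are sent, so untouched attributes
  # such as secrets are never round-tripped through a GET.
  body = jsonencode({
    properties = {
      configuration = {
        ingress = {
          additionalPortMappings = var.additional_ports
        }
      }
    }
  })
}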

At any rate, what I wanted to ask you is whether you experience an issue where the additional ports become blank after successive apply steps? I have seemingly been observing that behavior and am trying to make sure I'm not crazy.

roisanchezriveira commented 4 months ago

@roisanchezriveira - I have been doing something similar, except using azapi_resource_action. We were using secrets on our container app, and because azapi_update_resource uses a PUT, it apparently performs a GET to retrieve all of the missing attributes. Because the secrets are not returned by the GET, you end up getting a "ContainerAppSecretInvalid" error.

At any rate, what I wanted to ask you is whether you experience an issue where the additional ports become blank after successive apply steps? I have seemingly been observing that behavior and am trying to make sure I'm not crazy.

I had the same issue; that's why I added this to the container app resource:

  lifecycle {
    ignore_changes = [
      template[0].container[0].liveness_probe,
      template[0].container[0].readiness_probe,
      template[0].container[0].startup_probe,
    ]
  }

And this to the azapi one:

  lifecycle {
    replace_triggered_by = [azurerm_container_app.container_app]
  }

My guess is that the azapi resource was modifying the container app and triggering a modification on subsequent applies (so I added the ignore_changes on the probes to avoid that), and that any change on the azurerm resource wipes the azapi changes, so I added the trigger to ensure the ports are always mapped after any other change to the container app.

kawahara-titan commented 4 months ago

@roisanchezriveira - I have been doing something similar, except using azapi_resource_action. We were using secrets on our container app, and because azapi_update_resource uses a PUT, it apparently performs a GET to retrieve all of the missing attributes. Because the secrets are not returned by the GET, you end up getting a "ContainerAppSecretInvalid" error. At any rate, what I wanted to ask you is whether you experience an issue where the additional ports become blank after successive apply steps? I have seemingly been observing that behavior and am trying to make sure I'm not crazy.

I had the same issue; that's why I added this to the container app resource:

  lifecycle {
    ignore_changes = [
      template[0].container[0].liveness_probe,
      template[0].container[0].readiness_probe,
      template[0].container[0].startup_probe,
    ]
  }

And this to the azapi one:

  lifecycle {
    replace_triggered_by = [azurerm_container_app.container_app]
  }

My guess is that the azapi resource was modifying the container app and triggering a modification on subsequent applies (so I added the ignore_changes on the probes to avoid that), and that any change on the azurerm resource wipes the azapi changes, so I added the trigger to ensure the ports are always mapped after any other change to the container app.

Thanks for confirming. And your solution to just use the replace_triggered_by on the container app is a lot more elegant than what I was considering!