microsoft / azure-container-apps

Roadmap and issues for Azure Container Apps
MIT License

Roadmap: Multiple ports support #763

Open torosent opened 1 year ago

torosent commented 1 year ago

8/30/2023 Public Preview: https://azure.microsoft.com/en-us/updates/public-preview-azure-container-apps-supports-additional-tcp-ports/
Docs: https://aka.ms/aca/additional-tcp-ports

Justrebl commented 1 year ago

Don't need more than just this title: Hyped 💯 😄

dhilgarth commented 1 year ago

Awesome, thanks for listening!

joaquinvacas commented 1 year ago

Waiting for this. I'm trying to migrate my current SMTP relay to Container Apps, but this limitation breaks the whole process.

For now I've made it work by running two instances of SMTP, one for port 25 and another for port 465, sharing the same volume. Not best practice, but it's the one that works.
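Once this feature is available, a single app should be able to serve both ports. A sketch of what I expect the ingress to look like, assuming the additionalPortMappings syntax from the preview (ports are just my SMTP case):

ingress:
  external: true
  transport: tcp
  targetPort: 465
  exposedPort: 465
  additionalPortMappings:
    - external: true
      targetPort: 25
      exposedPort: 25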

Phiph commented 1 year ago

Looking forward to this! Thank you team!

riccardopinosio commented 1 year ago

This is currently a massive blocker for ACA adoption in my opinion, as there are a ton of services that require multiple exposed ports. Really looking forward to this being implemented.

sebastian-hans-swm commented 1 year ago

I'm also really looking forward to being able to do remote debugging of my web applications.

elruss commented 1 year ago

Slightly confused here...are we talking about multiple ports exposed externally, or internally, or both?

My use case is Selenium Grid, where I need a "hub" container to have the only externally available ingress port for its management console. But separate node/worker Container Apps in the same environment need to be able to consume an event queue on the hub where the publish and subscribe ports are different.

So, I need a container app with one ingress port and two "internal" ports.

ahmelsayed commented 1 year ago

Slightly confused here...are we talking about multiple ports exposed externally, or internally, or both?

Both. Each additional port mapping will have its own external/internal setting.
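For example, a sketch of an ingress section for the Selenium Grid case above, with one external port and two internal-only mappings (port numbers are illustrative):

ingress:
  external: true        # hub management console, reachable from outside
  transport: tcp
  targetPort: 4444
  exposedPort: 4444
  additionalPortMappings:
    - external: false   # event bus publish port, environment-internal only
      targetPort: 4442
      exposedPort: 4442
    - external: false   # event bus subscribe port, environment-internal only
      targetPort: 4443
      exposedPort: 4443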

dvdr00t commented 1 year ago

Very hyped for this! 🚀 Any updates on the work so far? Is there an estimated release date?

torosent commented 1 year ago

We are working on the docs, but you can use it now with API version 2023-05-02-preview under the ingress section:

"additionalPortMappings": [
              {
                "external": true,
                "targetPort": 1234
              },
              {
                "external": false,
                "targetPort": 2345,
                "exposedPort": 3456
              }
            ]
elglogins commented 1 year ago

What is the reason for having a custom VNET as a requirement? To expose multiple external HTTP ports?

drapka commented 1 year ago

I am sorry, I am just starting with Azure and I am completely lost about updating an existing ACA to add the additionalPortMappings config using the CLI. I am using az containerapp update -g groupName -n appName --yaml <yamlPath>

The yaml contains the following section:

    ingress:
      external: true
      transport: Tcp
      allowInsecure: false
      targetPort: 10000
      exposedPort: 10000
      additionalPortMappings:
        - external: true
          targetPort: 1001
          exposedPort: 1001

But it still gives me error: Bad Request({"type":"https://tools.ietf.org/html/rfc7231#section-6.5.1","title":"One or more validation errors occurred.","status":400,"traceId":"00-f4f96e97b9de893a2316e4f101410e53-685b10742c4f49c5-01","errors":{"$":["Unknown properties additionalPortMappings in Microsoft.ContainerApps.WebApi.Views.Version20230401Preview.ContainerAppIngress are not supported"]}})

I believe I have the latest version of the CLI extension; specifying the preview apiVersion at the top of the yaml file seems to have no effect.

When I check the details of the container app via the az containerapp show command, I can already see the new property additionalPortMappings, which is obviously set to null.

Thanks for any help.

ahmelsayed commented 1 year ago

What is the reason for having a custom VNET as a requirement? To expose multiple external HTTP ports?

@elglogins additional ports are all TCP. External http ports are 80/443 only.

ahmelsayed commented 1 year ago

@drapka the cli hasn't been updated yet to use that preview api version, so it won't work there.

If you want to live dangerously :) you can put this in a patch.json file

{
  "properties": {
    "configuration": {
      "ingress": {
        "external": true,
        "transport": "Tcp",
        "targetPort": 10000,
        "exposedPort": 10000,
        "additionalPortMappings": [
          {
            "external": true,
            "targetPort": 1001,
            "exposedPort": 1001
          }
        ]
      }
    }
  }
}

then do

# get your app's full resource id
ID=$(az containerapp show -n appName -g groupName -o tsv --query id)

# patch the app using patch.json and api-version=2023-05-02-preview
az rest \
  --method patch \
  --body @patch.json \
  --url "${ID}?api-version=2023-05-02-preview"

# verify the property is there
az rest \
  --method get \
  --url "${ID}?api-version=2023-05-02-preview" | \
  jq -r '.properties.configuration.ingress'
gcrockenberg commented 1 year ago

What if the transport for the second port is different? For example, gRPC for integration between microservices?

ahmelsayed commented 1 year ago

What if the transport for the second port is different? For example, gRPC for integration between microservices?

@gcrockenberg All additional ports are TCP, so any TCP-based protocol (like HTTP/2/gRPC) should work.

tiwood commented 1 year ago

@ahmelsayed, if I understood this correctly, this won't allow us to expose UDP ports? If so, are there any plans to introduce this?

We have a use case where we must expose a service over UDP.

ahmelsayed commented 1 year ago

Correct, we don't have UDP. I can't speak to any plans myself.

simonkurtz-MSFT commented 1 year ago

Hi @torosent and @ahmelsayed,

First, thank you for this feature!

Is this still in development or is it in preview now as the documentation indicates? Do you have a rough ETA for GA?

https://learn.microsoft.com/en-us/azure/container-apps/ingress-overview#additional-tcp-ports

ahmelsayed commented 1 year ago

It's in preview now. You can use it through ARM/bicep (api-version=2023-05-02-preview) or the cli with --yaml option. Here is a bicep sample https://github.com/ahmelsayed/bicep-templates/blob/main/aca-app-multiple-ports.bicep#L20-L36

with --yaml in the cli, add the following to your ingress

additionalPortMappings:
  - external: false
    targetPort: 9090
    exposedPort: 9090
  - external: false
    targetPort: 22
    exposedPort: 22
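Assuming the containerapp CLI extension is installed, applying a YAML file that contains that ingress section should look roughly like this (resource names are placeholders):

# install or upgrade the containerapp extension, which carries preview features
az extension add -n containerapp --upgrade

# apply the YAML containing the additionalPortMappings section
az containerapp update -g <resource-group> -n <app-name> --yaml app.yaml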
gcrockenberg commented 1 year ago

I just tried running a bicep "what-if" with that preview and did not see the additionalPortMappings applied. I didn't see an error; I just didn't see the port mapping either.

RocheVal commented 1 year ago

Thanks for this feature!

I tried it with the cli and the --yaml option, but it doesn't seem to work.

Like @gcrockenberg, I didn't get any errors, but the additional ports are not accessible. The additionalPortMappings property is actually displayed in the output of the cli command with the correct values in it, but that's the only place I saw it (though since it's in preview, that doesn't surprise me).

For what it's worth, the "main" HTTP port is working correctly.

simonkurtz-MSFT commented 1 year ago

@chadray, you just worked on this and got it working with external: true, right? I'm not sure I fully understand that property yet, but @gcrockenberg and @RocheVal, it may be worth exploring.

howang-ms commented 1 year ago

@chadray, you just worked on this and got it working with external: true, right? I'm not sure I fully understand that property yet, but @gcrockenberg and @RocheVal, it may be worth exploring.

The external property indicates whether the app port is accessible outside of the environment.

RocheVal commented 1 year ago

Yes, I used the external: true to access it from outside.

In fact, in my yaml, my ingress looks like this:

ingress:
  allowInsecure: true
  external: true
  targetPort: 15672
  transport: http
  additionalPortMappings:
    - external: true
      exposedPort: 5672
      targetPort: 5672

I want to configure 2 external ports: one for HTTP and one for TCP.

gcrockenberg commented 1 year ago

I'll test it, though my intent is for the additional port to be used internally between containers. I do not want to expose it externally.

Thank you


dummy-andra commented 1 year ago

api-version=2023-05-02-preview

@ahmelsayed I tried the patching solution; in my case it fails with:

Bad Request({"error":{"code":"ContainerAppSecretInvalid","message":"Invalid Request: Container app secret(s) with name(s) 'reg-pswd-f60c7731-bf65' are invalid: value or keyVaultUrl and identity should be provided."}}).

jlkardas commented 1 year ago

Happy to see the feature is in preview!

I'm running a few container apps with an additional port specified on each one so I can facilitate gRPC between containers. My container app env is also deployed to a custom vnet. When I execute a grpcurl request within one container to another, I receive a successful response when addressing the internal IP address of the container, (e.g. grpcurl -plaintext 10.5.0.37:443 list). However, I cannot get a successful response when addressing the container by its host name + DNS suffix (e.g. my-service.<unique-identifier>.<location>.azurecontainerapps.io:443).

The additional ports are all external. Is there something I am missing?

additionalPorts:
  - external: true
    targetPort: 443
    exposedPort: 443
pizerg commented 1 year ago

Trying to deploy a container app with 4 external tcp ports using CLI + yml, the deployment completes just fine, but the provision status is stuck in "Provisioning". The only information available is the following log from the System Logs stream:

{"TimeStamp":"2023-09-21T08:58:32Z","Type":"Normal","ContainerAppName":null,"RevisionName":null,"ReplicaName":null,"Msg":"Connecting to the events collector...","Reason":"StartingGettingEvents","EventSource":"ContainerAppController","Count":1}
{"TimeStamp":"2023-09-21T08:58:35Z","Type":"Normal","ContainerAppName":null,"RevisionName":null,"ReplicaName":null,"Msg":"Successfully connected to events server","Reason":"ConnectedToEventsServer","EventSource":"ContainerAppController","Count":1}
{"TimeStamp":"2023-09-21 08:56:47 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"","ReplicaName":"","Msg":"Deactivating old revisions for ContainerApp \u0027[APP_NAME_HERE]\u0027","Reason":"RevisionDeactivating","EventSource":"ContainerAppController","Count":2}
{"TimeStamp":"2023-09-21 08:56:48 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"[APP_NAME_HERE]--z00g9fu","ReplicaName":"","Msg":"Successfully provisioned revision \[APP_NAME_HERE]--z00g9fu\u0027","Reason":"RevisionReady","EventSource":"ContainerAppController","Count":3}
{"TimeStamp":"2023-09-21 08:56:47 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"","ReplicaName":"","Msg":"Successfully updated containerApp: [APP_NAME_HERE]","Reason":"ContainerAppReady","EventSource":"ContainerAppController","Count":2}
{"TimeStamp":"2023-09-21 08:56:48 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"","ReplicaName":"","Msg":"Updating containerApp: [APP_NAME_HERE]","Reason":"ContainerAppUpdate","EventSource":"ContainerAppController","Count":11}
{"TimeStamp":"2023-09-21 08:56:48 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"[APP_NAME_HERE]--z00g9fu","ReplicaName":"","Msg":"Updating revision : [APP_NAME_HERE]--z00g9fu","Reason":"RevisionUpdate","EventSource":"ContainerAppController","Count":10}
{"TimeStamp":"2023-09-21 08:56:48 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"","ReplicaName":"","Msg":"Setting traffic weight of \u0027100%\u0027 for revision \u0027[APP_NAME_HERE]--z00g9fu\u0027","Reason":"RevisionUpdate","EventSource":"ContainerAppController","Count":3}
{"TimeStamp":"2023-09-21 08:56:46 \u002B0000 UTC","Type":"Warning","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"[APP_NAME_HERE]--z00g9fu","ReplicaName":"[APP_NAME_HERE]--z00g9fu-5f4b867ff9-rlhvm","Msg":"0/2 nodes are available: 2 node(s) didn\u0027t match Pod\u0027s node affinity/selector. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.","Reason":"AssigningReplicaFailed","EventSource":"ContainerAppController","Count":0}
{"TimeStamp":"2023-09-21 08:56:53 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"[APP_NAME_HERE]--z00g9fu","ReplicaName":"[APP_NAME_HERE]--z00g9fu-5f4b867ff9-rlhvm","Msg":"pod didn\u0027t trigger scale-up: 3 node(s) didn\u0027t match Pod\u0027s node affinity/selector","Reason":"NotTriggerScaleUp","EventSource":"ContainerAppController","Count":1}
{"TimeStamp":"2023-09-21T08:59:35Z","Type":"Normal","ContainerAppName":null,"RevisionName":null,"ReplicaName":null,"Msg":"No events since last 60 seconds","Reason":"NoNewEvents","EventSource":"ContainerAppController","Count":1}
{"TimeStamp":"2023-09-21T09:00:36Z","Type":"Normal","ContainerAppName":null,"RevisionName":null,"ReplicaName":null,"Msg":"No events since last 60 seconds","Reason":"NoNewEvents","EventSource":"ContainerAppController","Count":1}
{"TimeStamp":"2023-09-21T09:01:36Z","Type":"Normal","ContainerAppName":null,"RevisionName":null,"ReplicaName":null,"Msg":"No events since last 60 seconds","Reason":"NoNewEvents","EventSource":"ContainerAppController","Count":1}
{"TimeStamp":"2023-09-21 09:01:49 \u002B0000 UTC","Type":"Warning","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"[APP_NAME_HERE]--z00g9fu","ReplicaName":"[APP_NAME_HERE]--z00g9fu-5f4b867ff9-rlhvm","Msg":"0/2 nodes are available: 2 node(s) didn\u0027t match Pod\u0027s node affinity/selector. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.","Reason":"AssigningReplicaFailed","EventSource":"ContainerAppController","Count":0}
{"TimeStamp":"2023-09-21 09:01:55 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"[APP_NAME_HERE]--z00g9fu","ReplicaName":"[APP_NAME_HERE]--z00g9fu-5f4b867ff9-rlhvm","Msg":"pod didn\u0027t trigger scale-up: 3 node(s) didn\u0027t match Pod\u0027s node affinity/selector","Reason":"NotTriggerScaleUp","EventSource":"ContainerAppController","Count":31}
{"TimeStamp":"2023-09-21T09:02:55Z","Type":"Normal","ContainerAppName":null,"RevisionName":null,"ReplicaName":null,"Msg":"No events since last 60 seconds","Reason":"NoNewEvents","EventSource":"ContainerAppController","Count":1}

The following YAML was used to create the app:

location: westeurope
name: [APP_NAME_HERE]
properties:
  configuration:
    activeRevisionsMode: Single
    secrets:
      [SOME_SECRETS_HERE]
    registries:
      [ACR_SETTINGS_HERE]
    ingress:
      transport: tcp
      allowInsecure: false
      exposedPort: 9100
      targetPort: 9100
      external: true
      additionalPortMappings:
      - exposedPort: 9200
        targetPort: 9200
        external: true
      - exposedPort: 9300
        targetPort: 9300
        external: true
      - exposedPort: 9400
        targetPort: 9400
        external: true
      traffic:
      - latestRevision: true
        weight: 100
  managedEnvironmentId: [ENV_ID_HERE]
  template:
    containers:
    - image: [ACR_NAME_HERE]/[IMAGE_NAME_HERE]:[IMAGE_REVISION_HERE]
      name: [IMAGE_NAME_HERE]
      resources:
        cpu: 0.25
        memory: 0.5Gi
      env:
     [SOME_ENV_REFERENCING_SECRETS]
    scale:
      maxReplicas: 1
      minReplicas: 1
  workloadProfileName: Consumption
type: Microsoft.App/containerApps

After a while, the status changes to "Provisioned" but the Running Status becomes "Degraded" and no replica is actually running.

pizerg commented 1 year ago

@jlkardas

Have you tried changing the port to something other than 443?

roxana-muresan commented 11 months ago

Hello everyone!

When is this feature going to be generally available?

Best regards

zhenqxuMSFT commented 11 months ago

I'm running a few container apps with an additional port specified on each one so I can facilitate gRPC between containers. […] I cannot get a successful response when addressing the container by its host name + DNS suffix (e.g. my-service.<unique-identifier>.<location>.azurecontainerapps.io:443). The additional ports are all external. Is there something I am missing?

@jlkardas hostname + DNS suffix only supports ports with HTTP transport. You need to use <app name>:<port> for additional ports.
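For example, assuming an app named my-service with an additional TCP port 443, calling it from another app in the same environment should look like:

# address the app by name, not by the azurecontainerapps.io hostname
grpcurl -plaintext my-service:443 list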

zhenqxuMSFT commented 11 months ago

api-version=2023-05-02-preview

@ahmelsayed I tried the patching solution, in my case it fails with:

Bad Request({"error":{"code":"ContainerAppSecretInvalid","message":"Invalid Request: Container app secret(s) with name(s) 'reg-pswd-f60c7731-bf65' are invalid: value or keyVaultUrl and identity should be provided."}}).

@dummy-andra it looks like your secret was not configured correctly in the payload. Make sure you have provided the secret value if it's not a key vault secret.
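For illustration, a secret entry in the payload needs either a literal value or a keyVaultUrl plus identity (field names taken from the error message above; values here are placeholders):

"secrets": [
  {
    "name": "reg-pswd-f60c7731-bf65",
    "value": "<registry-password>"
  }
]

For a Key Vault-backed secret, replace value with keyVaultUrl and identity.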

zhenqxuMSFT commented 11 months ago

Trying to deploy a container app with 4 external tcp ports using CLI + yml, the deployment completes just fine, but the provision status is stuck in "Provisioning". […] After a while, the status changes to "Provisioned" but the Running Status becomes "Degraded" and no replica is actually running.

@pizerg this issue is related to additional ports. Are you still hitting the same issue now?

dummy-andra commented 11 months ago

api-version=2023-05-02-preview

@ahmelsayed I tried the patching solution, in my case it fails with: Bad Request({"error":{"code":"ContainerAppSecretInvalid","message":"Invalid Request: Container app secret(s) with name(s) 'reg-pswd-f60c7731-bf65' are invalid: value or keyVaultUrl and identity should be provided."}}).

@dummy-andra looks like your secret was not configured correctly in the payload. Make sure you have provide the secret value if it's not a key vault secret.

This secret reg-pswd-f60c7731-bf65 was NOT configured by me.

When deploying a container app with ACR, it automatically adds the ACR password to the ACA secrets, and reg-pswd-xxxxxx is the name of the secret automatically generated upon the app's creation.

Anyway, I recreated the ACA via Bicep and it worked, since updating it with the patch did not work as expected.

pizerg commented 11 months ago

Trying to deploy a container app with 4 external tcp ports using CLI + yml, the deployment completes just fine, but the provision status is stuck in "Provisioning". […]

@pizerg this issue is related to additional ports. Are you still hitting the same issue now?

@zhenqxuMSFT I opened a support request with the Azure team and they are investigating this; as far as I know, the issue was still happening last week. However, I managed to find a workaround: after the initial deploy fails as described in my initial message, I just create a new revision (using the portal, for example), and the new revision is deployed correctly and is fully functional, including the 4 tcp ports defined in the original failed deployment.
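In case it helps anyone scripting the workaround, the same thing should be possible from the CLI by copying the current revision (command from the containerapp extension; names are placeholders):

# redeploy by creating a copy of the latest revision
az containerapp revision copy -n <app-name> -g <resource-group>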

cforce commented 11 months ago

@zhenqxuMSFT Please support "transport": "http" for additional ports as well. If we only get TCP, we have to start offloading TLS in the app, which is very ugly. We want to be able to manage (custom) certs in Azure, not in our app, also because of load balancing etc.

It works over the additional port (no SSL; we would need to set up extra SSL inside the app). It does not work over the main port with SSL managed by ACA. Is this expected, or is it a bug?

pizerg commented 10 months ago

@zhenqxuMSFT

It seems that the official Azure DevOps pipeline breaks the additional ports configuration of an existing app that uses this feature (when no ingress settings are specified in the pipeline stage "Azure Container Apps Deploy" version 1.*) and keeps only the main port active.

jlkardas commented 9 months ago

@zhenqxuMSFT

It seems that the official Azure DevOps pipeline breaks the additional ports configuration of an existing app that uses this feature (when no ingress settings are specified in the pipeline stage "Azure Container Apps Deploy" version 1.*) and keeps only the main port active.

Also experiencing this issue

zhenqxuMSFT commented 9 months ago

@cforce Thanks for the feedback, noted.

It does not work over the main port with SSL managed by ACA. Is this expected, or is it a bug?

Did you mean the custom domain is not working for you? Could you elaborate more?

zhenqxuMSFT commented 9 months ago

@pizerg @jlkardas do you have any name of container app names and timeframes I can take a look? Or if it's possible for you to provide some steps for me to repro the issue?

RocheVal commented 9 months ago

I had to put this subject aside, but now I'm back on it. When I tried to reuse my "old" yaml file, the additional ports were not working (like when I tried a few months ago). And if I look at the configuration details, I don't see any of the additional ports (contrary to a few months ago, when I could see the additional ports in the config).

I used the workaround proposed by @ahmelsayed and updated my app through the API to add the additional ports, and it worked.

So it seems it's working only through the API.

I will continue to use it and let you know if I face other issues.

zhenqxuMSFT commented 9 months ago

@RocheVal could you upgrade to the latest cli and try with --yaml again? If that still doesn't work, could you send the output of the cli command with the --debug option to acasupport at microsoft dot com and we will take a look at the issue ASAP.

RocheVal commented 9 months ago

I updated az cli to 2.55.0 and the result is the same (app created but additional ports not working).

I sent an email to acasupport@microsoft.com with the output of cli with --debug.
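For anyone else collecting the same diagnostics: az writes --debug output to stderr, so something like this captures it to a file (same update command as above; names are placeholders):

az containerapp update -g <resource-group> -n <app-name> --yaml app.yaml --debug 2> debug.log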

Juliehzl commented 9 months ago

@RocheVal could you install the containerapp cli extension with az extension add -n containerapp and then try again? Only GA features are in the Azure CLI core; all preview features are in the containerapp extension.

RocheVal commented 9 months ago

Thank you, it's working correctly with the containerapp extension.

I didn't see it in the docs; I don't know if it wasn't specified or if I just didn't read carefully enough.

But thanks again, it's working as expected now!

pizerg commented 9 months ago

@zhenqxuMSFT

@pizerg @jlkardas do you have any name of container app names and timeframes I can take a look? Or if it's possible for you to provide some steps for me to repro the issue?

If you contact me privately, I can provide the required information. Otherwise, the repro steps are quite straightforward: deploy any container app with additional ports (in our case, 1 main tcp port and 3 additional tcp ports running in a consumption environment). After checking that all ports work correctly, deploy a new revision using the official Azure DevOps "Container App Deploy" pipeline (just leave the ingress settings empty), and you should see that only the main port is working after that.

jlkardas commented 9 months ago

@zhenqxuMSFT

@pizerg @jlkardas do you have any name of container app names and timeframes I can take a look? Or if it's possible for you to provide some steps for me to repro the issue?

If you contact me privately, I can provide the required information. Otherwise, the repro steps are quite straightforward: deploy any container app with additional ports (in our case, 1 main tcp port and 3 additional tcp ports running in a consumption environment). After checking that all ports work correctly, deploy a new revision using the official Azure DevOps "Container App Deploy" pipeline (just leave the ingress settings empty), and you should see that only the main port is working after that.

Similarly, if you could provide me with an email address, I would be more than happy to provide some documentation for our ACA and release pipeline setup, or whatever information you may need.

pizerg commented 8 months ago

@zhenqxuMSFT Any update on the issue related to Azure Pipelines?

Wycliffe-nml commented 6 months ago

Hi there,

We have a setup of 3 RabbitMQ container apps running the RabbitMQ Alpine image from Docker Hub. We have added an extra port for AMQP, and the other port is used for health checks via TCP. One of the containers is being used in our QA environment and the other 2 are not yet used. All three are in different resource groups.

We have an issue where the containers keep creating new replicas for no apparent reason; when they do, they all create new replicas on the same day. We have made sure the containers have 4 GB of RAM and that other resources are fine. The revisions stay the same, but why are new replicas getting created? Scaling is set to 1 - 1.

We would like to use this setup in production but are not sure how to fix the auto-creation of replicas. The health of the replicas is "Running (at max)".

Please assist