Closed pamelafox closed 1 year ago
route to CXP team
@pamelafox Thanks for reaching out to us and reporting this issue. Could you please run the same command with the --debug switch and share the detailed debug output here? This will help us assist you better.
@pamelafox I wanted to do a quick follow-up to check whether you have had a chance to look at my comment above. Please let us know if you have any updates on this. Awaiting your reply.
Thanks for the follow-up! I am working on re-creating the scenario (I moved on to using Azure Container Apps instead, but I should hopefully be able to replicate with the --debug flag).
I'm not able to replicate the original error because I'm now stuck on a different one: it continually pulls the image, starts the container, and then kills the container. I haven't found any logs explaining why it kills the container, so I don't know whether this is an issue with my docker-compose.yaml or an Azure issue.
The docker-compose.yaml:
```yaml
---
version: "2.1"
services:
  babybuddy:
    image: lscr.io/linuxserver/babybuddy:latest
    container_name: babybuddy
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    ports:
      - 8000:8000
    restart: unless-stopped
```
Here are the logs from this situation:
{
"properties": {
"sku": "Standard",
"provisioningState": "Creating",
"containers": [{
"name": "babybuddy",
"properties": {
"image": "lscr.io/linuxserver/babybuddy:latest",
"ports": [{
"protocol": "TCP",
"port": 8000
}],
"environmentVariables": [{
"name": "TZ",
"value": "Europe/London"
}, {
"name": "PUID",
"value": "1000"
}, {
"name": "PGID",
"value": "1000"
}],
"instanceView": {
"restartCount": 15,
"currentState": {
"state": "Waiting",
"detailStatus": "CrashLoopBackOff: Back-off restarting failed"
},
"previousState": {
"state": "Terminated",
"startTime": "2022-10-17T17:13:24.489Z",
"exitCode": 100,
"finishTime": "2022-10-17T17:13:27.407Z",
"detailStatus": "Error"
},
"events": [{
"count": 1,
"firstTimestamp": "2022-10-16T00:08:30Z",
"lastTimestamp": "2022-10-16T00:08:30Z",
"name": "Pulling",
"message": "pulling image \"lscr.io/linuxserver/babybuddy@sha256:65622b9b0cc9f589f12317bf808bf593ebd34994868bf87d48f3b14cbed6866e\"",
"type": "Normal"
}, {
"count": 1,
"firstTimestamp": "2022-10-16T00:08:57Z",
"lastTimestamp": "2022-10-16T00:08:57Z",
"name": "Pulled",
"message": "Successfully pulled image \"lscr.io/linuxserver/babybuddy@sha256:65622b9b0cc9f589f12317bf808bf593ebd34994868bf87d48f3b14cbed6866e\"",
"type": "Normal"
}, {
"count": 212,
"firstTimestamp": "2022-10-16T00:09:02Z",
"lastTimestamp": "2022-10-16T07:28:37Z",
"name": "Started",
"message": "Started container",
"type": "Normal"
}, {
"count": 212,
"firstTimestamp": "2022-10-16T00:09:05Z",
"lastTimestamp": "2022-10-16T07:28:39Z",
"name": "Killing",
"message": "Killing container with id ee097b4c3b969911532c6fb801617b690e668c1a2dd154e2a36fc0cba66eff49.",
"type": "Normal"
}, {
"count": 1,
"firstTimestamp": "2022-10-16T07:26:29Z",
"lastTimestamp": "2022-10-16T07:26:29Z",
"name": "Pulling",
"message": "pulling image \"lscr.io/linuxserver/babybuddy@sha256:65622b9b0cc9f589f12317bf808bf593ebd34994868bf87d48f3b14cbed6866e\"",
"type": "Normal"
}, {
"count": 1,
"firstTimestamp": "2022-10-16T07:27:16Z",
"lastTimestamp": "2022-10-16T07:27:16Z",
"name": "Pulled",
"message": "Successfully pulled image \"lscr.io/linuxserver/babybuddy@sha256:65622b9b0cc9f589f12317bf808bf593ebd34994868bf87d48f3b14cbed6866e\"",
"type": "Normal"
}, {
"count": 957,
"firstTimestamp": "2022-10-16T07:28:51Z",
"lastTimestamp": "2022-10-17T16:48:25Z",
"name": "Started",
"message": "Started container",
"type": "Normal"
}, {
"count": 958,
"firstTimestamp": "2022-10-16T07:28:52Z",
"lastTimestamp": "2022-10-17T16:48:27Z",
"name": "Killing",
"message": "Killing container with id 90240c14b6b29991b4fb8bb8d9f71307f21726dde37da9f7ff118ef8e1ec7213.",
"type": "Normal"
}, {
"count": 1,
"firstTimestamp": "2022-10-16T11:34:49Z",
"lastTimestamp": "2022-10-16T11:34:49Z",
"name": "Pulling",
"message": "pulling image \"lscr.io/linuxserver/babybuddy@sha256:65622b9b0cc9f589f12317bf808bf593ebd34994868bf87d48f3b14cbed6866e\"",
"type": "Normal"
}, {
"count": 1,
"firstTimestamp": "2022-10-16T11:35:23Z",
"lastTimestamp": "2022-10-16T11:35:23Z",
"name": "Pulled",
"message": "Successfully pulled image \"lscr.io/linuxserver/babybuddy@sha256:65622b9b0cc9f589f12317bf808bf593ebd34994868bf87d48f3b14cbed6866e\"",
"type": "Normal"
}, {
"count": 1,
"firstTimestamp": "2022-10-17T16:48:27Z",
"lastTimestamp": "2022-10-17T16:48:27Z",
"name": "Pulling",
"message": "pulling image \"lscr.io/linuxserver/babybuddy@sha256:c1a0b2df526443481094ffc2996ac90cfad736f83ea749cc583a4ad1cb19f2ec\"",
"type": "Normal"
}, {
"count": 1,
"firstTimestamp": "2022-10-17T16:48:54Z",
"lastTimestamp": "2022-10-17T16:48:54Z",
"name": "Pulled",
"message": "Successfully pulled image \"lscr.io/linuxserver/babybuddy@sha256:c1a0b2df526443481094ffc2996ac90cfad736f83ea749cc583a4ad1cb19f2ec\"",
"type": "Normal"
}]
},
"resources": {
"requests": {
"memoryInGB": 1.0,
"cpu": 1.0
},
"limits": {
"memoryInGB": 1.0,
"cpu": 1.0
}
}
}
}],
"initContainers": [],
"restartPolicy": "Always",
"ipAddress": {
"ports": [{
"protocol": "TCP",
"port": 8000
}],
"ip": "20.241.142.229",
"type": "Public"
},
"osType": "Linux",
"instanceView": {
"events": [],
"state": "Unknown"
}
},
"id": "/subscriptions/32ea8a26-5b40-4838-b6cb-be5c89a57c16/resourceGroups/babybuddy-resource-group/providers/Microsoft.ContainerInstance/containerGroups/babybuddy",
"name": "babybuddy",
"type": "Microsoft.ContainerInstance/containerGroups",
"location": "eastus",
"tags": {
"docker-compose-application": "docker-compose-application"
}
}
I just tried with a slightly different compose file, which I believe matches my setup when I first reported this, but I'm still in a crash backoff loop.
```yaml
version: "2.1"
services:
  babybuddy:
    image: lscr.io/linuxserver/babybuddy:latest
    container_name: babybuddy
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com
    volumes:
      - mydata:/config
    ports:
      - 8000:8000
    restart: unless-stopped
volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: babybuddy
      storage_account_name: babybuddystorageaccount
```
I've gone ahead and taken the instance down for now, so as not to waste resources on unsuccessful restarts:

```
pamelafox@Pamelas-MBP-2 babybuddy % docker compose --file docker-compose-vol.yaml down
WARNING: fileshare "babybuddystorageaccount/babybuddy" will NOT be deleted. Use 'docker volume rm' if you want to delete this volume
pollingTrackerBase#pollForStatus: failed to send HTTP request: StatusCode=0 -- Original Error: Get "": unsupported protocol scheme ""
```
@pamelafox The current error "CrashLoopBackOff" is not related to the Azure CLI. This forum is for Azure CLI issues only. If you are facing the initially reported CLI error again, please let us know. We would be happy to help.
Okay, I'll close this for now.
@pamelafox If you are internal to Microsoft, please ping me and I can share the contact details of the internal Microsoft team who can assist you further with the "CrashLoopBackOff" error on ACI.
Ah okay, yes, I am internal. I'll ping, thanks!
@pamelafox Hi Pamela,
We have the same issue here from another customer. https://learn.microsoft.com/en-us/answers/questions/1153255/s6-overlay-support-in-container-instances
Do we have a mitigation so the application can run fine in ACI?
Sorry, I never got it working in ACI, I ported my app to ACA instead.
I would be really interested in a solution. I am having basically the same problem as @pamelafox and can't figure out why. The container runs fine locally, so I don't think it has to do with the application code itself. Is there an option to get someone to look at this again?
@sebastiangeiger01 The team needs more information to replicate, can you rerun in debug mode? See this comment: https://github.com/Azure/azure-cli/issues/24159#issuecomment-1276977340 Could you also share your yaml file?
Thanks @pamelafox for the quick answer. Here is my yaml (don't worry, all code/logs shown are test data):
```yaml
version: '3'
services:
  webapp:
    image: bikesharingcreg.azurecr.io/bike-sharing-webapp
    build: ./webapp
    restart: always
    env_file:
      - database
  database:
    image: postgres
    restart: always
    volumes:
      - database.conf
  adminer:
    image: adminer
    restart: always
    ports:
volumes:
  data:
    driver: azure_file
    driver_opts:
      share_name: bikeshare
      storage_account_name: bikestorageacchsmainz
      storage_account_key:
```
And here is my output after running `az container attach` in --debug mode (cut to the part where the error occurs). Strangely, the error changed from the one at "lines = log.content.split('\n')" to "log = client.list_logs(resource_group_name, name, container_name)", so a different line in your referenced code, but see for yourself:
'Start streaming logs: cli.azure.cli.core.auth.credential_adaptor: CredentialAdaptor.get_token: scopes=('https://management.core.windows.net//.default',), kwargs={} cli.azure.cli.core.auth.msal_authentication: UserCredential.get_token: scopes=('https://management.core.windows.net//.default',), claims=None, kwargs={} msal.application: Cache hit an AT msal.telemetry: Generate or reuse correlation_id: 0581d767-8cce-448c-a4e9-4d00187aa7d7 cli.azure.cli.core.sdk.policies: Request URL: 'https://management.azure.com/subscriptions/0de0ed36-01c1-4764-9632-6ec04ba50778/resourceGroups/bikesharing/providers/Microsoft.ContainerInstance/containerGroups/bike-sharing/containers/adminer/logs?api-version=2021-09-01' cli.azure.cli.core.sdk.policies: Request method: 'GET' cli.azure.cli.core.sdk.policies: Request headers: cli.azure.cli.core.sdk.policies: 'Accept': 'application/json' cli.azure.cli.core.sdk.policies: 'x-ms-client-request-id': '7567fd40-9b46-11ed-a2e3-ce451298a208' cli.azure.cli.core.sdk.policies: 'CommandName': 'container attach' cli.azure.cli.core.sdk.policies: 'ParameterSetName': '--resource-group --name --debug' cli.azure.cli.core.sdk.policies: 'User-Agent': 'AZURECLI/2.44.1 (HOMEBREW) azsdk-python-mgmt-containerinstance/9.1.0 Python/3.10.9 (macOS-13.1-arm64-arm-64bit)' cli.azure.cli.core.sdk.policies: 'Authorization': '**' cli.azure.cli.core.sdk.policies: Request body: cli.azure.cli.core.sdk.policies: This request has no body urllib3.connectionpool: Starting new HTTPS connection (1): management.azure.com:443 urllib3.connectionpool: https://management.azure.com:443 "GET /subscriptions/0de0ed36-01c1-4764-9632-6ec04ba50778/resourceGroups/bikesharing/providers/Microsoft.ContainerInstance/containerGroups/bike-sharing/containers/adminer/logs?api-version=2021-09-01 HTTP/1.1" 400 132 cli.azure.cli.core.sdk.policies: Response status: 400 cli.azure.cli.core.sdk.policies: Response headers: cli.azure.cli.core.sdk.policies: 'Cache-Control': 'no-cache' 
cli.azure.cli.core.sdk.policies: 'Pragma': 'no-cache' cli.azure.cli.core.sdk.policies: 'Content-Length': '132' cli.azure.cli.core.sdk.policies: 'Content-Type': 'application/json; charset=utf-8' cli.azure.cli.core.sdk.policies: 'Expires': '-1' cli.azure.cli.core.sdk.policies: 'x-ms-request-id': 'eastus:982d48b9-fcf9-4ef4-940e-38f313ad5cf9' cli.azure.cli.core.sdk.policies: 'x-ms-ratelimit-remaining-subscription-reads': '11996' cli.azure.cli.core.sdk.policies: 'x-ms-correlation-request-id': 'a28569f5-e0c6-4da2-823c-f68c5f54acd9' cli.azure.cli.core.sdk.policies: 'x-ms-routing-request-id': 'GERMANYWESTCENTRAL:20230123T175153Z:a28569f5-e0c6-4da2-823c-f68c5f54acd9' cli.azure.cli.core.sdk.policies: 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains' cli.azure.cli.core.sdk.policies: 'X-Content-Type-Options': 'nosniff' cli.azure.cli.core.sdk.policies: 'Date': 'Mon, 23 Jan 2023 17:51:52 GMT' cli.azure.cli.core.sdk.policies: Response content: cli.azure.cli.core.sdk.policies: {"error":{"code":"ContainerGroupDeploymentNotReady","message":"The container group 'bike-sharing' is not ready for the operation."}} Exception in thread Thread-1 (_stream_container_events_and_logs): Traceback (most recent call last): File "/opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner self.run() File "/opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run self._target(self._args, self._kwargs) File "/opt/homebrew/Cellar/azure-cli/2.44.1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/container/custom.py", line 862, in _stream_container_events_and_logs _stream_logs(container_client, resource_group_name, name, container_name, container_group.restart_policy) File "/opt/homebrew/Cellar/azure-cli/2.44.1/libexec/lib/python3.10/site-packages/azure/cli/command_modules/container/custom.py", line 815, in _stream_logs 
log = client.list_logs(resource_group_name, name, container_name) File "/opt/homebrew/Cellar/azure-cli/2.44.1/libexec/lib/python3.10/site-packages/azure/mgmt/containerinstance/operations/_containers_operations.py", line 115, in list_logs raise HttpResponseError(response=response, error_format=ARMErrorFormat) azure.core.exceptions.HttpResponseError: (ContainerGroupDeploymentNotReady) The container group 'bike-sharing' is not ready for the operation. Code: ContainerGroupDeploymentNotReady Message: The container group 'bike-sharing' is not ready for the operation. cli.azure.cli.core.sdk.policies: Request URL: 'https://management.azure.com/subscriptions/0de0ed36-01c1-4764-9632-6ec04ba50778/resourceGroups/bikesharing/providers/Microsoft.ContainerInstance/containerGroups/bike-sharing?api-version=2021-09-01' cli.azure.cli.core.sdk.policies: Request method: 'GET' cli.azure.cli.core.sdk.policies: Request headers: cli.azure.cli.core.sdk.policies: 'Accept': 'application/json' cli.azure.cli.core.sdk.policies: 'x-ms-client-request-id': '7567fd40-9b46-11ed-a2e3-ce451298a208' cli.azure.cli.core.sdk.policies: 'CommandName': 'container attach' cli.azure.cli.core.sdk.policies: 'ParameterSetName': '--resource-group --name --debug' cli.azure.cli.core.sdk.policies: 'User-Agent': 'AZURECLI/2.44.1 (HOMEBREW) azsdk-python-mgmt-containerinstance/9.1.0 Python/3.10.9 (macOS-13.1-arm64-arm-64bit)' cli.azure.cli.core.sdk.policies: 'Authorization': '*****' cli.azure.cli.core.sdk.policies: Request body: cli.azure.cli.core.sdk.policies: This request has no body urllib3.connectionpool: https://management.azure.com:443 "GET /subscriptions/0de0ed36-01c1-4764-9632-6ec04ba50778/resourceGroups/bikesharing/providers/Microsoft.ContainerInstance/containerGroups/bike-sharing?api-version=2021-09-01 HTTP/1.1" 200 None cli.azure.cli.core.sdk.policies: Response status: 200 cli.azure.cli.core.sdk.policies: Response headers: cli.azure.cli.core.sdk.policies: 'Cache-Control': 'no-cache' 
cli.azure.cli.core.sdk.policies: 'Pragma': 'no-cache' cli.azure.cli.core.sdk.policies: 'Transfer-Encoding': 'chunked' cli.azure.cli.core.sdk.policies: 'Content-Type': 'application/json; charset=utf-8' cli.azure.cli.core.sdk.policies: 'Content-Encoding': 'gzip' cli.azure.cli.core.sdk.policies: 'Expires': '-1' cli.azure.cli.core.sdk.policies: 'Vary': 'Accept-Encoding,Accept-Encoding' cli.azure.cli.core.sdk.policies: 'x-ms-request-id': 'eastus:f097e716-e986-4329-bf85-38ea51491486' cli.azure.cli.core.sdk.policies: 'x-ms-ratelimit-remaining-subscription-reads': '11975' cli.azure.cli.core.sdk.policies: 'x-ms-correlation-request-id': 'e8fb9b0b-8ff5-4f17-89f7-323ecd8c608a' cli.azure.cli.core.sdk.policies: 'x-ms-routing-request-id': 'GERMANYWESTCENTRAL:20230123T175157Z:e8fb9b0b-8ff5-4f17-89f7-323ecd8c608a' cli.azure.cli.core.sdk.policies: 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains' cli.azure.cli.core.sdk.policies: 'X-Content-Type-Options': 'nosniff' cli.azure.cli.core.sdk.policies: 'Date': 'Mon, 23 Jan 2023 17:51:57 GMT' cli.azure.cli.core.sdk.policies: Response content: cli.azure.cli.core.sdk.policies: {"properties":{"sku":"Standard","provisioningState":"Succeeded","containers":[{"name":"adminer","properties":{"image":"adminer","ports":[{"protocol":"TCP","port":8080}],"environmentVariables":[],"instanceView":{"restartCount":0,"currentState":{"state":"Running","startTime":"2023-01-23T17:51:49.108Z","detailStatus":""},"events":[{"count":1,"firstTimestamp":"2023-01-23T17:50:44Z","lastTimestamp":"2023-01-23T17:50:44Z","name":"Pulling","message":"pulling image \"adminer@sha256:2ca89c714c8adc4fa870c08198d72f51878039e6dfe974386e0a166814a24f55\"","type":"Normal"},{"count":1,"firstTimestamp":"2023-01-23T17:51:32Z","lastTimestamp":"2023-01-23T17:51:32Z","name":"Pulled","message":"Successfully pulled image 
\"adminer@sha256:2ca89c714c8adc4fa870c08198d72f51878039e6dfe974386e0a166814a24f55\"","type":"Normal"},{"count":1,"firstTimestamp":"2023-01-23T17:51:49Z","lastTimestamp":"2023-01-23T17:51:49Z","name":"Started","message":"Started container","type":"Normal"}]},"resources":{"requests":{"memoryInGB":1.0,"cpu":1.0},"limits":{"memoryInGB":1.0,"cpu":1.0}}}},{"name":"webapp","properties":{"image":"bikesharingcreg.azurecr.io/bike-sharing-webapp","ports":[{"protocol":"TCP","port":80}],"environmentVariables":[{"name":"POSTGRES_DB","value":"postgres"},{"name":"POSTGRES_PASSWORD","value":"example"},{"name":"SECURITY_PASSWORD_SALT","value":"146585145368132386173505678016728509634"},{"name":"SECURITY_EMAIL_SENDER","value":"noreply.bikerental@gmail.com"},{"name":"MAIL_PASSWORD","value":"rwfigoblrwefirkb"},{"name":"POSTGRES_PORT","value":"5432"},{"name":"POSTGRES_USER","value":"admin"},{"name":"SECRET_KEY","value":"pf9Wkove4IKEAXvy-cQkeDPhv9Cb3Ag-wyJILbq_dFw"},{"name":"MAIL_SERVER","value":"smtp.gmail.com"},{"name":"MAIL_PORT","value":"587"},{"name":"MAIL_USE_TLS","value":"true"},{"name":"MAIL_USERNAME","value":"noreply.bikerental@gmail.com"},{"name":"POSTGRES_HOST","value":"database"}],"instanceView":{"restartCount":1,"currentState":{"state":"Waiting","detailStatus":"CrashLoopBackOff: Back-off restarting failed"},"previousState":{"state":"Terminated","startTime":"2023-01-23T17:51:49.716Z","exitCode":1,"finishTime":"2023-01-23T17:51:53.685Z","detailStatus":"Error"},"events":[{"count":1,"firstTimestamp":"2023-01-23T17:50:44Z","lastTimestamp":"2023-01-23T17:50:44Z","name":"Pulling","message":"pulling image \"bikesharingcreg.azurecr.io/bike-sharing-webapp:latest\"","type":"Normal"},{"count":1,"firstTimestamp":"2023-01-23T17:51:32Z","lastTimestamp":"2023-01-23T17:51:32Z","name":"Pulled","message":"Successfully pulled image 
\"bikesharingcreg.azurecr.io/bike-sharing-webapp:latest\"","type":"Normal"},{"count":1,"firstTimestamp":"2023-01-23T17:51:49Z","lastTimestamp":"2023-01-23T17:51:49Z","name":"Started","message":"Started container","type":"Normal"}]},"resources":{"requests":{"memoryInGB":1.0,"cpu":1.0},"limits":{"memoryInGB":1.0,"cpu":1.0}}}},{"name":"database","properties":{"image":"postgres","ports":[],"environmentVariables":[{"name":"POSTGRES_HOST","value":"database"},{"name":"POSTGRES_PORT","value":"5432"},{"name":"POSTGRES_DB","value":"postgres"},{"name":"POSTGRES_USER","value":"admin"},{"name":"POSTGRES_PASSWORD","value":"example"}],"instanceView":{"restartCount":0,"currentState":{"state":"Running","startTime":"2023-01-23T17:51:49.765Z","detailStatus":""},"events":[{"count":1,"firstTimestamp":"2023-01-23T17:50:44Z","lastTimestamp":"2023-01-23T17:50:44Z","name":"Pulling","message":"pulling image \"postgres@sha256:1629bc36c63077ef0ef8b6ea7ff1d601a5211019f15f6b3fd03084788dfaae55\"","type":"Normal"},{"count":1,"firstTimestamp":"2023-01-23T17:51:32Z","lastTimestamp":"2023-01-23T17:51:32Z","name":"Pulled","message":"Successfully pulled image \"postgres@sha256:1629bc36c63077ef0ef8b6ea7ff1d601a5211019f15f6b3fd03084788dfaae55\"","type":"Normal"},{"count":1,"firstTimestamp":"2023-01-23T17:51:49Z","lastTimestamp":"2023-01-23T17:51:49Z","name":"Started","message":"Started container","type":"Normal"}]},"resources":{"requests":{"memoryInGB":1.0,"cpu":1.0},"limits":{"memoryInGB":1.0,"cpu":1.0}},"volumeMounts":[{"name":"data","mountPath":"/home/postgres"}]}},{"name":"aci--dns--sidecar","properties":{"image":"docker/aci-hostnames-sidecar:1.0","command":["/hosts","adminer","webapp","database"],"ports":[],"environmentVariables":[],"instanceView":{"restartCount":0,"currentState":{"state":"Running","startTime":"2023-01-23T17:51:48.808Z","detailStatus":""},"events":[{"count":1,"firstTimestamp":"2023-01-23T17:50:44Z","lastTimestamp":"2023-01-23T17:50:44Z","name":"Pulling","message":"pulling image 
\"docker/aci-hostnames-sidecar@sha256:e16f3eadf23c6d2eceabfeae2fcf6478e84c5b8acaaf5cffd34cf964798004a5\"","type":"Normal"},{"count":1,"firstTimestamp":"2023-01-23T17:51:32Z","lastTimestamp":"2023-01-23T17:51:32Z","name":"Pulled","message":"Successfully pulled image \"docker/aci-hostnames-sidecar@sha256:e16f3eadf23c6d2eceabfeae2fcf6478e84c5b8acaaf5cffd34cf964798004a5\"","type":"Normal"},{"count":1,"firstTimestamp":"2023-01-23T17:51:48Z","lastTimestamp":"2023-01-23T17:51:48Z","name":"Started","message":"Started container","type":"Normal"}]},"resources":{"requests":{"memoryInGB":0.1,"cpu":0.01}}}}],"initContainers":[],"imageRegistryCredentials":[{"server":"bikesharingcreg.azurecr.io","username":"00000000-0000-0000-0000-000000000000"}],"restartPolicy":"Always","ipAddress":{"ports":[{"protocol":"TCP","port":8080},{"protocol":"TCP","port":80}],"ip":"20.246.201.152","type":"Public"},"osType":"Linux","volumes":[{"name":"data","azureFile":{"shareName":"bikeshare","readOnly":false,"storageAccountName":"bikestorageacchsmainz"}}],"instanceView":{"events":[{"count":1,"firstTimestamp":"2023-01-23T17:45:57.343Z","lastTimestamp":"2023-01-23T17:45:57.343Z","name":"SuccessfulMountAzureFileVolume","message":"Successfully mounted Azure File Volume.","type":"Normal"},{"count":1,"firstTimestamp":"2023-01-23T17:51:47.664Z","lastTimestamp":"2023-01-23T17:51:47.664Z","name":"SuccessfulMountAzureFileVolume","message":"Successfully mounted Azure File Volume.","type":"Normal"}],"state":"Running"}},"id":"/subscriptions/0de0ed36-01c1-4764-9632-6ec04ba50778/resourceGroups/bikesharing/providers/Microsoft.ContainerInstance/containerGroups/bike-sharing","name":"bike-sharing","type":"Microsoft.ContainerInstance/containerGroups","location":"eastus","tags":{"docker-compose-application":"docker-compose-application"}} cli.knack.cli: Event: CommandInvoker.OnTransformResult [<function _resource_group_transform at 0x1022e3010>, <function _x509_from_base64_to_hex_transform at 0x1022e30a0>] cli.knack.cli: 
Event: CommandInvoker.OnFilterResult [] cli.knack.cli: Event: Cli.SuccessfulExecute [] cli.knack.cli: Event: Cli.PostExecute [<function AzCliLogging.deinit_cmd_metadata_logging at 0x10229f5b0>] az_command_data_logger: exit code: 0 cli.main: Command ran in 79.039 seconds (init: 0.080, invoke: 78.959) telemetry.main: Begin splitting cli events and extra events, total events: 1 telemetry.client: Accumulated 0 events. Flush the clients. telemetry.main: Finish splitting cli events and extra events, cli events: 1 telemetry.save: Save telemetry record of length 3307 in cache telemetry.check: Returns Positive. telemetry.main: Begin creating telemetry upload process. telemetry.process: Creating upload process: "/opt/homebrew/Cellar/azure-cli/2.44.1/libexec/bin/python /opt/homebrew/Cellar/azure-cli/2.44.1/libexec/lib/python3.10/site-packages/azure/cli/telemetry/init.py /Users/sebgeiger/.azure" telemetry.process: Return from creating process telemetry.main: Finish creating telemetry upload process.'
The "webapp" container is stuck in "waiting" after this. I am also using macOS and running Azure CLI version '2.44.1'.
Thanks! I'm not on ACI team but maybe this is enough info for @navba-msft to take another look.
Small update: a colleague of mine tried deploying the application to Azure from a Windows machine, and it worked fine for him. Seems like an Azure CLI bug on macOS.
Also on Linux
Okay, I'll close this for now.
Why has this been closed? It is still an issue
I've reopened; please share the logs when you run the command in --debug mode.
Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @macolso.
| Author: | pamelafox |
|---|---|
| Assignees: | - |
| Labels: | `Service Attention`, `Container Instances`, `question`, `needs-author-feedback`, `Auto-Assign` |
| Milestone: | - |
I believe this happens when no logs are returned from the Azure side (for example, when the container gets killed): the stream returns an object `log` whose `content` attribute is `None`.
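If that's the cause, a small guard would avoid the crash. Below is an illustrative sketch, not the actual CLI source: the helper name is made up, and only the `(content or "").split(...)` pattern is the point. The real loop lives in `azure/cli/command_modules/container/custom.py` (`_stream_logs`).

```python
def split_log_content(content):
    """Split a log body into lines, tolerating a None body.

    When the container is killed before emitting any logs, the
    service can hand back a log object whose content is None;
    coalescing to "" avoids the AttributeError on .split().
    """
    return (content or "").split("\n")


# A None body yields one empty line instead of crashing:
print(split_log_content(None))        # [""]
print(split_log_content("a\nb\nc"))   # ["a", "b", "c"]
```

The same one-line change could in principle be applied where the CLI currently calls `log.content.split('\n')` directly.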
Hi, we're sending this friendly reminder because we haven't heard back from you in a while. We need more information about this issue to help address it. Please be sure to give us your input within the next 7 days. If we don't hear back from you within 14 days of this comment the issue will be automatically closed. Thank you!
Hello, I have this same issue when trying to run a dotnet exe in a container instance from an Azure release pipeline (PowerShell Core inline script) and then running az container attach. Here is the inline script:
```powershell
# Create the container instance
az container create --name $containerName --resource-group $resourceGroup ...

# Attach to the container instance
az container attach --name $containerName --resource-group $resourceGroup
```
This is not always reproducible (I assume because of some kind of timing variance), but sometimes I get the described error (copied below), which causes the pipeline stage task to fail. I have failOnStdErr set to true because I actually do want the task to fail if my program fails due to errors. The problem is that this error has nothing to do with whether my program ran successfully, but it still fails the release pipeline task:
2024-05-07T08:12:08.7506166Z ##[error]Exception in thread Thread-1 (_stream_container_events_and_logs): Traceback (most recent call last): File "/opt/az/lib/python3.11/threading.py", line 1045, in _bootstrap_inner self.run() File "/opt/az/lib/python3.11/threading.py", line 982, in run self._target(*self._args, **self._kwargs) File "/opt/az/lib/python3.11/site-packages/azure/cli/command_modules/container/custom.py", line 912, in _stream_container_events_and_logs _stream_logs(container_client, resource_group_name, name, container_name, container_group.restart_policy) File "/opt/az/lib/python3.11/site-packages/azure/cli/command_modules/container/custom.py", line 866, in _stream_logs lines = log.content.split('\n') 2024-05-07T08:12:08.7510744Z ##[error]Script has output to stderr. Failing as failOnStdErr is set to true.
I am open to other workarounds to be able to run my program in an az container instance within an azure pipeline, but it does seem like this could be considered a normal use case for az container instance and az container attach. Will this be fixed or looked at further? I see an open PR #26410.
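One workaround I can imagine (a sketch, not an official fix): instead of streaming with `az container attach`, poll the container's state until it terminates and then fetch logs once, so a transient streaming error can't fail the stage. The polling logic is generic; `fetch_state` here is a hypothetical stand-in for however you query the state, e.g. parsing the output of `az container show --query "containers[0].instanceView.currentState.state"`.

```python
import time

def wait_for_terminal_state(fetch_state, attempts=30, delay=10.0):
    """Poll fetch_state() until the container reaches a terminal state.

    fetch_state: callable returning the current state string,
    e.g. "Running" or "Terminated" (as reported by ACI).
    Returns the terminal state, or raises TimeoutError.
    """
    for _ in range(attempts):
        state = fetch_state()
        if state in ("Terminated", "Succeeded"):
            return state
        time.sleep(delay)
    raise TimeoutError("container never reached a terminal state")


# Simulated usage: the container runs twice, then terminates.
states = iter(["Running", "Running", "Terminated"])
print(wait_for_terminal_state(lambda: next(states), delay=0.0))  # Terminated
```

After the loop returns, a single `az container logs` call (rather than a stream) would retrieve the output, and the script could set the task result based on the container's exit code.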
Why is this closed?
Describe the bug
When I ran the attach command, it streamed the logs at first and then ended in an AttributeError.
Command Name
az container attach --resource-group babybuddy-resource-group --name babybuddy
Errors:
Here's the tail end of the output that shows the error:
That error is from this line of code: https://github.com/Azure/azure-cli/blob/dev/src/azure-cli/azure/cli/command_modules/container/custom.py#L814 That line presumes that log.content will be a string, but it appears to be None for me. I don't know if that's expected, or if there's a larger error behind the None output.
`az container logs` also produces `None` when I run it.
To Reproduce:
I created a container instance using docker compose up and a docker-compose.yml. I can share that with you if it helps.
I then ran the command:
az container attach --resource-group babybuddy-resource-group --name babybuddy
Expected Behavior
I don't expect a runtime error.
Environment Summary
Additional Context