Azure / acr

Azure Container Registry samples, troubleshooting tips and references
https://aka.ms/acr

Pushing images to Azure Container Registry extremely slow #214

Closed · TurtleBeach closed this issue 4 years ago

TurtleBeach commented 5 years ago

Is this a BUG REPORT or FEATURE REQUEST?: Bug Report
What happened?: Multiple attempts to push a ~700 MB image (nanoserver + PowerShell + Java SE) take over 2 hours, then fail with "unauthorized: authentication required".
What did you expect to happen?: The image to be placed in the container registry (an image based only on nanoserver succeeded).
How do you reproduce it (as minimally and precisely as possible)?: Build the image locally with Docker and confirm it runs with a minimal (echo) action. Logged in with az acr login. Also tried changing the registry from Basic to Standard, which did not help.
Anything else we need to know?: Tried docker push … from the W10 command line and from PowerShell (same command).
Environment (if applicable to the issue):

sajayantony commented 5 years ago

Were you able to try something like ACR Tasks to build/push your image, just to rule out network issues? Also, could you please try https://azurespeedtest.azurewebsites.net/?

If these are all good, then a possible workaround would be to try using the admin credentials, just to verify whether this is due to the access token expiration for ACR. We are increasing the access token timeout in the meantime.
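For reference, the ACR Tasks route can be tried with a single CLI command; a minimal sketch, assuming a registry named myregistry and an image myimage:v1 (both placeholders):

# Build in the ACR Tasks service and push from within Azure,
# so the upload does not depend on the local network link.
az acr build --registry myregistry --image myimage:v1 .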

profesor79 commented 5 years ago

Hi guys, I am having the same issue when uploading a 17 GB container image into ACR. Unfortunately, docker login is no help in my case; even trying to log in to ACR using az acr login in a second terminal is not helping me. The speed test looks fine. ACR SKU: Basic, 100 GB.

TurtleBeach commented 5 years ago

Using Admin credentials resolved the token timeout issue for me (though I think Service Principal credentials would also work - they do in other contexts). In my case it appears there were also issues with the local (on-island) service provider. In particular, there seemed to be an inordinate number of hops between the Cayman Islands and East US in Northern Virginia. I was able to get my corporate network manager to switch to another on-island provider with more upstream bandwidth, and for the last few days the time has gone from 4-5 hours to 2 minutes (a slightly larger file than previously also).

profesor79 commented 5 years ago

@TurtleBeach thanks for the hint; I used that before commenting on this issue. Honestly, I am thinking this could be related to image size, as I had a script running in a separate terminal that was logging me in every 5 minutes. I will try with 1 minute now and let you know.

#!/usr/bin/env bash

# Re-run az acr login every 5 minutes to keep the cached Docker credential fresh.
for (( ; ; ))
do
   echo "Press CTRL+C to stop..."
   az acr login --name myacr
   sleep 5m
done

TurtleBeach commented 5 years ago

@profesor79 I just looked back at some e-mails from MS outside of GitHub, and the key is that you must log in to your container registry using Docker login: docker login your-registry-name, then provide the admin credentials. Also, even though registry names are not case sensitive, you must use all lowercase when entering the registry name. Make sure you are logged out of az acr, then log in using docker login, then try the push.
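A minimal sketch of that sequence, assuming a registry named myregistry and an image myimage:v1 (placeholders), and that the admin account has been enabled on the registry:

# Clear any stale cached credential for the registry.
docker logout myregistry.azurecr.io

# Enable the admin account and read its credentials
# (useful for troubleshooting only; see the production caveats below).
az acr update --name myregistry --admin-enabled true
az acr credential show --name myregistry --query "{user:username, pass:passwords[0].value}"

# Log back in with the admin credentials; use the all-lowercase login server name.
docker login myregistry.azurecr.io --username <admin-username> --password <admin-password>

# Retry the push.
docker push myregistry.azurecr.io/myimage:v1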

profesor79 commented 5 years ago

@TurtleBeach Thanks for the hint. The funny thing is that az acr is missing a logout command :) Now I am pushing the image from another server and will let you know how that goes. Logging in even every minute did not help (I naively believed the key would be refreshed, but it looks like the push command sticks to the key it obtained when the process was started).

SteveLasker commented 5 years ago

Sorry for the pain, folks. I did want to let folks know we understand the challenge with the difficult choice between the admin account, which I call "the demo account, not to be used in production", and service principals. The admin account has push/pull rights and there is only one, so you can't break down granular rights for different processes. However, it's highly reliable because there's no external dependency. Service principals have granular rights and you can have multiple, but they have timeouts, are difficult to provision, and haven't been reliable. We are working on time-based, repo-scoped tokens that we hope to have available this fall. This will provide best practices with performance and reliability. Thanks for your patience and feedback. Steve

TurtleBeach commented 5 years ago

@profesor79 I made an incorrect statement. You are correct that you can't log out of acr. The point is to be sure you are logged out of Docker on your local machine, then log back in with the admin credentials (mindful of @SteveLasker's comments above).

profesor79 commented 5 years ago

@SteveLasker I know the difference between the account types, but in an edge case most of us are just seeking a solution :). The 16.9 GiB layer was pushed using @TurtleBeach's hint, so I am happy at the moment, especially since this image is used only by our development team.

Thank you guys for help and awesome work with ACR!

SteveLasker commented 5 years ago

Yup, we understand the problem we’ve put people in and we’re working on a better long term solution. For now, this is sometimes the best we have. We just want users to understand the trade offs.

msyihchen commented 5 years ago

@profesor79 cat ~/.docker/config.json. Docker credentials are cached there. az acr login calls the underlying docker login command, so the credential is saved in ~/.docker/config.json. You need to run docker logout xxx.azurecr.io to remove the credential. Does the original problem still exist? @TurtleBeach

dersia commented 5 years ago

We are experiencing the same issue. We are actually using ACR Tasks to build; whereas the build is done within 3 minutes, pushing takes another approx. 20 minutes.

2019/07/08 13:33:08 Downloading source code...
2019/07/08 13:33:10 Finished downloading source code
2019/07/08 13:33:10 Using acb_vol_7485bf58-0fe4-4d51-8fc2-0379cd7879e1 as the home volume
2019/07/08 13:33:10 Setting up Docker configuration...
2019/07/08 13:33:11 Successfully set up Docker configuration
2019/07/08 13:33:11 Logging in to registry: acr
2019/07/08 13:33:17 Successfully logged into acr
2019/07/08 13:33:17 Executing step ID: build. Timeout(sec): 28800, Working directory: '', Network: ''
2019/07/08 13:33:17 Scanning for dependencies...
2019/07/08 13:33:18 Successfully scanned dependencies
2019/07/08 13:33:18 Launching container with name: build
Sending build context to Docker daemon  3.787MB
Step 1/12 : FROM node:8 as builder
8: Pulling from library/node
6f2f362378c5: Already exists
494c27a8a6b8: Already exists
7596bb83081b: Already exists
372744b62d49: Already exists
615db220d76c: Already exists
afaefeaac9ee: Pulling fs layer
f70dffb8ed80: Pulling fs layer
551f1f61f881: Pulling fs layer
f9237cec1fb5: Pulling fs layer
f9237cec1fb5: Waiting
afaefeaac9ee: Verifying Checksum
afaefeaac9ee: Download complete
551f1f61f881: Verifying Checksum
551f1f61f881: Download complete
afaefeaac9ee: Pull complete
f70dffb8ed80: Verifying Checksum
f70dffb8ed80: Download complete
f9237cec1fb5: Verifying Checksum
f9237cec1fb5: Download complete
f70dffb8ed80: Pull complete
551f1f61f881: Pull complete
f9237cec1fb5: Pull complete
Digest: sha256:123
Status: Downloaded newer image for node:8
 ---> a7dabdc7cd4b
Step 2/12 : RUN mkdir /usr/src/app
 ---> Running in 884bb622281f
Removing intermediate container 884bb622281f
 ---> 9387cf812a8d
Step 3/12 : WORKDIR /usr/src/app
 ---> Running in d78ccf4a5e06
Removing intermediate container d78ccf4a5e06
 ---> a96516c07277
Step 4/12 : ENV PATH /usr/src/app/node_modules/.bin:$PATH
 ---> Running in b96c2e331793
Removing intermediate container b96c2e331793
 ---> ccd1f073e856
Step 5/12 : COPY package.json /usr/src/app/package.json
 ---> 4d5d2cc806ae
Step 6/12 : RUN yarn install --silent
 ---> Running in 0f055f1af446
Removing intermediate container 0f055f1af446
 ---> 8051d99e22b6
Step 7/12 : COPY . /usr/src/app
 ---> 997d7be4ca65
Step 8/12 : RUN yarn run build::linux
 ---> Running in b2b67c7630a4
yarn run v1.15.2
Done in 63.55s.
Removing intermediate container b2b67c7630a4
 ---> 4e3f33e57ce1
Step 9/12 : FROM nginx:1.13.9-alpine
1.13.9-alpine: Pulling from library/nginx
ff3a5c916c92: Already exists
b4f3ef22ce5b: Pulling fs layer
8a6541d11dc3: Pulling fs layer
7e869e2dcf68: Pulling fs layer
7e869e2dcf68: Verifying Checksum
7e869e2dcf68: Download complete
8a6541d11dc3: Verifying Checksum
8a6541d11dc3: Download complete
b4f3ef22ce5b: Verifying Checksum
b4f3ef22ce5b: Download complete
b4f3ef22ce5b: Pull complete
8a6541d11dc3: Pull complete
7e869e2dcf68: Pull complete
Digest: sha256:456
Status: Downloaded newer image for nginx:1.13.9-alpine
 ---> 537527661905
Step 10/12 : COPY --from=builder /usr/src/app/build /usr/share/nginx/html
 ---> 70550208e9c1
Step 11/12 : EXPOSE 80
 ---> Running in 01f08752571f
Removing intermediate container 01f08752571f
 ---> 07436d6fcd9e
Step 12/12 : CMD ["nginx", "-g", "daemon off;"]
 ---> Running in 5d355a375613
Removing intermediate container 5d355a375613
 ---> 729b200e2163
Successfully built 729b200e2163
Successfully tagged acr/imagename:latest
Successfully tagged acr/imagename:abc
Successfully tagged acr/imagename:def
Successfully tagged acr/imagename:ghi
Successfully tagged acr/imagename:jkl
Successfully tagged acr/imagename:mno
2019/07/08 13:36:21 Successfully executed container: build
2019/07/08 13:36:21 Executing step ID: push. Timeout(sec): 1800, Working directory: '', Network: ''
2019/07/08 13:36:21 Pushing image: acr/imagename:latest, attempt 1
The push refers to repository [acr/imagename]
9a40ad303d23: Preparing
5efc006b5ed6: Preparing
b0efd61aab3d: Preparing
fd8dbe3c801b: Preparing
cd7100a72410: Preparing
cd7100a72410: Layer already exists
b0efd61aab3d: Layer already exists
fd8dbe3c801b: Layer already exists
5efc006b5ed6: Layer already exists
9a40ad303d23: Pushed
latest: digest: sha256:789 size: 1364
2019/07/08 13:42:37 Successfully pushed image: acr/imagename:latest
2019/07/08 13:42:37 Pushing image: acr/imagename:abc, attempt 1
The push refers to repository [acr/imagename]
9a40ad303d23: Preparing
5efc006b5ed6: Preparing
b0efd61aab3d: Preparing
fd8dbe3c801b: Preparing
cd7100a72410: Preparing
9a40ad303d23: Layer already exists
b0efd61aab3d: Layer already exists
cd7100a72410: Layer already exists
fd8dbe3c801b: Layer already exists
5efc006b5ed6: Layer already exists
abc: digest: sha256:789 size: 1364
2019/07/08 13:45:48 Successfully pushed image: acr/imagename:abc
2019/07/08 13:45:48 Pushing image: acr/imagename:def, attempt 1
The push refers to repository [acr/imagename]
9a40ad303d23: Preparing
5efc006b5ed6: Preparing
b0efd61aab3d: Preparing
fd8dbe3c801b: Preparing
cd7100a72410: Preparing
cd7100a72410: Layer already exists
b0efd61aab3d: Layer already exists
5efc006b5ed6: Layer already exists
fd8dbe3c801b: Layer already exists
9a40ad303d23: Layer already exists
def: digest: sha256:789 size: 1364
2019/07/08 13:47:45 Successfully pushed image: acr/imagename:def
2019/07/08 13:47:45 Pushing image: acr/imagename:ghi, attempt 1
The push refers to repository [acr/imagename]
9a40ad303d23: Preparing
5efc006b5ed6: Preparing
b0efd61aab3d: Preparing
fd8dbe3c801b: Preparing
cd7100a72410: Preparing
9a40ad303d23: Layer already exists
b0efd61aab3d: Layer already exists
cd7100a72410: Layer already exists
fd8dbe3c801b: Layer already exists
5efc006b5ed6: Layer already exists
ghi: digest: sha256:789 size: 1364
2019/07/08 13:51:40 Successfully pushed image: acr/imagename:ghi
2019/07/08 13:51:40 Pushing image: acr/imagename:jkl, attempt 1
The push refers to repository [acr/imagename]
9a40ad303d23: Preparing
5efc006b5ed6: Preparing
b0efd61aab3d: Preparing
fd8dbe3c801b: Preparing
cd7100a72410: Preparing
fd8dbe3c801b: Layer already exists
b0efd61aab3d: Layer already exists
5efc006b5ed6: Layer already exists
cd7100a72410: Layer already exists
9a40ad303d23: Layer already exists
jkl: digest: sha256:789 size: 1364
2019/07/08 13:54:21 Successfully pushed image: acr/imagename:jkl
2019/07/08 13:54:21 Pushing image: acr/imagename:mno, attempt 1
The push refers to repository [acr/imagename]
9a40ad303d23: Preparing
5efc006b5ed6: Preparing
b0efd61aab3d: Preparing
fd8dbe3c801b: Preparing
cd7100a72410: Preparing
9a40ad303d23: Layer already exists
5efc006b5ed6: Layer already exists
cd7100a72410: Layer already exists
b0efd61aab3d: Layer already exists
fd8dbe3c801b: Layer already exists
mno: digest: sha256:789 size:1364
2019/07/08 13:57:13 Successfully pushed image: acr/imagename:mno
2019/07/08 13:57:13 Step ID: build marked as successful (elapsed time in seconds: 183.590577)
2019/07/08 13:57:13 Populating digests for step ID: build...
2019/07/08 13:57:28 Successfully populated digests for step ID: build
2019/07/08 13:57:28 Step ID: push marked as successful (elapsed time in seconds: 1252.421259)
2019/07/08 13:57:28 The following dependencies were found:
2019/07/08 13:57:28
- image:
    registry: acr
    repository: imagename
    tag: latest
    digest: sha256:789
  runtime-dependency:
    registry: registry.hub.docker.com
    repository: library/nginx
    tag: 1.13.9-alpine
    digest: sha256:456
  buildtime-dependency:
  - registry: registry.hub.docker.com
    repository: library/node
    tag: "8"
    digest: sha256:123
  git: {}
- image:
    registry: acr
    repository: imagename
    tag: abc
    digest: sha256:789
  runtime-dependency:
    registry: registry.hub.docker.com
    repository: library/nginx
    tag: 1.13.9-alpine
    digest: sha256:456
  buildtime-dependency:
  - registry: registry.hub.docker.com
    repository: library/node
    tag: "8"
    digest: sha256:123
  git: {}
- image:
    registry: acr
    repository: imagename
    tag: def
    digest: sha256:789
  runtime-dependency:
    registry: registry.hub.docker.com
    repository: library/nginx
    tag: 1.13.9-alpine
    digest: sha256:456
  buildtime-dependency:
  - registry: registry.hub.docker.com
    repository: library/node
    tag: "8"
    digest: sha256:123
  git: {}
- image:
    registry: acr
    repository: imagename
    tag: ghi
    digest: sha256:789
  runtime-dependency:
    registry: registry.hub.docker.com
    repository: library/nginx
    tag: 1.13.9-alpine
    digest: sha256:456
  buildtime-dependency:
  - registry: registry.hub.docker.com
    repository: library/node
    tag: "8"
    digest: sha256:123
  git: {}
- image:
    registry: acr
    repository: imagename
    tag: jkl
    digest: sha256:789
  runtime-dependency:
    registry: registry.hub.docker.com
    repository: library/nginx
    tag: 1.13.9-alpine
    digest: sha256:456
  buildtime-dependency:
  - registry: registry.hub.docker.com
    repository: library/node
    tag: "8"
    digest: sha256:123
  git: {}
- image:
    registry: acr
    repository: imagename
    tag: "mno"
    digest: sha256:789
  runtime-dependency:
    registry: registry.hub.docker.com
    repository: library/nginx
    tag: 1.13.9-alpine
    digest: sha256:456
  buildtime-dependency:
  - registry: registry.hub.docker.com
    repository: library/node
    tag: "8"
    digest: sha256:123
  git: {}

Run ID: **** was successful after 24m21s

As you can see from the logs, the task started at 2019/07/08 13:33:08 (Downloading source code...) and the build was done at 2019/07/08 13:36:21 (Successfully executed container: build), while pushing each tag takes between 3 and 6 minutes. If you are pushing several tags, this adds up very quickly. I think it shouldn't even take 3 minutes to push another tag...
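For context, a task like the one above corresponds roughly to a quick-build invocation of the following shape; the registry and image names are placeholders, and each -t tag produces its own push step even when every layer already exists in the registry, which is why the per-tag push times add up:

# Each -t/--image results in a separate "Pushing image: ..." step in the task log.
az acr build --registry myregistry \
    -t imagename:latest -t imagename:abc -t imagename:def .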

PeterRockstars commented 5 years ago

We regularly have the problem of extremely slow pushes to ACR. It seems to be more prevalent on Tuesdays. Could this have anything to do with the phenomenon known as "Patch Tuesday"?

akuryan commented 5 years ago

I am experiencing the same issue now; I will check whether it is better tomorrow (after "Patch Tuesday").

marko-k0 commented 5 years ago

I have issues today as well. :-(

germaino commented 5 years ago

Same issue for me; this is a nightmare.

dersia commented 5 years ago

This was yesterday, so not on Patch Tuesday. And as I said before, this is a push from ACR to ACR...

sajayantony commented 5 years ago

We are investigating this degradation of push and high latency.

ghost commented 5 years ago

I can confirm this situation in 2 different ACRs (in the Europe region). I'm trying to push/delete images and it's very slow, and my Kubernetes clusters can't pull images from there.

avaneerd commented 5 years ago

We're pushing images from Azure DevOps. Pushing takes anywhere from ~10 seconds up to 10 minutes. Yesterday and today are especially bad, going up to 20 minutes and sometimes even failing.

ghost commented 5 years ago

Can confirm it's really slow from Sweden to West Europe with a medium-sized image (109 MB). 10 minutes and counting... ^^

sajayantony commented 5 years ago

The current degradation should be mitigated; if not, please respond on this thread. We are conducting an RCA and will provide more information here.

akuryan commented 5 years ago

For me it is resolved now

sajayantony commented 5 years ago

Sharing a brief RCA for the benefit of those impacted: the region of impact was West Europe. Our systems detected an overall increase in latency from both our internal agents and external clients. As part of the diagnosis, we identified network connectivity to a hot-path component as the primary root cause; it has been mitigated, and postmortem backlog items are being addressed.

pmrochaubi commented 5 years ago

@sajayantony this is happening again...

JohannesEH commented 5 years ago

Yes, we are experiencing slow pushes and timeouts in our pipelines as well. Please reopen the issue.

PeterRockstars commented 5 years ago

We have also had this issue today. Does it have to do with the disruption in Azure Storage that's happening today?

jtdaugh commented 5 years ago

Still seeing this issue. < 1 Mbps pushes.

kwaazaar commented 5 years ago

We've been experiencing this for months. We are pushing from Azure DevOps, running build agents in AKS in West Europe, pushing to ACR in West Europe. Sometimes it takes 20 minutes, sometimes just 3 minutes (still too long, I think). Our build pipelines time out, which is getting very frustrating.

sajayantony commented 5 years ago

We have been seeing a steady load increase in WEU over the past few months, much more than in any other region, and it is still growing. We have reprioritized work on WEU to increase capacity and scale limits. It will take us a few weeks to address this; we are actively working on it.

SteveLasker commented 5 years ago

I just wanted to add that while Sajay mentioned it will take a few weeks, it's not that we're waiting to prioritize the work. Support issues are always our top priority. We discussed yesterday re-prioritizing other feature work to ensure we can support this very, very popular region. The work involves more diagnostic monitoring and putting in better barriers to ensure very busy customers don't impact other customers. Because registries are core to production workloads, we want to be very sure the work we do makes things better rather than introducing more instability. As we learn more, we'll roll the fixes out to WEU and all regions to ensure we leverage the investments.

As a mitigation, if you're running workloads in other regions, even on-prem, or have developers working outside of the WEU data center, you might consider leveraging the geo-replication capabilities of ACR to spread the load to other regions. We deeply apologize for any inconvenience this is causing and are working hard to address the issue.
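For anyone who wants to try that mitigation, a minimal sketch; the registry name and regions are placeholders, and note that geo-replication requires the Premium SKU:

# Geo-replication is a Premium SKU feature.
az acr update --name myregistry --sku Premium

# Add a replica in a region close to the clients that push/pull.
az acr replication create --registry myregistry --location eastus

# Verify the replicas.
az acr replication list --registry myregistry --output table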

divad4686 commented 5 years ago

For me it not only goes really slow, but a lot of the time it fails with a socket timeout error. It is becoming really hard to push images to ACR right now, and we can't deploy to production without this.

This is the error from the docker-compose push command:

Traceback (most recent call last):
  File "site-packages/urllib3/response.py", line 360, in _error_catcher
  File "site-packages/urllib3/response.py", line 442, in read
  File "http/client.py", line 449, in read
  File "http/client.py", line 483, in readinto
  File "http/client.py", line 578, in _readinto_chunked
  File "http/client.py", line 546, in _get_chunk_left
  File "http/client.py", line 506, in _read_next_chunk_size
  File "socket.py", line 586, in readinto
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "bin/docker-compose", line 6, in <module>
  File "compose/cli/main.py", line 71, in main
  File "compose/cli/main.py", line 127, in perform_command
  File "compose/cli/main.py", line 771, in push
  File "compose/project.py", line 596, in push
  File "compose/service.py", line 1228, in push
  File "compose/progress_stream.py", line 112, in get_digest_from_push
  File "compose/progress_stream.py", line 25, in stream_output
  File "compose/utils.py", line 62, in split_buffer
  File "compose/utils.py", line 38, in stream_as_text
  File "site-packages/docker/api/client.py", line 311, in _stream_helper
  File "site-packages/urllib3/response.py", line 459, in read
  File "contextlib.py", line 99, in __exit__
  File "site-packages/urllib3/response.py", line 365, in _error_catcher
urllib3.exceptions.ReadTimeoutError: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out.

evictor commented 5 years ago

Currently experiencing this as well; it is blocking production operations for us and crippling some build processes with timeouts. We may have to set up our own private registry and abandon ACR. 😬

sajayantony commented 5 years ago

@evictor - West Europe is facing large volumes and we are actively investigating and trying to mitigate this first. Do you have any option to try another region or is all your workload in WEU?

evictor commented 5 years ago

Sorry, I should have mentioned that my affected registry is actually in East US.

jtdaugh commented 5 years ago

Same - pushing to East US

sajayantony commented 5 years ago

@evictor @jtdaugh - this was mitigated for EUS. Given that ACR is regional, would you be able to open another issue if you see problems in EUS, since we are tracking a WEU workload on this thread? Sorry about the confusion and thank you for understanding.

sajayantony commented 5 years ago

The latency issue was diagnosed and a fix was rolled out to all affected regions, not just to West Europe. We apologize for the inconvenience and are taking steps that will reduce our time to detection and mitigation. All operation latencies are back within acceptable ranges.

ina-stoyanova commented 4 years ago

Hey, unfortunately we can see this happening again for some cases in the US East region. While it used to take up to 1-1.5 minutes, it now times out with a 502 Bad Gateway, without us changing anything in the meantime.

(screenshot: 502 Bad Gateway error)

Anyone else experiencing issues?

msyihchen commented 4 years ago

Apologies for the inconvenience. Our storage accounts in the US East region had an issue earlier today. It has been mitigated. Can you confirm the issue is gone? @ina-stoyanova //cc @djyou @sajayantony

ina-stoyanova commented 4 years ago

@msyihchen It's been resolved for now for the US East region. Thank you!

sajayantony commented 4 years ago

Thanks for confirming @ina-stoyanova. We are closing this issue for now. Please do open a new item if you experience any issues.

Tynamix commented 4 years ago

In EUW it's very, very slow right now!

SUNIL23891YADAV commented 4 years ago

Even I am facing the issue in West Europe as of now.

jageshmaharjan commented 4 years ago

Pushing from the Southeast Asia region to West US, and it's been terrible.

(myenv) jugs@jugs:~/Desktop/code/preprocess$ docker push {REGISTRY_PATH}/preprocess:${VERSION_TAG}
The push refers to repository [tfxregistry.azurecr.io/preprocess]
7fe7f21067a7: Pushed
58b3972e9d31: Pushed
f2301ac566bf: Pushed
bf2ff0040125: Pushed
ab3abbf3d1c7: Pushed
61054ece94b3: Pushed
f67191ae09b8: Pushed
b2fd8b4c3da7: Pushed
fff996520c51: Pushing   11.13MB
50841064ee09: Pushing   84.26MB/89.58MB
fea1675c2731: Pushing   135.3MB/223.3MB
0de2edf7bff4: Pushing   84.86MB/117.2MB
3ff08d10f5e6: Pushing   147.9MB/1.234GB
3d0362cae756: Pushing   57.35MB/1.633GB
ac842eb6d3b2: Waiting

SteveLasker commented 4 years ago

Hi @jageshmaharjan, pushing across regions has many challenges, largely outside our control, as you're literally on the public internet. This is where geo-replication can be really useful: you push locally, ACR does the replication once, behind the scenes, and you get regional webhooks when the image arrives in the other regions.
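A sketch of the regional-webhook part, assuming a geo-replicated registry; the webhook name, URI, and region are placeholders, and the --location flag (which scopes the webhook to a specific replica region) should be double-checked against the current az acr webhook documentation:

# Fire a notification when images arrive in the eastus replica.
az acr webhook create \
    --registry myregistry \
    --name pusheastus \
    --location eastus \
    --uri https://example.com/acr/eastus \
    --actions push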

jageshmaharjan commented 4 years ago

Changed the region; it seems to be fine now. Thanks @SteveLasker

Edblakejitsuin commented 4 years ago

Getting "socket.timeout: timed out" with ACR in West Europe all day today.

bramkl commented 4 years ago

Hi all, I have an ACR in the 'West Europe' region and deploy my images from an Azure DevOps pipeline. Today (all day) I have been getting timeouts in my automated build when the 'Azure IoT Edge - Push Module Images' task runs. (All timeouts in DevOps are configured to 90 minutes; the timeout occurs after 15 minutes.) This worked fine last week. Any thoughts?

1d1247eadd4f: Pushing [==================================================>]  303.9MB
1d1247eadd4f: Pushing [==================================================>]  306.2MB
1d1247eadd4f: Pushing [==================================================>]  308.4MB
1d1247eadd4f: Pushing [==================================================>]  310.6MB
1d1247eadd4f: Pushing [==================================================>]  312.8MB
1d1247eadd4f: Pushing [==================================================>]  314.9MB
1d1247eadd4f: Pushing [==================================================>]  316.6MB
1d1247eadd4f: Pushing [==================================================>]  316.9MB
1d1247eadd4f: Pushed
ERROR: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out.

[error]Error: The process '/usr/local/bin/iotedgedev' failed with exit code 1

Finishing: Azure IoT Edge - Push module images

Edblakejitsuin commented 4 years ago

Also getting timeout errors again today: ACR in West Europe, running docker-compose push from Azure DevOps.