gliderlabs / docker-alpine

Alpine Linux Docker image. Win at minimalism!
http://gliderlabs.viewdocs.io/docker-alpine
BSD 2-Clause "Simplified" License

Temporary error (try again later) #334

Open · rommik opened 7 years ago

rommik commented 7 years ago

I ran into this issue while building a Docker image on an Ubuntu host. The same build on Windows 10 using the docker-tools CLI (so technically inside a VirtualBox VM) has no issues, and the Docker images are built correctly.

I have completely reinstalled Docker on my Ubuntu machine to get a fresh version, just in case. Any suggestions on what else I can do to troubleshoot this issue?

Build command output

Sending build context to Docker daemon  572.4kB
Step 1/15 : FROM alpine:edge
edge: Pulling from library/alpine
cc5efb633992: Pull complete 
Digest: sha256:2b796ae57cb164a11ce4dcc9e62a9ad10b64b38c4cc9748e456b5c11a19dc0f3
Status: Downloaded newer image for alpine:edge
 ---> f96c4363411f
Step 2/15 : RUN apk add --update nodejs
 ---> Running in 12865b082f34
fetch http://dl-cdn.alpinelinux.org/alpine/edge/main/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/edge/main: temporary error (try again later)
WARNING: Ignoring APKINDEX.066df28d.tar.gz: No such file or directory
fetch http://dl-cdn.alpinelinux.org/alpine/edge/community/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/edge/community: temporary error (try again later)
WARNING: Ignoring APKINDEX.b53994b4.tar.gz: No such file or directory
ERROR: unsatisfiable constraints:
  nodejs (missing):
    required by: world[nodejs]
The command '/bin/sh -c apk add --update nodejs' returned a non-zero code: 1

My dockerfile

FROM alpine:edge
RUN apk add --update nodejs

# Create an app directory
RUN mkdir -p /app
WORKDIR /app

# Create some useful folders
RUN mkdir -p /data

# Copy the app's build artifacts into the image
COPY main.js /app
COPY package.json /app
COPY version.json /app
COPY dist /app/dist
COPY data /data
COPY app_modules /app/app_modules
COPY node_modules /app/node_modules

# Expose the app port
EXPOSE 5555

# Start the app
#ENV NODE_ENV=development
#ENV debug=true

WORKDIR /app

# Comment this out if you need a console in the container
CMD [ "node", "main.js" ]
dawhc commented 2 years ago

I used the host network mode to run alpine and the issue was solved:

docker run -it --network host alpine
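
The same flag works at build time as well, which is what several later comments settled on (a sketch; the myimage tag is a placeholder):

docker build --network host -t myimage .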
codeninja-ru commented 2 years ago

This helped me: https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0#musl_1.2

There are several solutions there.

In the end I had to download Docker's default seccomp profile (default.json), change one line, and pass --security-opt=seccomp=default.json to docker run.
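
A sketch of that workaround (the defaultAction edit is my reading of the release notes; verify the exact line against the wiki page before relying on it):

# Fetch Docker's default seccomp profile
curl -LO https://raw.githubusercontent.com/moby/moby/master/profiles/seccomp/default.json
# Edit default.json: change "defaultAction": "SCMP_ACT_ERRNO" to "SCMP_ACT_TRACE"
# so musl 1.2's time64 syscalls are not rejected on an old libseccomp
docker run --security-opt seccomp=default.json -it alpine apk update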

lovecoding-git commented 2 years ago

@andyshinn, RE: #334 (comment)

It was a DNS error for me. By setting /etc/docker/daemon.json with,

{
  "dns": ["8.8.8.8"]
}

and then restarting docker with,

sudo service docker restart

I was able to build images again.

On Windows, edit

C:\Users\Administrator\.docker\daemon.json

and add this line:

{
    ...
    "dns": ["8.8.8.8"]
}
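
After the restart, a quick sanity check that container DNS now resolves the Alpine CDN (a sketch using busybox's nslookup):

docker run --rm alpine nslookup dl-cdn.alpinelinux.org
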
fullbl commented 1 year ago

It happens for me if I run docker compose up, but it doesn't if I run docker-compose up.

blockloop commented 1 year ago

My fix for Drone was marking the repository as Trusted under the repo settings in Drone and then adding this to the Docker build steps:

steps:
  - name: docker
    image: plugins/docker
    network_mode: host   # <----- this
Startouf commented 1 year ago

A similar error happens to me when I try to build Docker images from within my Kubernetes cluster, but my workflow is more complex.

  1. Locally, I build a small Docker image with awscli/docker/bash/git:
FROM docker:latest
# Update & Upgrade OS
RUN apk update
RUN apk upgrade
# https://github.com/aws/aws-cli/issues/4971#issuecomment-1330633153
COPY --from=devopscorner/aws-cli:latest /usr/local/aws-cli/ /usr/local/aws-cli/
COPY --from=devopscorner/aws-cli:latest /usr/local/bin/ /usr/local/bin/
RUN apk add bash git
  2. My Jenkins (itself deployed on a Kubernetes cluster) has pipelines that spawn pods which use this image:
node(POD_LABEL) {

  stage('Clone') {
    git url: 'https://github.com/example/example/', branch: '${build_branch_name}', credentialsId: 'github-app'

    container('aws-dockerizer') {
      stage('Build and deploy') {
        withAWS(credentials: 'aws-credentials', region: 'eu-central-1') {
          sh '''#!/usr/bin/env bash
            git config --global --add safe.directory ${WORKSPACE}
            scripts/build_and_push_docker_image_to_aws.sh
          '''
        }
      }
    }
  }  

}
  3. The script runs a docker build step that builds my application image with some dependencies; here is an extract of the Dockerfile being built:
FROM ruby:3.1.2-alpine
RUN apk add --update --no-cache binutils-gold build-base curl file g++ gcc git less libc-dev htop libffi-dev libgcrypt-dev libstdc++ libxml2-dev libxslt-dev linux-headers make netcat-openbsd nodejs yarn openssl pkgconfig postgresql-dev ruby-full tzdata

I run into the following errors from the Jenkins-spawned pod:

#6 4.372 fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/main/x86_64/APKINDEX.tar.gz
#6 9.404 fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/community/x86_64/APKINDEX.tar.gz
#6 9.404 WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.16/main: temporary error (try again later)
#6 14.41 WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.16/community: temporary error (try again later)

So I don't really have sysadmin-like access; my image is built from a pod that talks to Docker via a volume mount on the Docker socket, which makes it hard to apply any OS or config patch.

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: aws-dockerizer
    image: example/aws-dockerizer:0.1.3
    command: ['cat']
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
Startouf commented 1 year ago

Since there are times when the bridge network is disruptive, it succeeds if you run it on the host network: docker build -t hoge:latest . --network=host

After struggling for so long with what I thought was a Kubernetes-specific docker-in-docker issue, I ended up adding --network=host to my script's commands and it fixed it. I should have tried that first instead of going nuts.
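
Concretely, the change is a one-liner in the build script (a sketch; the script path comes from the pipeline above, and IMAGE_TAG is a placeholder):

# In scripts/build_and_push_docker_image_to_aws.sh: build on the host network
docker build --network=host -t "$IMAGE_TAG" .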

overbost commented 1 year ago

I have the same issue on a "slow" internet connection. It is only "slow" for Docker; maybe the timeout behind "temporary error" is set too low.

exotexot commented 1 year ago

Still happening...

axisofentropy commented 11 months ago

I also had this issue randomly building ingress-nginx images on Docker Desktop.

0.260 fetch https://dl-cdn.alpinelinux.org/alpine/v3.18/main/x86_64/APKINDEX.tar.gz                  
5.295 WARNING: updating https://dl-cdn.alpinelinux.org/alpine/v3.18/main: temporary error (try again later)                                                                                               
5.295 fetch https://dl-cdn.alpinelinux.org/alpine/v3.18/community/x86_64/APKINDEX.tar.gz
10.41 WARNING: updating https://dl-cdn.alpinelinux.org/alpine/v3.18/community: temporary error (try again later)

I added the "dns": ["8.8.8.8"] line to the Docker Desktop config and now it builds consistently.

dvdn commented 11 months ago

Same issue trying to build python:3.11-alpine. Solved by adding another DNS server, like '9.9.9.9' (Quad9), to my existing list in /etc/docker/daemon.json:

{
        "dns": ["42.xx.xx.42", "9.9.9.9"]
}

as described in the previous comment: https://github.com/gliderlabs/docker-alpine/issues/334#issuecomment-1135351701

build630 commented 7 months ago

Any recommendations to address this?

So far, I have tried:

  1. Enabling UnsafeLegacyRenegotiation in /etc/ssl/openssl.cnf
  2. Adding DNS to /etc/docker/daemon.json
ashikzubair-pge commented 7 months ago

Since there are times when the bridge network is disruptive, it succeeds if you run it on the host network: docker build -t hoge:latest . --network=host

After struggling for so long with what I thought was a Kubernetes-specific docker-in-docker issue, I ended up adding --network=host to my script's commands and it fixed it. I should have tried that first instead of going nuts.

Where did you add the network option? Is that in the Jenkins agent config command?

Zerwin commented 5 months ago

Ignore everything below; it was an IPv6 problem, not related.

Figured out what caused it for me: I had enabled IPv6 on my host, which caused the only DNS server in /etc/resolv.conf to also be IPv6. Disabling IPv6 again let the build run through. I could pretty much consistently trigger a failing or a successful build by switching between an IPv6 DNS server and an IPv4 DNS server.

Some more info: OS: Debian 12. OCI runtime: Podman. The relevant network change was sudo nano /etc/network/interfaces and then adding

iface eth0 inet6 dhcp
        dhcp6-pd yes

(Replace eth0 with your actual network interface.) Then just restart networking (sudo service networking restart).

As soon as you fetch the files once, it won't reproduce until you clear all the build caches.
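
A quick way to check whether the same applies to you (a sketch; output varies by setup):

# If the only nameserver entry here is an IPv6 address, apk's index
# fetch inside containers can fail exactly as described above.
cat /etc/resolv.conf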

OreoProMax commented 4 months ago

I had this error with GitLab. I just changed the docker build command to run with the --network host option and it works!

ygxxii commented 4 months ago

In my case, I got this error when the ca-certificates package was not installed and /etc/apk/repositories was set to an HTTPS mirror, e.g.:

https://mirrors.aliyun.com/alpine/v3.18/main
https://mirrors.aliyun.com/alpine/v3.18/community

To work around it, use the HTTP mirror:

RUN sed -e 's/https:\/\/dl-cdn.alpinelinux.org/http:\/\/mirrors.aliyun.com/g' \
        -i /etc/apk/repositories \
    && apk add --no-cache tzdata
virtualbeck commented 2 months ago

I used a borrowed retry function to get past this:

# Retry a command with exponential backoff: retry <max-attempts> <command...>
function retry {
  local retries=$1
  shift

  local count=0
  until "$@"; do
    exit=$?
    # Back off exponentially: 1s, 2s, 4s, ...
    wait=$((2 ** count))
    count=$((count + 1))
    if [ "$count" -lt "$retries" ]; then
      echo "Retry $count/$retries exited $exit, retrying in $wait seconds..."
      sleep "$wait"
    else
      echo "Retry $count/$retries exited $exit, no more retries left."
      return $exit
    fi
  done
  return 0
}
retry 10 apk add --no-cache aws-cli jq
aliceisjustplaying commented 4 days ago

Another data point for those who stumble upon this thread: my issue was with Tailscale; I needed to turn off stateful filtering with tailscale up --stateful-filtering=false

See https://tailscale.com/s/stateful-docker