mabey01 opened this issue 4 years ago
Thanks for opening this @mabey01 - can you share your Dockerfiles?
The way artifact caching works is that we calculate the source digest based on the source files that are dependencies of the Dockerfile ADD/COPY commands.
Does running a simple skaffold build twice (without any changes) also result in two rebuilds?
Having the same issue:
VERSION=$VERSION_TO_BUILD skaffold build -f skaffold.yaml $services
VERSION=latest skaffold build -f skaffold.yaml $services
apiVersion: skaffold/v2beta1
kind: Config
build:
  artifacts:
    - image: xxxx.dkr.ecr.xxx.amazonaws.com/sagahead-io/hector/api-gateway
      context: .
      kaniko:
        dockerfile: microservices/api-gateway/Dockerfile
    - image: xxxx.dkr.ecr.xxx.amazonaws.com/sagahead-io/hector/api-gateway-deprecated
      context: .
      kaniko:
        dockerfile: microservices/api-gateway-deprecated/Dockerfile
    - image: xxxx.dkr.ecr.xxx.amazonaws.com/sagahead-io/hector/api-gateway-reverse-proxy
      context: .
      kaniko:
        dockerfile: microservices/api-gateway-reverse-proxy/Dockerfile
  cluster:
    dockerConfig:
      secretName: jenkins-docker-cfg
      namespace: jx
  tagPolicy:
    envTemplate:
      template: '{{.IMAGE_NAME}}:{{.VERSION}}'
deploy:
  kubectl: {}
Skaffold builds twice; as I understand it, caching from the remote registry is not working.
Having the same issue. I'm tagging all my images with the same tag. For example this Docker image:
FROM golang:1.13.6-alpine3.11 AS builder
COPY xml-envsubst.go .
ENV BUILD_DEPS="gettext=0.20.1-r2" \
RUNTIME_DEPS="libintl=0.20.1-r2" \
CGO_ENABLED=0 \
GOOS=linux
RUN apk add --update $RUNTIME_DEPS && \
apk add --virtual build_deps $BUILD_DEPS && \
cp /usr/bin/envsubst /usr/local/bin/envsubst && \
apk del build_deps
RUN go build -ldflags '-w -s' -a -o /go/bin/xml-envsubst && chmod +x /go/bin/xml-envsubst
FROM alpine:3.11
COPY --from=builder /go/bin/xml-envsubst /bin/xml-envsubst
COPY --from=builder /usr/lib/libintl.so.8 /usr/lib/
COPY --from=builder /usr/lib/libintl.so.8.1.6 /usr/lib/
COPY --from=builder /usr/local/bin/envsubst /bin/envsubst
It's always generated with the same tag:
myregistry.com/deployment/xml-envsubst:sit@sha256:1a6b5388f3fad3a60a42b231f9fee75139345e2e41954c72d37b008478144a30
But skaffold never uses the remote cache:
2020-04-17 15:25:40 Generating tags...
2020-04-17 15:25:40 - xml-envsubst -> myregistry.com/deployment/xml-envsubst:sit
2020-04-17 15:25:40 Checking cache...
2020-04-17 15:25:53 - xml-envsubst: Not found. Building
Edit: I have this issue if I use kaniko (the skaffold cache file is always empty). If I use dockerCli then the cache works as expected.
@mabey01 seeing this issue with kaniko makes sense. kaniko builds images on the cluster. If you want to enable caching, you can use --cache-repo=gcr.io/project: kaniko will cache intermediate layers and push them to the remote cache gcr.io/project.
The next time kaniko builds the image, it will check whether a layer already exists in the remote cache.
With kaniko, the remote cache is not synced with the local skaffold cache.
From the skaffold logs it will therefore appear that the cache is not used, and maybe this is something the skaffold team can work on.
However, when the image build happens in the kaniko pod with --cache=true and --cache-repo=<>, the pod logs will show the cache being used for unchanged layers.
Hope that explains things a bit.
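For anyone looking for the config-side equivalent, here is a minimal sketch (not from this thread) of turning on the kaniko layer cache from skaffold.yaml; the image name and cache repo are placeholders, and as I understand the schema the cache.repo field is what maps to kaniko's --cache-repo flag:
apiVersion: skaffold/v2beta1
kind: Config
build:
  artifacts:
    - image: gcr.io/project/my-image    # placeholder image name
      kaniko:
        dockerfile: Dockerfile
        cache:
          repo: gcr.io/project/cache    # remote repo for cached layers (kaniko --cache-repo)
  cluster: {}                           # kaniko builds run on the cluster
With this, rebuilds of unchanged layers should be served from the remote cache repo, even though the skaffold-side cache check may still report a miss.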
I use Skaffold v2.0.4 in 2023 and also have this issue. This would be such a nice feature to speed up development and CI/CD.
I tried to work around the problem with the command
skaffold build -q --dry-run | jq '.builds[].tag' -r | xargs -I {} docker pull {} || true
which pulls all images beforehand if they exist.
It worked like a charm: after running it, the images with the exact tags are downloaded and present locally.
But even this workaround does not impress Skaffold. Skaffold still reports Not found. Building :/
I prepared a private GitHub repo that provides steps to reproduce this issue: @balopat https://github.com/ngraf/skaffold-issue-3849
@ngraf et al., if you are having this issue using a local docker builder, you can try the TreeSha or inputDigest tagger together with tryImportMissing. I notice that you are manually specifying the tags, though - asking skaffold to find a cache entry for an image it didn't build. Not sure how well that will work.
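In case it helps, a minimal sketch of that suggestion for a local docker build; the apiVersion and image name are placeholders, and inputDigest / tryImportMissing are the schema fields as I understand them (for the TreeSha option you would instead use the gitCommit tagger with variant: TreeSha):
apiVersion: skaffold/v3        # adjust to your skaffold version
kind: Config
build:
  tagPolicy:
    inputDigest: {}            # tag derived from the artifact's source inputs
  artifacts:
    - image: my-image          # placeholder
  local:
    tryImportMissing: true     # try pulling a matching remote image before rebuilding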
I would suggest closing this issue, as the original author @mabey01 hasn't replied in 3 years and the other comments use a variety of different builders. AFAICT caching is working as intended.
I'm trying to speed up build time for my project. I have a monorepo including frontend and backend and use the TreeSha tagger to detect changes made to each component individually. If I make frontend changes, the backend is tagged with the same tag as before and should therefore skip the build, but it doesn't.
Expected behavior
After tagging, skaffold checks the cache to see if the image (same tag) already exists and skips the build.
Actual behavior
skaffold doesn't find the cached image in the remote registry and builds it every time (same tag). When I check in the Google Cloud registry I can find the image with the exact same tag.
Information
Steps to reproduce the behavior
skaffold build -p production --default-repo=gcr.io/$PROJECT_ID