ajhalili2006 opened this issue 2 years ago
Thanks @ajhalili2006 -- cache invalidation is hard :)
I wonder if using something similar to the Kubernetes image pull policy would help. From https://kubernetes.io/docs/concepts/containers/images/#imagepullpolicy-defaulting:
When you (or a controller) submit a new Pod to the API server, your cluster sets the imagePullPolicy field when specific conditions are met:
- if you omit the imagePullPolicy field, and the tag for the container image is :latest, imagePullPolicy is automatically set to Always;
- if you omit the imagePullPolicy field, and you don't specify the tag for the container image, imagePullPolicy is automatically set to Always;
- if you omit the imagePullPolicy field, and you specify the tag for the container image that isn't :latest, the imagePullPolicy is automatically set to IfNotPresent.
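Translated into a pod spec, the defaulting rules above look like this (a hypothetical minimal example; the image name is illustrative):

```yaml
# Because the tag is :latest and imagePullPolicy is omitted, the API server
# defaults imagePullPolicy to Always, i.e. the kubelet re-checks the registry
# on every pod start instead of trusting its local cache.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: ghcr.io/example/app:latest
      # imagePullPolicy: Always   # what the defaulting fills in
```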
@ajhalili2006 could you validate you're still seeing over-aggressive caching of tagged images? Please provide an example with a .gitpod.yml and your image: specification. thanks
Sorry for the late reply, but yes, it's still cached if I'm using the `latest` tag.
Speaking of the image spec, I made some changes to the Dockerfile to help in debugging the situation, which involves using build arguments[^1] with some useful information accessible through either `docker inspect` or `env | grep <prefix>` on the container.

[^1]: Currently I use plain `docker build` here, but if I use Dazzle, then I need to write a script to handle these as per https://github.com/gitpod-io/dazzle/issues/37.
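As a rough sketch of the build-argument approach (the `ARG`/`ENV` names here are hypothetical, not the actual ones in the Dockerfile):

```dockerfile
# Hypothetical sketch: pass build metadata as ARGs and surface them as ENVs,
# so they are visible both via `docker inspect` on the image and via
# `env | grep GITPODIFY_` inside a running container.
ARG GITPODIFY_GIT_SHA=unknown
ARG GITPODIFY_BUILD_DATE=unknown
ENV GITPODIFY_GIT_SHA=${GITPODIFY_GIT_SHA} \
    GITPODIFY_BUILD_DATE=${GITPODIFY_BUILD_DATE}
```

At build time the values would be supplied with something like `docker build --build-arg GITPODIFY_GIT_SHA=$(git rev-parse HEAD) .`.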
And on the Gitpod config: https://gitlab.com/gitpodify/gitpodified-workspace-images/-/blob/7adc206c76e82ea228a6e6e9d651d665e054dc56/.gitpod.yml
Passing to the workspace team for tracking alongside work on new workspace image builds and replacement of the `latest` tag.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hey all,
I don't think this issue should be closed. `latest` images are still being cached incorrectly, and this is not behavior that anyone would expect given the default imagePullPolicy.
Hey @jkaye2012! Let me reopen this and loop in some fellow team members from the corresponding team in case this is something worth investigating, triaging, or updating. Cc @kylos101 @atduarte
Thank you. This has bitten us a few times over the past few months. Changing our base image isn't a very frequent operation, but whenever we do change it, we end up in a situation where our pods fail in sporadic ways for multiple days, as we cannot rely on the new version reliably running for new pods.
Hey @gtsiolis thank you for reopening! To follow our groundwork process, I also added this issue to our inbox.
Hey all, any progress on this? Currently we have an image that has been cached for 9 days now. We have not been able to find any way to get around this without changing the tag (which is not something that we want to do as it would require multiple commits any time that our base image is updated).
Same issue here, even a button to manually clear cache at an account level would be greatly appreciated.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This still requires attention.
Just spent a good few hours trying to figure out why (even with incremental prebuilds turned off) the latest version of our image wasn't being used.

This could really do with improvement, and not just for `latest`: we use branch-name tags like `php-8.1`, the contents of which change over time without a new image tag.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue shouldn't be considered stale.
Is there any update on this? It's been over a year now that GitPod is not adhering to very basic image caching conventions. `latest` should never be cached. Every time we make an image change, our developers see random failures until the faulty cache is flushed.
👋 @jkaye2012 sorry about that. For now I could suggest to:
Forced rebuilds do not help! It will still use the cached version of the base image.
And the image we are using does not have tags other than latest.
So what now? This is a huge issue!
Hi @jpfeuffer, which base image are you using?
My own: ghcr.io/openms/contrib:latest
@jpfeuffer cool. Can you please also share the contents of your `.gitpod.yml` and `.gitpod.Dockerfile` (if it exists)? Or even a link to your public repo would work.
@jpfeuffer thanks for sharing your image address.
You could use the sha256 digest of your image instead; I copied it from https://github.com/openms/contrib/pkgs/container/contrib

When using it directly from `.gitpod.yml`:

```yaml
image: ghcr.io/openms/contrib@sha256:ab301bf0858923b5c14349b38e5796bf341a838141eea077048a1df3fcc935be
```

When using a custom Dockerfile, `.gitpod.yml`:

```yaml
image:
  file: .gitpod.Dockerfile
```

`.gitpod.Dockerfile`:

```dockerfile
FROM ghcr.io/openms/contrib@sha256:ab301bf0858923b5c14349b38e5796bf341a838141eea077048a1df3fcc935be
# Do more stuff ....
```

Tip: Run the `gp validate` command to quickly test.
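For completeness, a hypothetical shell sketch of constructing the digest-pinned reference (the digest is the one quoted above; in practice it could be looked up with a tool such as `crane digest` or `skopeo inspect`, which are assumptions here and not shown running):

```shell
# Hypothetical sketch: build a digest-pinned image reference for .gitpod.yml,
# so a stale cached `latest` can never be served.
# The digest below is the one quoted above for ghcr.io/openms/contrib.
image="ghcr.io/openms/contrib"
digest="sha256:ab301bf0858923b5c14349b38e5796bf341a838141eea077048a1df3fcc935be"
printf 'image: %s@%s\n' "$image" "$digest"
```

The printed line can be pasted directly into `.gitpod.yml`; the trade-off is that the digest must be refreshed by hand whenever the base image is rebuilt.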
This is not a solution to this problem. This has been outstanding for well over a year at this point. What is GitPod working towards to fix this? It's a serious issue that flies in the face of Docker and k8s best practices and documentation.
@axonasif I see. Yes, I might use that hack, but ideally I don't want to change the hash manually every time my base image is updated. I'm okay with pressing a button on your web interface if you need to save bandwidth. However, this image wasn't updated by Gitpod for a very long time, so I'm wondering if Gitpod ever pulls new versions without manual intervention.
About this Issue

On projects using custom workspace images, either through the `image` key in the configuration file or through a configured custom workspace Dockerfile, when Gitpod first encounters a workspace image that is not in its local registry proxy (assuming container image repositories are checked by tag), it pulls the image from whatever registry it is located in and then caches it aggressively.

In my case, I maintain a fork of `gitpod-io/workspace-images` and use Red Hat Quay Container Registry's built-in image builder for all the images within the `quay.io/gitpodified-workspace-images/*` namespace (instead of using Dazzle in GitLab CI, where I currently implement ShellCheck + Hadolint checks), and then use those images in my own projects. The problem is that whenever I update the config file, Gitpod uses the cached version of the workspace image (possibly to save bandwidth and to avoid rate limits for unauthenticated pulls, as on Docker Hub, though maybe not on other registries) and things descend into chaotic errors.

Suggestion
- A boolean option called **Pull latest manifest** on **Run Prebuild**; when ticked, it will pull the latest image manifest first.
- A `pullLatestManifest=true` URL parameter on both manual prebuild URLs and webhook endpoints (e.g. `https://gitpod.io/#prebuild/https://gitlab.com/gitpodify/gitpodified-workspace-images?pullLatestManifest=true` and `https://gitpod.io/apps/gitlab/?pullLatestManifest=true`).
Workarounds

Like the Gitpod team is doing, I can change the `image` key every time and wait for prebuilds to finish. Currently, there's no prefix for branches yet because I only ticked the **Tag manifest with the branch or tag name** box and there is no additional tag template in the form of `branch-${parsed_ref.branch}`, as I reproduced below.