0x-shanks opened this issue 3 years ago
It only looks at the repo:tag, so it will only see that "alpine:test" is already there.

```
docker images --format {{.Repository}}:{{.Tag}}
```

There is no matching "AlwaysPullImages" option (yet?); it assumes unique tags. There is some old broken code to check the id/digest.
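For illustration, the name-only comparison amounts to something like this minimal sketch (a reconstruction, not minikube's actual code; the function name is hypothetical):

```go
package main

import "fmt"

// imageExists sketches the name-only check: an image "already exists"
// if its repo:tag appears in the daemon's listing, so a re-tagged
// image with a different ID is never detected as changed.
func imageExists(daemonImages []string, ref string) bool {
	for _, repoTag := range daemonImages {
		if repoTag == ref { // compares "alpine:test" only, never the ID
			return true
		}
	}
	return false
}

func main() {
	listed := []string{"alpine:3.11", "alpine:test"} // from `docker images`
	fmt.Println(imageExists(listed, "alpine:test"))  // true, even after re-tagging
}
```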
I see, so even if the image ID changes, as long as the repo:tag is the same, reloading won't help, and if you want to use a different image you have to add a unique tag?
So when is reload used, then?
There might be a bug (a change of behaviour) in the "cache" command, after the preload was added... It now checks the image list (repo:tag) even before comparing the Id and/or Digest at all.

```
Images are preloaded, skipping loading
```

It would also add any missing images, or any that were added after the cluster was initially created.
The problem here is that an image on disk doesn't have any Id or Digest; that is added by the daemon/registry. So you need to either load it locally (eew) or talk to the registry (eeew), whichever is worse (we do both of them).
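Roughly, those two lookups look like this with go-containerregistry (a sketch, assuming a reachable daemon and registry; error handling trimmed):

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	ref, err := name.ParseReference("alpine:test")
	if err != nil {
		log.Fatal(err)
	}

	// "load it locally": ask the Docker daemon, which is what assigns the Id
	if img, err := daemon.Image(ref); err == nil {
		id, _ := img.ConfigName() // the daemon-side "Image ID"
		fmt.Println("id:", id)
	}

	// "talk to the registry": fetch the manifest, which carries the Digest
	if img, err := remote.Image(ref); err == nil {
		digest, _ := img.Digest() // the registry-side digest
		fmt.Println("digest:", digest)
	}
}
```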
There are also some compatibility issues with crane and the library behind it: https://github.com/google/go-containerregistry/issues/890
At the moment, we can't even agree on the name of the container image. Much less on the actual contents.
The short-term fix will probably be to re-visit the preload code and maybe add an --always flag to minikube cache.
That is assuming the daemon/registry code for the digest even works; I have some doubts about that.
@afbjorklund Thank you for your help. I see that the Id or Digest would have to be added to the image list, not just the repo:tags.

> Short-term fix will probably be to re-visit the preload code and maybe add an --always flag to minikube cache.

As you said, that is a short-term fix, but I would like to try implementing it this way. I have never contributed to minikube, though, so I don't know how difficult it would be. Or would it be better to wait quietly until the Id or Digest is added?
The sad thing is that neither Id nor Digest will work with all container runtimes and also offline (without a registry)...
I added a new feature to clean up the patched library; perhaps we can use this particular issue to go past the name-only comparison?
Hello @afbjorklund
I'm in the same boat. Do you recommend any workarounds? I see the cached file in `~/.minikube/cache/images/localhost_5000` (`localhost_5000` is my local registry). When I run `minikube cache delete localhost:5000/repo:tag`, it does remove that file. But if I do `minikube cache add localhost:5000/repo:tag`, it will still add the previous image even though the image is updated in the local registry. So it seems like it is caching the old image somewhere else too.
If the image already exists in the docker daemon, it will not pull anything from the registry.

1) https://pkg.go.dev/github.com/google/go-containerregistry/pkg/v1/daemon#Image
2) https://pkg.go.dev/github.com/google/go-containerregistry/pkg/v1/remote#Image

It actually always checks with dockerd first, even if there is no such thing running on the host.
```
# daemon.Image
retrieving image: busybox:latest
daemon lookup for busybox:latest: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

# remote.Image
cache image "busybox:latest" -> "/home/anders/.minikube/cache/images/busybox" took 2.80001642s
save to tar file busybox -> /home/anders/.minikube/cache/images/busybox succeeded
```
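That log matches a daemon-first lookup along these lines (a sketch of the order of operations, not the exact minikube code; the package and function names are hypothetical):

```go
package cache

import (
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// retrieveImage tries the local dockerd first and only falls back to
// the registry when that fails, which is why a missing daemon shows
// up in the log before the remote fetch succeeds.
func retrieveImage(ref name.Reference) (v1.Image, error) {
	if img, err := daemon.Image(ref); err == nil {
		return img, nil // found via the Docker daemon
	}
	return remote.Image(ref) // daemon unavailable or image missing
}
```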
So the current workaround would be `docker pull localhost:5000/repo:tag`. That will update the image in the daemon with the new one from the registry.
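For anyone scripting that refresh with go-containerregistry, the programmatic equivalent would be roughly this (a sketch; the registry address is just the example from above):

```go
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	// equivalent of `docker pull localhost:5000/repo:tag`: fetch the
	// fresh image from the registry and overwrite the daemon's copy
	tag, err := name.NewTag("localhost:5000/repo:tag")
	if err != nil {
		log.Fatal(err)
	}
	img, err := remote.Image(tag)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := daemon.Write(tag, img); err != nil {
		log.Fatal(err)
	}
}
```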
@afbjorklund Thanks for the response. Btw, I see a new warning message on minikube v1.18.1:

```
"minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
```

But the docs don't mention `minikube image load`: https://minikube.sigs.k8s.io/docs/handbook/pushing/
@seperman: I'm actually not sure why @priyawadhwa wrote that "cache" would be deprecated (e8945f293448b3a4c84875ee12e4c81b314c2daa).
They don't do exactly the same thing, even if they do share some code. The "cache" command has a memory (the image list kept in the config).
However, the "image load" and "image build" commands now have slightly better documentation (placeholders).
@priyawadhwa Would you please explain why you committed the warning that 'cache add' would be deprecated, as per the above? Your help is appreciated.

Also, if you could briefly explain the difference between `minikube image load` and `minikube cache add`, that would be great. It looks like the cache command caches the image in `~/.minikube` in addition to having it in minikube's docker daemon, while `minikube image load` only adds the image to minikube's docker daemon. I'm curious what a use case is where you would want to use `minikube cache` vs. `minikube image load`. Thanks.
@mprimeaux @seperman
I think Priya has handed over the feature to other members of the team, so I'm not sure if she's able to answer (but please do if you can). Both commands are using the ~/.minikube cache folder for caching images that they pull from the daemon or from remote.
But they come from different backgrounds:
`minikube cache` ties into minikube's ability to add more images to be installed by default in the cluster, such as the kubernetes dashboard. It maintains a list in the configuration that is pre-populated when starting a new cluster.

```
Available Commands:
  add         Add an image to local cache.
  delete      Delete an image from the local cache.
  list        List all available images from the local cache.
  reload      reload cached images.
```
`minikube image` is more of a replacement for the docker-env command, so that one can load and build images without needing a docker client, and it also works with different container runtimes, such as containerd (with the help of buildkitd).

```
Available Commands:
  build       Build a container image in minikube
  load        Load a image into minikube
  rm          Remove one or more images
  ls          List images in minikube
```
So the main difference between "cache add" and "image load" is whether it is added to the minikube config as a memory or not.
```
$ minikube cache list
alpine:latest
busybox:latest

$ more ~/.minikube/config/config.json
{
    "cache": {
        "alpine:latest": null,
        "busybox:latest": null
    }
}
```
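To make the "memory" concrete, here is a small sketch that reads that list back out of config.json (the struct models only the "cache" field shown above; anything beyond that is an assumption):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// minikubeConfig models just the "cache" map shown above.
type minikubeConfig struct {
	Cache map[string]interface{} `json:"cache"`
}

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}
	data, err := os.ReadFile(filepath.Join(home, ".minikube", "config", "config.json"))
	if err != nil {
		log.Fatal(err)
	}
	var cfg minikubeConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}
	// these are the images that get re-loaded on every `minikube start`
	for image := range cfg.Cache {
		fmt.Println(image)
	}
}
```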
The main problem in this issue is that the current code only checks the name and the tag to see if an image "already exists". There is no equivalent to the "pull policy" of Kubernetes, where you can choose to replace already-existing images:

> `imagePullPolicy` (string): Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images

And the current handling of the short names and the id/digest is also rather confusing and could be improved upon.
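For comparison, the Kubernetes semantics quoted above boil down to something like the following; wiring such a policy into "cache add" / "image load" is hypothetical at this point:

```go
package main

import "fmt"

// PullPolicy mirrors the Kubernetes values quoted above.
type PullPolicy string

const (
	Always       PullPolicy = "Always"
	Never        PullPolicy = "Never"
	IfNotPresent PullPolicy = "IfNotPresent"
)

// shouldLoad decides whether to (re)load an image into the cluster,
// given whether its repo:tag is already present there.
func shouldLoad(policy PullPolicy, alreadyPresent bool) bool {
	switch policy {
	case Always:
		return true
	case Never:
		return false
	default: // IfNotPresent: effectively today's behaviour, hence this issue
		return !alreadyPresent
	}
}

func main() {
	fmt.Println(shouldLoad(IfNotPresent, true)) // false: the stale image is kept
	fmt.Println(shouldLoad(Always, true))       // true: the image gets refreshed
}
```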
@afbjorklund Thanks much for this detail; very helpful. The 'memory' attribute is the primary attraction of `cache add`, as this allows for full offline usage even after `minikube delete` is executed.

I agree that evolving `minikube image load` with an `imagePullPolicy` equivalent represents a material improvement. In addition to offline support, saving the images represents a huge 'boost' in velocity for software engineers in their day-to-day work. As an example, we delete our minikube clusters many times per day and thus are still using `cache add`.
We are currently discussing how to best handle the Id and Digest discrepancies between the different methods and runtimes. Something like "replace always" (or never) is simple enough, but eventually we want to "compare" two different images.

This is the `needsTransfer` function, which tries to determine if the local image is different from the one present in the cluster... Some of the current methods will be quite "expensive" for tarball and for remote, when it comes to how long they take to run.
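The core of such a comparison could look like the sketch below. The real needsTransfer in minikube has a different shape; this only shows the digest-compare idea, and hints at why tarball/remote are expensive: producing the two v1.Image values and computing their digests is where the time goes.

```go
package compare

import (
	v1 "github.com/google/go-containerregistry/pkg/v1"
)

// needsTransfer reports whether the local image differs from the one
// already in the cluster, by comparing manifest digests. Obtaining
// the two v1.Image values (from a tarball, daemon, or registry) is
// where the real cost hides.
func needsTransfer(local, inCluster v1.Image) (bool, error) {
	localDigest, err := local.Digest()
	if err != nil {
		return true, err // cannot compare, transfer to be safe
	}
	clusterDigest, err := inCluster.Digest()
	if err != nil {
		return true, err
	}
	return localDigest != clusterDigest, nil
}
```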
For `minikube image load`, using the image cache just happens to be the current implementation... Theoretically it could stream the images to the cluster straight from the local daemon or the remote registry.
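Streaming could, in principle, be as simple as piping a tarball export into the cluster's runtime (a sketch; `clusterStdin` stands for e.g. the stdin of a `docker load` run over SSH, and the function name is hypothetical):

```go
package stream

import (
	"io"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

// streamToCluster exports an image from the local daemon straight
// into a writer, with no intermediate file in ~/.minikube/cache.
func streamToCluster(ref name.Reference, clusterStdin io.Writer) error {
	img, err := daemon.Image(ref) // read from the local daemon
	if err != nil {
		return err
	}
	return tarball.Write(ref, img, clusterStdin) // stream as a docker-style tar
}
```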
Thanks for the clarification.
@afbjorklund Above you said:

> So the current workaround would be `docker pull localhost:5000/repo:tag`. That will update the image in the daemon with the new one from the registry.
Can you clarify what `localhost` is, and which daemon & registry you are talking about here? If I want to pull an updated version of a particular `repo:tag` from my host's daemon/registry into minikube, would I `minikube ssh` and then do `docker pull localhost:5000/repo:tag`? If that's the case, I'm assuming the minikube VM has port 5000 tunneled to the host's registry somehow? Or am I misunderstanding?

I ask because I'm not getting any type of successful result from inside a `minikube ssh` session, or from a host shell after doing `eval $(minikube docker-env)`. In both cases, I get a connection refused on port 5000.
Rereading, maybe I've misunderstood the context above. I'm trying to update the image in minikube but it isn't updating as detailed above. In my case, I've got an external registry. Normally for a remote k8s install I'd build the image and push it to the external registry, then deploy on k8s (which has access to talk to the registry). But to test locally on minikube, I don't want to push the untested, potentially incomplete image. I want to push it into minikube for testing first. So,
```
$ docker build -t some.other.registry/image:tag .
$ minikube cache add some.other.registry/image:tag
$ # OR minikube image load some.other.registry/image:tag
```
And minikube still has the prior version. I deleted the image from minikube (verified gone with cache and image commands), deleted it from my host, rebuilt it, pushed again (cache add or image load)... the old image version comes back.
The only way I was able to get it to work was to do an `eval $(minikube docker-env)` before the build, but that doesn't work for some of our images, which need to pull their base image from the upstream registry over VPN (minikube's dockerd doesn't route its traffic through the host's VPN). So it isn't clear to me how I can rebuild and retest these images.
> And minikube still has the prior version. I deleted the image from minikube (verified gone with cache and image commands), deleted it from my host, rebuilt it, pushed again (cache add or image load)... the old image version comes back.
@dbingham I guess it's coming from the `~/.minikube/cache/images/` directory then. I had the same issue getting a SNAPSHOT image, built in the host docker daemon, to be updated in minikube (after doing `minikube image rm`, and `docker rmi` via `minikube docker-env`). I had to manually delete the corresponding image file in the `~/.minikube/cache/images/` directory before doing `minikube image load`.

I found that `minikube cache add` adds the image file to that directory, and `minikube cache delete` removes it from there. `minikube image load` also adds the image file there, but `minikube image rm` doesn't remove it from there. (I'm using minikube v1.19.0.)
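A sketch of that manual workaround as a helper. The cache layout (registry host as a path component with ':' replaced by '_', as in the `localhost_5000` directory mentioned earlier) is inferred from this thread, so treat the path construction as an assumption:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

// dropCachedImage deletes the cached tarball so that the next
// `minikube image load` has to re-export the image.
func dropCachedImage(image string) error {
	home, err := os.UserHomeDir()
	if err != nil {
		return err
	}
	// assumed mapping: "localhost:5000/repo:tag" -> "localhost_5000/repo_tag"
	rel := strings.ReplaceAll(image, ":", "_")
	return os.Remove(filepath.Join(home, ".minikube", "cache", "images", rel))
}

func main() {
	if err := dropCachedImage("localhost:5000/repo:tag"); err != nil {
		log.Fatal(err)
	}
}
```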
@afbjorklund Isn't it a bug that images are not removed from `~/.minikube/cache/images/` when using `minikube image rm`?
> I found that `minikube cache add` adds the image file to that directory, and `minikube cache delete` removes it from there. `minikube image load` also adds the image file there, but `minikube image rm` doesn't remove it from there. (I'm using minikube v1.19.0.) @afbjorklund Isn't it a bug that images are not removed from `~/.minikube/cache/images/` when using `minikube image rm`?
Probably ill-defined, at least. Actually, I don't know why the image commands are using the cache directory at all.
But now that they do, they should probably have matching semantics, like you say: remove it from the cache, if it adds it.
@dbingham

> Can you clarify what `localhost` is, and which daemon & registry you are talking about here? So if I want to pull an updated version of a particular `repo:tag` from my host's daemon/registry into minikube

This was for the cluster-internal registry, as from the add-on. I don't think we support a host registry yet, but it would be a nice feature to have. It would be a better option than connecting to an external host daemon or podman. Sorry for the late reply.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
Steps to reproduce the issue:

1. Pull the alpine:3.11 and alpine:3.12 images, and tag alpine:3.11 as test (`docker tag alpine:3.11 alpine:test`)
2. Add the image to the cache with the following command: `minikube cache add alpine:test`
3. Enter the minikube container and check that the image exists
4. Tag alpine:3.12 as test (`docker tag alpine:3.12 alpine:test`)
5. Reload the cache: `minikube cache reload alpine:test`
6. Check the image again: it has not changed

Optional: Full output of `minikube logs` command: