Open bestofman opened 1 year ago
Nobody knows about the problem?
Apologies for no response on this.
From the log message:
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment orders-depl --namespace default --watch=false] subtask=-1 task=Deploy
It seems that the value of the Kubernetes context is not getting set correctly. Can you verify that the Kubernetes context is set correctly? (See https://skaffold.dev/docs/environment/kube-context/.)
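For anyone hitting this: besides relying on the ambient kubectl context, the context can be pinned explicitly in skaffold.yaml. A minimal sketch (the context name here is taken from the log line above; the apiVersion is an assumption and should match your installed Skaffold release):

```yaml
apiVersion: skaffold/v4beta6
kind: Config
deploy:
  # Pin the kube-context Skaffold deploys to, instead of the current kubectl context
  kubeContext: kubernetes-admin@kubernetes
```

Per the linked docs, the same override is available on the CLI as skaffold dev --kube-context kubernetes-admin@kubernetes, and the flag takes precedence over the config file.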
I'm also having the same issue with kubeadm, running with the correct context.
I ran into the same issue on Ubuntu 22. The correct context is set both via the YAML and the CLI; I tried both ways, but it still shows the same error.
Here are my thoughts. Upon further investigation, it might have something to do with Docker contexts. If you want Kubernetes on Linux you have to additionally install Docker Desktop, which brings with it another context (and another daemon). The Docker documentation explains this, but I'm not sure whether this is what trips up Skaffold. In my organization, setting the context to docker-desktop and having push: false works on Macs. The exact same skaffold.yaml fails on Linux and throws this error. I believe it's because of the existence of two daemons on Linux.
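To make the two-daemons theory concrete, you can list the Docker contexts and see which daemon the CLI is actually talking to. A transcript sketch (context names depend on your setup; these commands only inspect and switch the client-side context):

```shell
# List available Docker contexts; the one marked with * is active.
# With Docker Desktop installed on Linux you typically see both
# "default" (system daemon) and "desktop-linux" / "docker-desktop".
docker context ls

# Point the CLI back at the system daemon (or at the Desktop one)
docker context use default
```

This is environment-dependent, so treat it as a diagnostic transcript rather than a fix.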
To see if I can get around the error I tried manually changing the docker context, but that didn't get me anywhere.
I also faced the same problem. My issue was that I was manually specifying imagePullPolicy in my K8s Deployment config. After removing it, the error was gone.
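For context, this is the sort of field the comment above refers to. A minimal Deployment sketch (all names here are hypothetical, borrowed from the orders-depl log line earlier in the thread): with a local cluster, an explicit Always policy forces the kubelet to pull from a registry even when the image only exists in the local daemon, which is a plausible cause of image-pull errors.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-depl            # hypothetical, matching the log above
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: orders:latest
          # imagePullPolicy: Always   # removing an explicit policy like this
          #                           # is what fixed it for the commenter above
```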
Deleting the cache file under ~/.skaffold/cache fixed the problem for me on v2.5.0.
Unfortunately doing that causes all images to be rebuilt even if they were unchanged (and it didn't work)
Just putting this here in the hopes that it helps a future visitor. I had this issue and I was rather baffled until I read the previous comments about context. Many thanks for the clues!
Turns out I had made a naming mistake in my .envrc file. (I'm using direnv to set my environment variables.)
$ cat .envrc
kubectl config set-context etmc --cluster=etmc
kubectl config use-context etmc
export KUBECONFIG="$(k3d kubeconfig write etmc)"
export DOCKER_HOST=unix:///var/run/docker.sock
I was using kubectl to set the context to etmc. This was wrong because k3d creates its context names prefixed with k3d-.
$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO         NAMESPACE
          etmc
*         k3d-etmc   k3d-etmc   admin@k3d-etmc
I looked in the file generated by k3d kubeconfig write etmc, and it looks like the context is already set in there, so both kubectl lines in my .envrc were redundant. I changed it to the following and now I'm able to use Skaffold without the deployment errors.
$ cat .envrc
export KUBECONFIG="$(k3d kubeconfig write etmc)"
export DOCKER_HOST=unix:///var/run/docker.sock
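A quick way to confirm the trimmed .envrc resolves the right context (assuming the k3d- prefixed name shown in the get-contexts output above):

```shell
# After direnv reloads the .envrc, check the active context;
# it should be the k3d-prefixed name from the get-contexts output above
kubectl config current-context
```

This requires a running k3d cluster, so it's a diagnostic step rather than something you can script unconditionally.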
EDIT: I think I spoke too soon. This didn't solve my STATUSCHECK_IMAGE_PULL_ERR issue.
I am trying to deploy my NodeJS application on a local Kubernetes cluster using skaffold, but I get the following result:

This is the expiration-depl.yaml:

And this is the expiration-redis-depl.yaml:

Information