GoogleContainerTools / skaffold

Easy and Repeatable Kubernetes Development
https://skaffold.dev/
Apache License 2.0

Using "skaffold dev" inside gcr.io/k8s-skaffold/skaffold for local development #4240

Closed MoogyG closed 4 years ago

MoogyG commented 4 years ago

To ensure that all developers on my team use the same version of skaffold, I would like to use the official container for local development with minikube.

Running a local install of skaffold (1.7.0) with skaffold dev works fine on my machine: every deployment comes up.

When I try running the same version from the official image:

docker run -v /var/run/docker.sock:/var/run/docker.sock -v "$(pwd):/src" -v "$HOME/.kube:/root/.kube" -v "$HOME/.minikube:$HOME/.minikube" -w /src gcr.io/k8s-skaffold/skaffold:v1.7.0 skaffold dev

none of the built images are accessible from minikube: the pods' status goes to ErrImagePull and then ImagePullBackOff.

Why can't minikube access the images that were built inside the container and stored on my host?

tstromberg commented 4 years ago

Can you run kubectl describe on the pods to show where they are trying to pull the images from?

The output of skaffold dev would also be helpful to understand where the images are being published to.
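For reference, the requested inspection would look something like this (the pod name and namespace here are taken from the describe output in the next comment; substitute your own):

```shell
# List the pods to find their generated names
kubectl get pods -n reline

# Describe a pod to see its image reference and the pull events
kubectl describe pod nginx-6dfffb6574-t2vm6 -n reline
```

The Events section at the bottom of the describe output is what shows whether the kubelet found the image locally or tried (and failed) to pull it from a registry.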

MoogyG commented 4 years ago

Here is the result when using the local skaffold dev:

Name:         nginx-6dfffb6574-t2vm6
Namespace:    reline
Priority:     0
Node:         minikube/192.168.39.168
Start Time:   Thu, 21 May 2020 08:12:19 +0200
Labels:       app=nginx
              app.kubernetes.io/managed-by=skaffold-v1.7.0
              pod-template-hash=6dfffb6574
              skaffold.dev/builder=local
              skaffold.dev/cleanup=true
              skaffold.dev/deployer=kubectl
              skaffold.dev/docker-api-version=1.40
              skaffold.dev/run-id=b42ae756-7f46-4ab0-9994-9777233d3c7f
              skaffold.dev/tag-policy=envTemplateTagger
              skaffold.dev/tail=true
Annotations:  <none>
Status:       Running
IP:           10.88.0.131
IPs:
  IP:           10.88.0.131
Controlled By:  ReplicaSet/nginx-6dfffb6574
Containers:
  nginx-container:
    Container ID:   docker://7ec2fb50dee35df4d0ad1b2229e41b6b947d1e02e2e6003ffb52521e89c58268
    Image:          nginx:875408e349117f131bffb262190079eaf0330399657fa2c1ceaf4c23f22866d1
    Image ID:       docker://sha256:875408e349117f131bffb262190079eaf0330399657fa2c1ceaf4c23f22866d1
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 21 May 2020 08:12:21 +0200
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:80/ delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:      http-get http://:80/ delay=5s timeout=3s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t7l8b (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-t7l8b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-t7l8b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age        From               Message
  ----     ------     ----       ----               -------
  Normal   Scheduled  <unknown>  default-scheduler  Successfully assigned reline/nginx-6dfffb6574-t2vm6 to minikube
  Normal   Pulled     2m13s      kubelet, minikube  Container image "nginx:875408e349117f131bffb262190079eaf0330399657fa2c1ceaf4c23f22866d1" already present on machine
  Normal   Created    2m13s      kubelet, minikube  Created container nginx-container
  Normal   Started    2m13s      kubelet, minikube  Started container nginx-container
  Warning  Unhealthy  2m3s       kubelet, minikube  Readiness probe failed: HTTP probe failed with statuscode: 502

And here when using the Docker container:

Name:         nginx-7bcd75948d-ng7zg
Namespace:    reline
Priority:     0
Node:         minikube/192.168.39.168
Start Time:   Thu, 21 May 2020 08:15:29 +0200
Labels:       app=nginx
              app.kubernetes.io/managed-by=skaffold-v1.7.0
              pod-template-hash=7bcd75948d
              skaffold.dev/builder=local
              skaffold.dev/cleanup=true
              skaffold.dev/deployer=kubectl
              skaffold.dev/docker-api-version=1.40
              skaffold.dev/run-id=000f1552-6ff2-4f37-aece-c22becd85f8b
              skaffold.dev/tag-policy=envTemplateTagger
              skaffold.dev/tail=true
Annotations:  <none>
Status:       Pending
IP:           10.88.0.137
IPs:
  IP:           10.88.0.137
Controlled By:  ReplicaSet/nginx-7bcd75948d
Containers:
  nginx-container:
    Container ID:   
    Image:          nginx:4301fa8e0dafa6f2b2be41dc7aa0261cc5e7310bcdddf07037416e0e14f72719
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:80/ delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:      http-get http://:80/ delay=5s timeout=3s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qmvqw (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-qmvqw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qmvqw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  <unknown>          default-scheduler  Successfully assigned reline/nginx-7bcd75948d-ng7zg to minikube
  Normal   Pulling    20s (x2 over 39s)  kubelet, minikube  Pulling image "nginx:4301fa8e0dafa6f2b2be41dc7aa0261cc5e7310bcdddf07037416e0e14f72719"
  Warning  Failed     18s (x2 over 35s)  kubelet, minikube  Failed to pull image "nginx:4301fa8e0dafa6f2b2be41dc7aa0261cc5e7310bcdddf07037416e0e14f72719": rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:4301fa8e0dafa6f2b2be41dc7aa0261cc5e7310bcdddf07037416e0e14f72719 not found: manifest unknown: manifest unknown
  Warning  Failed     18s (x2 over 35s)  kubelet, minikube  Error: ErrImagePull
  Normal   BackOff    6s (x2 over 35s)   kubelet, minikube  Back-off pulling image "nginx:4301fa8e0dafa6f2b2be41dc7aa0261cc5e7310bcdddf07037416e0e14f72719"
  Warning  Failed     6s (x2 over 35s)   kubelet, minikube  Error: ImagePullBackOff

Finally, here is the output from skaffold dev:

Listing files to watch...
 - nginx
 - php-fpm
Generating tags...
 - nginx -> nginx:latest
 - php-fpm -> php-fpm:latest
Checking cache...
 - nginx: Found Locally
 - php-fpm: Found Locally
Tags used in deployment:
 - nginx -> nginx:875408e349117f131bffb262190079eaf0330399657fa2c1ceaf4c23f22866d1
 - php-fpm -> php-fpm:153a13c6c5c353c162ce10f972ce8443c20900b05713ae8cc25ee34e89ad660b
   local images can't be referenced by digest. They are tagged and referenced by a unique ID instead
Starting deploy...
 - namespace/reline created
 - networkpolicy.networking.k8s.io/default-deny-all created
 - service/adminer created
 - ingress.extensions/adminer-ingress created
 - deployment.apps/adminer created
 - service/mysql created
 - persistentvolumeclaim/mysql-pv-claim created
 - deployment.apps/mysql created
 - networkpolicy.networking.k8s.io/mysql-network-policy created
 - service/nginx created
 - ingress.extensions/nginx-ingress created
 - networkpolicy.networking.k8s.io/nginx-network-policy created
 - deployment.apps/nginx created
 - service/php-fpm created
 - networkpolicy.networking.k8s.io/php-fpm-network-policy created
 - deployment.apps/php-fpm created
Waiting for deployments to stabilize...
 - reline:deployment/adminer: waiting for rollout to finish: 0 of 1 updated replicas are available...
 - reline:deployment/mysql: waiting for rollout to finish: 0 of 1 updated replicas are available...
 - reline:deployment/nginx: waiting for rollout to finish: 0 of 1 updated replicas are available...
 - reline:deployment/php-fpm: waiting for rollout to finish: 0 of 1 updated replicas are available...
 - reline:deployment/adminer is ready. [3/4 deployment(s) still pending]
 - reline:deployment/php-fpm is ready. [2/4 deployment(s) still pending]
 - reline:deployment/nginx is ready. [1/4 deployment(s) still pending]
 - reline:deployment/mysql is ready.
Deployments stabilized in 20.539529841s
tstromberg commented 4 years ago

Are you running the docker run command from inside of a minikube VM, or directly from the host? If from the host, my theory is that your dockerized skaffold dev incantation is pushing images to the host Docker daemon, and not the Docker daemon inside of minikube.
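One way to check this theory (an untested sketch, assuming the minikube VM driver with the Docker runtime) is to compare the image lists of the two daemons from the host:

```shell
# Images in the host Docker daemon
docker images | grep nginx

# Point the local docker client at minikube's daemon, then list again
eval "$(minikube docker-env)"
docker images | grep nginx

# Undo the redirection when done
eval "$(minikube docker-env -u)"
```

If the skaffold-built tag only shows up in the first list, the build went to the host daemon, and minikube has no registry to pull it from.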

This is specifically due to:

Warning Failed 18s (x2 over 35s) kubelet, minikube Failed to pull image "nginx:4301fa8e0dafa6f2b2be41dc7aa0261cc5e7310bcdddf07037416e0e14f72719": rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:4301fa8e0dafa6f2b2be41dc7aa0261cc5e7310bcdddf07037416e0e14f72719 not found: manifest unknown: manifest unknown

Can you include the output of skaffold dev -v=debug in this environment?


MoogyG commented 4 years ago

You are right: the problem comes from the minikube docker-env command that skaffold runs internally to connect to the minikube Docker daemon. With -v=debug I got this warning:

time="2020-05-22T22:57:14Z" level=warning msg="Could not get minikube docker env, falling back to local docker daemon: getting minikube env: starting command minikube docker-env --shell none: exec: \"minikube\": executable file not found in $PATH"

Since the minikube binary isn't available inside the container, this seems unsolvable in my setup. Thanks anyway.
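For anyone hitting the same wall: since skaffold only shells out to minikube docker-env to discover the daemon's address, one possible workaround (an untested sketch; the DOCKER_* variable names are the ones docker-env exports) is to run that command on the host and forward the resulting variables into the container:

```shell
# On the host, where minikube is installed
eval "$(minikube docker-env)"

# Forward the DOCKER_* variables so the containerized skaffold's docker
# client talks to minikube's daemon; the $HOME/.minikube mount already
# places the TLS certs that DOCKER_CERT_PATH points to at the same path.
docker run \
  -e DOCKER_HOST -e DOCKER_TLS_VERIFY -e DOCKER_CERT_PATH \
  -v "$(pwd):/src" \
  -v "$HOME/.kube:/root/.kube" \
  -v "$HOME/.minikube:$HOME/.minikube" \
  -w /src \
  gcr.io/k8s-skaffold/skaffold:v1.7.0 skaffold dev
```

With the client pointed at minikube's daemon, images are built directly where the kubelet can find them, so no pull is needed.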