smart-edge-open / converged-edge-experience-kits

Source code for experience kits with Ansible-based deployment.
Apache License 2.0

ErrImageNeverPull while deploying sample app #38

Closed pavanats closed 4 years ago

pavanats commented 4 years ago

Hi, I am trying to deploy a sample app following the steps provided on the OpenNESS site, and I come across an ErrImageNeverPull error. Now that we have deployed OpenNESS successfully, it would help if we could get on a call to discuss a few things. The output of `kubectl describe` is provided below:

```
kubectl describe pods producer-685fcbc569-swc8r

Name:         producer-685fcbc569-swc8r
Namespace:    default
Priority:     0
Node:         node01/146.0.237.30
Start Time:   Tue, 21 Jul 2020 12:58:43 +0200
Labels:       app=producer
              pod-template-hash=685fcbc569
Annotations:  ovn.kubernetes.io/allocated: true
              ovn.kubernetes.io/cidr: 10.16.0.0/16
              ovn.kubernetes.io/gateway: 10.16.0.1
              ovn.kubernetes.io/ip_address: 10.16.0.16
              ovn.kubernetes.io/logical_switch: ovn-default
              ovn.kubernetes.io/mac_address: 0e:4f:1d:10:00:11
Status:       Pending
IP:           10.16.0.16
IPs:
  IP:  10.16.0.16
Controlled By:  ReplicaSet/producer-685fcbc569
Containers:
  producer:
    Container ID:
    Image:          producer:1.0
    Image ID:
    Ports:          80/TCP, 443/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       ErrImageNeverPull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xqj7r (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-xqj7r:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xqj7r
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kube-ovn/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason             Age                     From             Message
  ----     ------             ----                    ----             -------
  Warning  ErrImageNeverPull  2m47s (x5088 over 18h)  kubelet, node01  Container image "producer:1.0" is not present with pull policy of Never
```
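The event at the bottom is the whole story: the container spec sets `imagePullPolicy: Never`, so the kubelet on node01 only consults its local image store and never contacts a registry. For context, here is a sketch of the relevant part of the producer Deployment spec, reconstructed from the describe output above (image, ports, and pull policy are taken from this pod; the surrounding structure is the standard Deployment shape, not the actual manifest):

```yaml
# Sketch of the relevant fragment of the producer Deployment spec.
# With imagePullPolicy: Never, the kubelet on the scheduled node
# (node01 here) never pulls, so producer:1.0 must already exist
# in that node's local image store.
spec:
  template:
    spec:
      containers:
        - name: producer
          image: producer:1.0          # looked up locally only
          imagePullPolicy: Never       # cause of ErrImageNeverPull
          ports:
            - containerPort: 80
            - containerPort: 443
```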

Output of `kubectl get pods -o wide -A` is shared below:

```
[root@controller ~]# kubectl get pods -o wide -A
NAMESPACE     NAME                                                        READY   STATUS              RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
cdi           cdi-apiserver-885758cc4-f8g74                               1/1     Running             0          19h    10.16.0.8        node01       <none>           <none>
cdi           cdi-deployment-5bdcc85d54-f6cs5                             1/1     Running             0          19h    10.16.0.24       node01       <none>           <none>
cdi           cdi-operator-76b6694845-dzcmq                               1/1     Running             0          20h    10.16.0.9        node01       <none>           <none>
cdi           cdi-uploadproxy-89cf96777-fk296                             1/1     Running             0          19h    10.16.0.25       node01       <none>           <none>
default       producer-685fcbc569-swc8r                                   0/1     ErrImageNeverPull   0          18h    10.16.0.16       node01       <none>           <none>
kube-system   coredns-66bff467f8-snzlh                                    1/1     Running             0          21h    10.16.0.3        controller   <none>           <none>
kube-system   coredns-66bff467f8-vvtlw                                    1/1     Running             0          21h    10.16.0.2        controller   <none>           <none>
kube-system   descheduler-cronjob-1595395440-fd2qr                        0/1     Completed           0          6m5s   10.16.0.26       node01       <none>           <none>
kube-system   descheduler-cronjob-1595395560-rtksf                        0/1     Completed           0          4m5s   10.16.0.34       node01       <none>           <none>
kube-system   descheduler-cronjob-1595395680-cq569                        0/1     Completed           0          2m4s   10.16.0.29       node01       <none>           <none>
kube-system   descheduler-cronjob-1595395800-8jg5d                        0/1     ContainerCreating   0          4s     <none>           node01       <none>           <none>
kube-system   etcd-controller                                             1/1     Running             0          21h    134.119.213.95   controller   <none>           <none>
kube-system   kube-apiserver-controller                                   1/1     Running             0          21h    134.119.213.95   controller   <none>           <none>
kube-system   kube-controller-manager-controller                          1/1     Running             0          21h    134.119.213.95   controller   <none>           <none>
kube-system   kube-ovn-cni-h5p5m                                          1/1     Running             5          20h    134.119.213.95   controller   <none>           <none>
kube-system   kube-ovn-cni-xjdzl                                          1/1     Running             0          19h    146.0.237.30     node01       <none>           <none>
kube-system   kube-ovn-controller-96f89c68b-pp75k                         1/1     Running             0          20h    134.119.213.95   controller   <none>           <none>
kube-system   kube-ovn-controller-96f89c68b-zzks7                         1/1     Running             0          19h    146.0.237.30     node01       <none>           <none>
kube-system   kube-proxy-tlgbm                                            1/1     Running             0          19h    146.0.237.30     node01       <none>           <none>
kube-system   kube-proxy-w2zqp                                            1/1     Running             0          21h    134.119.213.95   controller   <none>           <none>
kube-system   kube-scheduler-controller                                   1/1     Running             0          20h    134.119.213.95   controller   <none>           <none>
kube-system   ovn-central-74986486f9-fvq5z                                1/1     Running             0          20h    134.119.213.95   controller   <none>           <none>
kube-system   ovs-ovn-2mm96                                               1/1     Running             10         20h    134.119.213.95   controller   <none>           <none>
kube-system   ovs-ovn-hpmdd                                               1/1     Running             0          19h    146.0.237.30     node01       <none>           <none>
kubevirt      virt-api-f94f8b959-6vr6m                                    1/1     Running             0          19h    10.16.0.28       node01       <none>           <none>
kubevirt      virt-api-f94f8b959-z2j5d                                    1/1     Running             0          19h    10.16.0.27       node01       <none>           <none>
kubevirt      virt-controller-64766f7cbf-58xmw                            1/1     Running             0          19h    10.16.0.30       node01       <none>           <none>
kubevirt      virt-controller-64766f7cbf-c8sfn                            1/1     Running             0          19h    10.16.0.31       node01       <none>           <none>
kubevirt      virt-handler-qr7qn                                          1/1     Running             0          19h    10.16.0.32       node01       <none>           <none>
kubevirt      virt-operator-79c97797-8v7sj                                1/1     Running             0          20h    10.16.0.7        node01       <none>           <none>
kubevirt      virt-operator-79c97797-zwnfv                                1/1     Running             0          20h    10.16.0.6        node01       <none>           <none>
openness      docker-registry-deployment-54d5bb5c-672z2                   1/1     Running             0          20h    134.119.213.95   controller   <none>           <none>
openness      eaa-6f8b94c9d7-kxjlm                                        1/1     Running             0          20h    10.16.0.4        node01       <none>           <none>
openness      edgedns-ll22s                                               1/1     Running             0          19h    10.16.0.21       node01       <none>           <none>
openness      interfaceservice-xdbsz                                      1/1     Running             0          19h    10.16.0.19       node01       <none>           <none>
openness      nfd-release-node-feature-discovery-master-cdbcfd997-lrppv   1/1     Running             0          20h    10.16.0.15       controller   <none>           <none>
openness      nfd-release-node-feature-discovery-worker-5l92k             1/1     Running             0          19h    146.0.237.30     node01       <none>           <none>
openness      syslog-master-dxct9                                         1/1     Running             0          20h    10.16.0.5        controller   <none>           <none>
openness      syslog-ng-9svpj                                             1/1     Running             0          19h    10.16.0.22       node01       <none>           <none>
telemetry     cadvisor-cx4z6                                              2/2     Running             0          19h    10.16.0.20       node01       <none>           <none>
telemetry     collectd-nkj8x                                              2/2     Running             0          19h    146.0.237.30     node01       <none>           <none>
telemetry     custom-metrics-apiserver-54699b845f-dbsws                   1/1     Running             0          20h    10.16.0.13       controller   <none>           <none>
telemetry     grafana-6b79c984b-88mpv                                     2/2     Running             0          20h    10.16.0.17       controller   <none>           <none>
telemetry     otel-collector-7d5b75bbdf-6jkxb                             2/2     Running             0          20h    10.16.0.11       node01       <none>           <none>
telemetry     prometheus-node-exporter-92q8m                              1/1     Running             0          19h    10.16.0.23       node01       <none>           <none>
telemetry     prometheus-server-76c96b9497-xkhg6                          3/3     Running             0          20h    10.16.0.10       controller   <none>           <none>
telemetry     telemetry-aware-scheduling-68467c4ccd-bxltp                 2/2     Running             0          20h    10.16.0.14       controller   <none>           <none>
telemetry     telemetry-collector-certs-vcrqk                             0/1     Completed           0          20h    10.16.0.12       node01       <none>           <none>
telemetry     telemetry-node-certs-5xb8j                                  1/1     Running             0          19h    10.16.0.18       node01       <none>           <none>
```

tomaszwesolowski commented 4 years ago

Hi, are you sure the consumer and producer images are present on your node machine? To verify, you can use these commands: https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#verifying-image-availability. To build those apps you can use this guide: https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#building-sample-application-images. It's important that the images are built on the node machine (since the pull policy is Never, the kubelet will only look for images already present on the machine).
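In other words, both the check and the fix happen on the edge node, not the controller. A minimal sketch, assuming Docker is the container runtime on the node and the sample app sources are already checked out there (the source directories are illustrative placeholders; see the linked onboarding guide for the exact build steps):

```shell
# On the edge node: check whether the sample images exist locally.
docker images | grep -E 'producer|consumer'

# If nothing is listed, (re)build the images on the node so the tags
# match what the pod specs reference (producer:1.0, consumer:1.0).
docker build -t producer:1.0 <producer-app-source-dir>
docker build -t consumer:1.0 <consumer-app-source-dir>
```

Once the tags exist on the node, deleting the failing pod lets the ReplicaSet recreate it and the kubelet find the image locally.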

pavanats commented 4 years ago

We were able to deploy the sample app. Thank you for your help. Pavan

