smart-edge-open / converged-edge-experience-kits

Source code for experience kits with Ansible-based deployment.
Apache License 2.0

While deploying OpenNESS 20.06, ovs-ovn/ovn-central pods are not running #54

Closed Jaladi-Devika closed 3 years ago

Jaladi-Devika commented 4 years ago

Hi,

Can anyone please help me solve the issue below? The ovs-ovn/ovn-central pods are not running.

group_vars/all/10-default.yml (configuration screenshot attached)


TASK [kubernetes/cni/kubeovn/master : wait for running ovs-ovn & ovn-central pods] ***
task path: /home/sysadmin/Downloads/openness-experience-kits-master/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:149
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (30 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (29 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (28 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (27 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (26 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (25 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (24 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (23 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (22 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (21 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (20 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (19 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (18 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (17 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (16 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (15 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (14 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (13 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (12 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (11 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (10 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (9 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (8 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (7 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (6 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (5 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (4 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (3 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (2 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (1 retries left).
fatal: [controller]: FAILED! => {
    "attempts": 30,
    "changed": false,
    "cmd": "set -o pipefail && kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,STATUS:.status.phase --no-headers --field-selector spec.nodeName=controller | grep -E \"ovs-ovn|ovn-central\"\n",
    "delta": "0:00:00.071730",
    "end": "2020-09-07 17:51:59.310148",
    "rc": 0,
    "start": "2020-09-07 17:51:59.238418"
}

STDOUT:

ovn-central-74986486f9-5vc4t   Pending
ovs-ovn-h7r99                  Running

TASK [kubernetes/cni/kubeovn/master : events of ovs-ovn & ovn-central pods] **
task path: /home/sysadmin/Downloads/openness-experience-kits-master/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:169
ok: [controller] => (item=ovs-ovn) => {
    "ansible_loop_var": "item",
    "changed": false,
    "cmd": "set -o pipefail && kubectl describe pod -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name | grep ovs-ovn) | sed -n '/^Events:/,//p'\n",
    "delta": "0:00:00.163775",
    "end": "2020-09-07 17:51:59.641209",
    "item": "ovs-ovn",
    "rc": 0,
    "start": "2020-09-07 17:51:59.477434"
}

STDOUT:

Events:
  Type     Reason          Age                    From                 Message
  ----     ------          ----                   ----                 -------
  Normal   Scheduled       88m                    default-scheduler    Successfully assigned kube-system/ovs-ovn-h7r99 to controller
  Normal   Pulled          88m                    kubelet, controller  Container image "ovs-dpdk" already present on machine
  Normal   Created         88m                    kubelet, controller  Created container openvswitch
  Normal   Started         88m                    kubelet, controller  Started container openvswitch
  Warning  Unhealthy       87m (x4 over 87m)      kubelet, controller  Liveness probe failed: ovsdb-server is not running ovs-vswitchd is not running
  Normal   Killing         87m                    kubelet, controller  Container openvswitch failed liveness probe, will be restarted
  Warning  Unhealthy       53m (x204 over 88m)    kubelet, controller  Readiness probe failed: ovsdb-server is not running ovs-vswitchd is not running
  Warning  BackOff         48m (x99 over 78m)     kubelet, controller  Back-off restarting failed container
  Warning  FailedMount     45m                    kubelet, controller  MountVolume.SetUp failed for volume "hugepage" : mount failed: exit status 32
    Mounting command: systemd-run
    Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage --scope -- mount -t hugetlbfs -o pagesize=2Mi nodev /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage
    Output: Running scope as unit run-16622.scope.
    mount: wrong fs type, bad option, bad superblock on nodev, missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try dmesg | tail or so.
  Warning  FailedMount     45m                    kubelet, controller  MountVolume.SetUp failed for volume "hugepage" : mount failed: exit status 32
    Mounting command: systemd-run
    Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage --scope -- mount -t hugetlbfs -o pagesize=2Mi nodev /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage
    Output: Running scope as unit run-16674.scope.
    mount: wrong fs type, bad option, bad superblock on nodev, missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try dmesg | tail or so.
  Warning  FailedMount     45m                    kubelet, controller  MountVolume.SetUp failed for volume "hugepage" : mount failed: exit status 32
    Mounting command: systemd-run
    Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage --scope -- mount -t hugetlbfs -o pagesize=2Mi nodev /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage
    Output: Running scope as unit run-16764.scope.
    mount: wrong fs type, bad option, bad superblock on nodev, missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try dmesg | tail or so.
  Warning  FailedMount     45m                    kubelet, controller  MountVolume.SetUp failed for volume "hugepage" : mount failed: exit status 32
    Mounting command: systemd-run
    Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage --scope -- mount -t hugetlbfs -o pagesize=2Mi nodev /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage
    Output: Running scope as unit run-16905.scope.
    mount: wrong fs type, bad option, bad superblock on nodev, missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try dmesg | tail or so.
  Warning  FailedMount     45m                    kubelet, controller  MountVolume.SetUp failed for volume "hugepage" : mount failed: exit status 32
    Mounting command: systemd-run
    Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage --scope -- mount -t hugetlbfs -o pagesize=2Mi nodev /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage
    Output: Running scope as unit run-16983.scope.
    mount: wrong fs type, bad option, bad superblock on nodev, missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try dmesg | tail or so.
  Warning  FailedMount     45m                    kubelet, controller  MountVolume.SetUp failed for volume "hugepage" : mount failed: exit status 32
    Mounting command: systemd-run
    Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage --scope -- mount -t hugetlbfs -o pagesize=2Mi nodev /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage
    Output: Running scope as unit run-17107.scope.
    mount: wrong fs type, bad option, bad superblock on nodev, missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try dmesg | tail or so.
  Warning  FailedMount     45m                    kubelet, controller  MountVolume.SetUp failed for volume "hugepage" : mount failed: exit status 32
    Mounting command: systemd-run
    Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage --scope -- mount -t hugetlbfs -o pagesize=2Mi nodev /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage
    Output: Running scope as unit run-17345.scope.
    mount: wrong fs type, bad option, bad superblock on nodev, missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try dmesg | tail or so.
  Warning  FailedMount     44m                    kubelet, controller  MountVolume.SetUp failed for volume "hugepage" : mount failed: exit status 32
    Mounting command: systemd-run
    Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage --scope -- mount -t hugetlbfs -o pagesize=2Mi nodev /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage
    Output: Running scope as unit run-17745.scope.
    mount: wrong fs type, bad option, bad superblock on nodev, missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try dmesg | tail or so.
  Warning  FailedMount     43m                    kubelet, controller  Unable to attach or mount volumes: unmounted volumes=[hugepage], unattached volumes=[dev ovn-token-jf69d host-modules host-run-ovs host-sys host-config-openvswitch host-log hugepage]: timed out waiting for the condition
  Warning  FailedMount     29m (x15 over 43m)     kubelet, controller  (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[hugepage], unattached volumes=[hugepage dev ovn-token-jf69d host-modules host-run-ovs host-sys host-config-openvswitch host-log]: timed out waiting for the condition
  Normal   SandboxChanged  25m                    kubelet, controller  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          25m (x2 over 25m)      kubelet, controller  Container image "ovs-dpdk" already present on machine
  Normal   Created         25m (x2 over 25m)      kubelet, controller  Created container openvswitch
  Normal   Started         25m (x2 over 25m)      kubelet, controller  Started container openvswitch
  Warning  Unhealthy       24m                    kubelet, controller  Liveness probe failed: ovsdb-server is not running ovs-vswitchd is not running
  Warning  BackOff         5m16s (x54 over 25m)   kubelet, controller  Back-off restarting failed container
  Warning  Unhealthy       22s (x138 over 25m)    kubelet, controller  Readiness probe failed: ovsdb-server is not running ovs-vswitchd is not running

ok: [controller] => (item=ovn-central) => {
    "ansible_loop_var": "item",
    "changed": false,
    "cmd": "set -o pipefail && kubectl describe pod -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name | grep ovn-central) | sed -n '/^Events:/,//p'\n",
    "delta": "0:00:00.150694",
    "end": "2020-09-07 17:51:59.921285",
    "item": "ovn-central",
    "rc": 0,
    "start": "2020-09-07 17:51:59.770591"
}

STDOUT:

Events:
  Type     Reason     Age                    From                 Message
  ----     ------     ----                   ----                 -------
  Normal   Scheduled  88m                    default-scheduler    Successfully assigned kube-system/ovn-central-74986486f9-5vc4t to controller
  Warning  Failed     88m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = error pulling image configuration: Get https://alauda-cn-registry-v2.s3.cn-north-1.amazonaws.com.cn/registry/docker/registry/v2/blobs/sha256/c3/c3f28efb699d33d4dcc77cca1e4a48485603a8c90919ea823a441c038438317d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAO3XAUUL6OZF662RQ%2F20200907%2Fcn-north-1%2Fs3%2Faws4_request&X-Amz-Date=20200907T105335Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=611876d46e3fa7ed410a9d112f63397cec758c86c1655416f6eeb6c8770dfada: net/http: TLS handshake timeout
  Warning  Failed     87m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = error pulling image configuration: Get https://alauda-cn-registry-v2.s3.cn-north-1.amazonaws.com.cn/registry/docker/registry/v2/blobs/sha256/c3/c3f28efb699d33d4dcc77cca1e4a48485603a8c90919ea823a441c038438317d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAO3XAUUL6OZF662RQ%2F20200907%2Fcn-north-1%2Fs3%2Faws4_request&X-Amz-Date=20200907T105425Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=e852a763ff8ac84433f858fd7aecfbf83a3aed47a59468641ad7449eb7567c74: net/http: TLS handshake timeout
  Warning  Failed     79m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = net/http: TLS handshake timeout
  Warning  Failed     78m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = error pulling image configuration: Get https://alauda-cn-registry-v2.s3.cn-north-1.amazonaws.com.cn/registry/docker/registry/v2/blobs/sha256/c3/c3f28efb699d33d4dcc77cca1e4a48485603a8c90919ea823a441c038438317d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAO3XAUUL6OZF662RQ%2F20200907%2Fcn-north-1%2Fs3%2Faws4_request&X-Amz-Date=20200907T110304Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=a5c6c1f58e35d61854aed19af2e1820b14b8ba5a7ffb33fc59c8c439696ccc42: net/http: TLS handshake timeout
  Normal   BackOff    68m (x43 over 88m)     kubelet, controller  Back-off pulling image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0"
  Warning  Failed     63m (x51 over 88m)     kubelet, controller  Error: ImagePullBackOff
  Warning  Failed     53m (x8 over 88m)      kubelet, controller  Error: ErrImagePull
  Normal   Pulling    53m (x9 over 88m)      kubelet, controller  Pulling image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0"
  Warning  Failed     44m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = error pulling image configuration: Get https://alauda-cn-registry-v2.s3.cn-north-1.amazonaws.com.cn/registry/docker/registry/v2/blobs/sha256/c3/c3f28efb699d33d4dcc77cca1e4a48485603a8c90919ea823a441c038438317d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAO3XAUUL6OZF662RQ%2F20200907%2Fcn-north-1%2Fs3%2Faws4_request&X-Amz-Date=20200907T113704Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=916691b8ea24e00bf8666c550cc3d157b6d2ba694890f5a17782916ce9679916: net/http: TLS handshake timeout
  Warning  Failed     39m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = net/http: TLS handshake timeout
  Warning  Failed     39m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = error pulling image configuration: Get https://alauda-cn-registry-v2.s3.cn-north-1.amazonaws.com.cn/registry/docker/registry/v2/blobs/sha256/c3/c3f28efb699d33d4dcc77cca1e4a48485603a8c90919ea823a441c038438317d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAO3XAUUL6OZF662RQ%2F20200907%2Fcn-north-1%2Fs3%2Faws4_request&X-Amz-Date=20200907T114242Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=22320aef375ff472ef4110b3e4887173db5c03fcea34ff253c3f6b1c6f49b2d2: net/http: TLS handshake timeout
  Normal   Pulling    38m (x4 over 45m)      kubelet, controller  Pulling image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0"
  Warning  Failed     35m (x4 over 44m)      kubelet, controller  Error: ErrImagePull
  Warning  Failed     35m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = error pulling image configuration: Get https://alauda-cn-registry-v2.s3.cn-north-1.amazonaws.com.cn/registry/docker/registry/v2/blobs/sha256/c3/c3f28efb699d33d4dcc77cca1e4a48485603a8c90919ea823a441c038438317d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAO3XAUUL6OZF662RQ%2F20200907%2Fcn-north-1%2Fs3%2Faws4_request&X-Amz-Date=20200907T114551Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=4cfef79d86a0b997447a4a66221618abd1970b6bfa553ac2460fe81d6a0fa6de: net/http: TLS handshake timeout
  Warning  Failed     35m (x7 over 44m)      kubelet, controller  Error: ImagePullBackOff
  Normal   BackOff    35m (x8 over 44m)      kubelet, controller  Back-off pulling image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0"
  Warning  Failed     29m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = error pulling image configuration: Get https://alauda-cn-registry-v2.s3.cn-north-1.amazonaws.com.cn/registry/docker/registry/v2/blobs/sha256/c3/c3f28efb699d33d4dcc77cca1e4a48485603a8c90919ea823a441c038438317d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAO3XAUUL6OZF662RQ%2F20200907%2Fcn-north-1%2Fs3%2Faws4_request&X-Amz-Date=20200907T115241Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=9e128325134b83958b15251e0dd072492dc0bf0d22e92455b6d6dd739fd74248: net/http: TLS handshake timeout
  Warning  Failed     25m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = error pulling image configuration: Get https://alauda-cn-registry-v2.s3.cn-north-1.amazonaws.com.cn/registry/docker/registry/v2/blobs/sha256/c3/c3f28efb699d33d4dcc77cca1e4a48485603a8c90919ea823a441c038438317d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAO3XAUUL6OZF662RQ%2F20200907%2Fcn-north-1%2Fs3%2Faws4_request&X-Amz-Date=20200907T115644Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=81deb3cb59d53ab95bd7175ad7b2139a6540ec3f804ed61f944bf88de2fdf726: net/http: TLS handshake timeout
  Warning  Failed     24m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = error pulling image configuration: Get https://alauda-cn-registry-v2.s3.cn-north-1.amazonaws.com.cn/registry/docker/registry/v2/blobs/sha256/c3/c3f28efb699d33d4dcc77cca1e4a48485603a8c90919ea823a441c038438317d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAO3XAUUL6OZF662RQ%2F20200907%2Fcn-north-1%2Fs3%2Faws4_request&X-Amz-Date=20200907T115734Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=e98c507daea0f7532ef1fbd9eb321b1157f8b3d58bc2a0cfe1154255d7784225: net/http: TLS handshake timeout
  Warning  Failed     23m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = error pulling image configuration: Get https://alauda-cn-registry-v2.s3.cn-north-1.amazonaws.com.cn/registry/docker/registry/v2/blobs/sha256/c3/c3f28efb699d33d4dcc77cca1e4a48485603a8c90919ea823a441c038438317d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAO3XAUUL6OZF662RQ%2F20200907%2Fcn-north-1%2Fs3%2Faws4_request&X-Amz-Date=20200907T115826Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=8ee7b7654dcd3033deecd399056cf83584cc8a12d9189ac6ecb6a8f2caa0aba0: net/http: TLS handshake timeout
  Normal   Pulling    22m (x4 over 25m)      kubelet, controller  Pulling image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0"
  Warning  Failed     22m (x4 over 25m)      kubelet, controller  Error: ErrImagePull
  Warning  Failed     22m                    kubelet, controller  Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = error pulling image configuration: Get https://alauda-cn-registry-v2.s3.cn-north-1.amazonaws.com.cn/registry/docker/registry/v2/blobs/sha256/c3/c3f28efb699d33d4dcc77cca1e4a48485603a8c90919ea823a441c038438317d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAO3XAUUL6OZF662RQ%2F20200907%2Fcn-north-1%2Fs3%2Faws4_request&X-Amz-Date=20200907T115935Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=39c7aca8cc00c3f36f9f2389fa315dc14eba9e26813f771025c40a5c49c5f8b4: net/http: TLS handshake timeout
  Normal   BackOff    5m15s (x74 over 25m)   kubelet, controller  Back-off pulling image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0"
  Warning  Failed     10s (x95 over 25m)     kubelet, controller  Error: ImagePullBackOff

TASK [kubernetes/cni/kubeovn/master : try to get ovs-ovn execution logs] *****
task path: /home/sysadmin/Downloads/openness-experience-kits-master/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:179
ok: [controller] => (item=ovs-ovn) => {
    "ansible_loop_var": "item",
    "changed": false,
    "cmd": "set -o pipefail && kubectl logs -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name | grep ovs-ovn)\n",
    "delta": "0:00:00.145724",
    "end": "2020-09-07 17:52:00.233999",
    "item": "ovs-ovn",
    "rc": 0,
    "start": "2020-09-07 17:52:00.088275"
}

STDOUT:

sleep 10 seconds, waiting for ovn-sb 10.102.126.171:6642 ready
sleep 10 seconds, waiting for ovn-sb 10.102.126.171:6642 ready
sleep 10 seconds, waiting for ovn-sb 10.102.126.171:6642 ready
sleep 10 seconds, waiting for ovn-sb 10.102.126.171:6642 ready

failed: [controller] (item=ovn-central) => {
    "ansible_loop_var": "item",
    "changed": false,
    "cmd": "set -o pipefail && kubectl logs -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name | grep ovn-central)\n",
    "delta": "0:00:00.133634",
    "end": "2020-09-07 17:52:00.494118",
    "item": "ovn-central",
    "rc": 1,
    "start": "2020-09-07 17:52:00.360484"
}

STDERR:

Error from server (BadRequest): container "ovn-central" in pod "ovn-central-74986486f9-5vc4t" is waiting to start: trying and failing to pull image

MSG:

non-zero return code ...ignoring

TASK [kubernetes/cni/kubeovn/master : end the playbook] **
task path: /home/sysadmin/Downloads/openness-experience-kits-master/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:188
fatal: [controller]: FAILED! => {
    "changed": false
}

MSG:

end the playbook: either ovs-ovn or ovn-central pod did not start or the socket was not created

PLAY RECAP ***
controller : ok=212  changed=71  unreachable=0  failed=1  skipped=107  rescued=1  ignored=5

[root@controller openness-experience-kits-master]#
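
Two independent failures show up in the events above: the ovs-ovn pod cannot mount its "hugepage" volume (mount -t hugetlbfs exits with status 32, which usually indicates that hugetlbfs or 2 MiB hugepages are not available on the node), and the ovn-central image cannot be pulled from index.alauda.cn (repeated net/http TLS handshake timeouts). A minimal sketch for checking the hugepage side on the controller node, using only standard Linux interfaces (nothing below is specific to the experience kits):

    # Does the kernel expose hugetlbfs? Expect a "nodev hugetlbfs" line.
    grep hugetlbfs /proc/filesystems

    # Default hugepage size and number of reserved pages.
    grep -i huge /proc/meminfo
    cat /proc/sys/vm/nr_hugepages

    # Reserve some 2 MiB hugepages if none are allocated (1024 is an example value).
    echo 1024 > /proc/sys/vm/nr_hugepages

    # Reproduce the exact mount kubelet attempts, on a scratch directory.
    mkdir -p /mnt/huge-test
    mount -t hugetlbfs -o pagesize=2Mi nodev /mnt/huge-test && echo "hugetlbfs mount OK"
    umount /mnt/huge-test

If the test mount fails the same way, the node's kernel support or boot-time hugepage allocation needs to be fixed before the ovs-ovn pod can start.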


Thanks & Regards, Devika

MariuszSzczepanik commented 4 years ago

Hi Devika,

Today my team merged a fix for the ovs/ovn module into our private repository (commit title: "Kustomize ovn-central").

We are currently in the middle of the testing phase; I expect these commits to become visible in OpenNESS in the near future. In the meantime, a possible node-side workaround for the ovn-central image pull is sketched below.
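
The ovn-central half of your failure is only the image pull from index.alauda.cn timing out. One unverified workaround sketch, assuming you can reach some mirror of the kube-ovn database image (the <mirror>/kube-ovn-db:v1.0.0 name below is a placeholder, not a real image reference), is to pre-pull the image and retag it so kubelet finds it locally instead of pulling:

    # <mirror>/kube-ovn-db:v1.0.0 is a placeholder - substitute a registry/tag
    # you have verified to be reachable and equivalent to the Alauda image.
    docker pull <mirror>/kube-ovn-db:v1.0.0
    docker tag <mirror>/kube-ovn-db:v1.0.0 index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0

    # With the image present locally, delete the stuck pod so the deployment
    # recreates it; kubelet will use the local image as long as the pod's
    # imagePullPolicy is not "Always".
    kubectl -n kube-system delete pod ovn-central-74986486f9-5vc4t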

Best regards, Mariusz Szczepanik

tomaszwesolowski commented 3 years ago

Please create a new issue if this error occurs again.