oomichi / try-kubernetes


[Fail] [sig-apps] Daemon set [Serial] [It] should rollback without unnecessary restarts [Conformance] #101

Closed. oomichi closed this issue 2 years ago.

oomichi commented 4 years ago
Summarizing 1 Failure:

[Fail] [sig-apps] Daemon set [Serial] [It] should rollback without unnecessary restarts [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:387

Ran 279 of 4983 Specs in 7203.131 seconds
FAIL! -- 278 Passed | 1 Failed | 0 Pending | 4704 Skipped
--- FAIL: TestE2E (7203.14s)
FAIL

Ginkgo ran 1 suite in 2h0m4.487318717s
Test Suite Failed
!!! Error in ./hack/ginkgo-e2e.sh:146
  Error in ./hack/ginkgo-e2e.sh:146. '"${ginkgo}" "${ginkgo_args[@]:+${ginkgo_args[@]}}" "${e2e_test}" -- "${auth_config[@]:+${auth_config[@]}}" --ginkgo.flakeAttempts="${FLAKE_ATTEMPTS}" --host="${KUBE_MASTER_URL}" --provider="${KUBERNETES_PROVIDER}" --gce-project="${PROJECT:-}" --gce-zone="${ZONE:-}" --gce-region="${REGION:-}" --gce-multizone="${MULTIZONE:-false}" --gke-cluster="${CLUSTER_NAME:-}" --kube-master="${KUBE_MASTER:-}" --cluster-tag="${CLUSTER_ID:-}" --cloud-config-file="${CLOUD_CONFIG:-}" --repo-root="${KUBE_ROOT}" --node-instance-group="${NODE_INSTANCE_GROUP:-}" --prefix="${KUBE_GCE_INSTANCE_PREFIX:-e2e}" --network="${KUBE_GCE_NETWORK:-${KUBE_GKE_NETWORK:-e2e}}" --node-tag="${NODE_TAG:-}" --master-tag="${MASTER_TAG:-}" --cluster-monitoring-mode="${KUBE_ENABLE_CLUSTER_MONITORING:-standalone}" --prometheus-monitoring="${KUBE_ENABLE_PROMETHEUS_MONITORING:-false}" --dns-domain="${KUBE_DNS_DOMAIN:-cluster.local}" --ginkgo.slowSpecThreshold="${GINKGO_SLOW_SPEC_THRESHOLD:-300}" ${KUBE_CONTAINER_RUNTIME:+"--container-runtime=${KUBE_CONTAINER_RUNTIME}"} ${MASTER_OS_DISTRIBUTION:+"--master-os-distro=${MASTER_OS_DISTRIBUTION}"} ${NODE_OS_DISTRIBUTION:+"--node-os-distro=${NODE_OS_DISTRIBUTION}"} ${NUM_NODES:+"--num-nodes=${NUM_NODES}"} ${E2E_REPORT_DIR:+"--report-dir=${E2E_REPORT_DIR}"} ${E2E_REPORT_PREFIX:+"--report-prefix=${E2E_REPORT_PREFIX}"} "${@:-}"' exited with status 1
Call stack:
  1: ./hack/ginkgo-e2e.sh:146 main(...)
Exiting with status 1
2019/09/27 23:16:29 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\]' finished in 2h0m4.815373093s
2019/09/27 23:16:29 main.go:319: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Conformance\]: exit status 1]
oomichi commented 4 years ago
Logging pods the kubelet thinks is on node k8s-cpu01
Oct  1 02:36:03.600: INFO: kube-proxy-ddq5b started at 2019-09-26 21:39:01 +0000 UTC (0+1 container statuses recorded)
Oct  1 02:36:03.600: INFO:  Container kube-proxy ready: true, restart count 0
Oct  1 02:36:03.600: INFO: kube-flannel-ds-amd64-hvrhs started at 2019-09-27 22:58:25 +0000 UTC (1+1 container statuses recorded)
Oct  1 02:36:03.600: INFO:  Init container install-cni ready: true, restart count 0
Oct  1 02:36:03.600: INFO:  Container kube-flannel ready: true, restart count 0
Oct  1 02:36:03.684: INFO: 
Latency metrics for node k8s-cpu01
Oct  1 02:36:03.684: INFO: 
Logging node info for node k8s-master
Oct  1 02:36:03.689: INFO: Node Info: &Node{ObjectMeta:{k8s-master   /api/v1/nodes/k8s-master d9678fd4-3ab1-4091-a1c7-43a6e5be78c8 498291 0 2019-09-26 21:06:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"22:e4:52:40:07:ff"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.185 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41442029568 0} {<nil>} 40470732Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4136525824 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37297826550 0} {<nil>} 37297826550 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4031668224 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-10-01 02:35:59 +0000 UTC,LastTransitionTime:2019-09-26 21:06:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-10-01 02:35:59 +0000 UTC,LastTransitionTime:2019-09-26 21:06:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-10-01 02:35:59 +0000 UTC,LastTransitionTime:2019-09-26 21:06:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-10-01 02:35:59 +0000 UTC,LastTransitionTime:2019-09-26 21:17:50 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.185,},NodeAddress{Type:Hostname,Address:k8s-master,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23261bf976bd4f8a96ba85ac07e7d0df,SystemUUID:23261BF9-76BD-4F8A-96BA-85AC07E7D0DF,BootID:23d1c9a5-e46c-454c-8ca0-00e5701a4bb1,KernelVersion:4.15.0-64-generic,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.16.0,KubeProxyVersion:v1.16.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15-0],SizeBytes:246640776,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:f4168527c91289da2708f62ae729fdde5fb484167dd05ffbb7ab666f60de96cd k8s.gcr.io/kube-apiserver:v1.16.0],SizeBytes:217066846,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:c156a05ee9d40e3ca2ebf9337f38a10558c1fc6c9124006f128a82e6c38cdf3e k8s.gcr.io/kube-controller-manager:v1.16.0],SizeBytes:163310046,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0 k8s.gcr.io/kube-scheduler:v1.16.0],SizeBytes:87265822,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:e7f0f8e320cfeeaafdc9c0cb8e23f51e542fa1d955ae39c8131a0531ba72c794 k8s.gcr.io/kube-proxy:v1.16.0],SizeBytes:86056924,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:7806805c93b20a168d0bbbd25c6a213f00ac58a511c47e8fa6409543528a204e quay.io/coreos/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct  1 02:36:03.690: INFO: 
Logging kubelet events for node k8s-master
Oct  1 02:36:03.694: INFO: 
Logging pods the kubelet thinks is on node k8s-master
Oct  1 02:36:03.718: INFO: coredns-5644d7b6d9-hr8q5 started at 2019-09-26 21:17:57 +0000 UTC (0+1 container statuses recorded)
Oct  1 02:36:03.718: INFO:  Container coredns ready: true, restart count 0
Oct  1 02:36:03.718: INFO: kube-controller-manager-k8s-master started at 2019-09-26 21:06:12 +0000 UTC (0+1 container statuses recorded)
Oct  1 02:36:03.718: INFO:  Container kube-controller-manager ready: true, restart count 1
Oct  1 02:36:03.718: INFO: kube-scheduler-k8s-master started at 2019-09-26 21:06:12 +0000 UTC (0+1 container statuses recorded)
Oct  1 02:36:03.718: INFO:  Container kube-scheduler ready: true, restart count 0
Oct  1 02:36:03.718: INFO: etcd-k8s-master started at 2019-09-26 21:06:12 +0000 UTC (0+1 container statuses recorded)
Oct  1 02:36:03.718: INFO:  Container etcd ready: true, restart count 0
Oct  1 02:36:03.718: INFO: kube-apiserver-k8s-master started at 2019-09-26 21:06:12 +0000 UTC (0+1 container statuses recorded)
Oct  1 02:36:03.718: INFO:  Container kube-apiserver ready: true, restart count 0
Oct  1 02:36:03.718: INFO: kube-proxy-j66x2 started at 2019-09-26 21:06:38 +0000 UTC (0+1 container statuses recorded)
Oct  1 02:36:03.718: INFO:  Container kube-proxy ready: true, restart count 0
Oct  1 02:36:03.718: INFO: kube-flannel-ds-amd64-7lcs2 started at 2019-09-26 21:17:37 +0000 UTC (1+1 container statuses recorded)
Oct  1 02:36:03.718: INFO:  Init container install-cni ready: true, restart count 0
Oct  1 02:36:03.718: INFO:  Container kube-flannel ready: true, restart count 0
Oct  1 02:36:03.718: INFO: coredns-5644d7b6d9-krdhq started at 2019-09-26 21:17:57 +0000 UTC (0+1 container statuses recorded)
Oct  1 02:36:03.718: INFO:  Container coredns ready: true, restart count 0
Oct  1 02:36:03.765: INFO: 
Latency metrics for node k8s-master
Oct  1 02:36:03.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9497" for this suite.
Oct  1 02:36:09.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct  1 02:36:09.980: INFO: namespace daemonsets-9497 deletion completed in 6.210538296s

• Failure [6.693 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance] [It]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

  Oct  1 02:36:03.510: Conformance test suite needs a cluster with at least 2 nodes.
  Expected
      <int>: 1
  to be >
      <int>: 1

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:387
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct  1 02:36:09.982: INFO: Running AfterSuite actions on all nodes
Oct  1 02:36:09.982: INFO: Running AfterSuite actions on node 1

Summarizing 1 Failure:

[Fail] [sig-apps] Daemon set [Serial] [It] should rollback without unnecessary restarts [Conformance] 
oomichi commented 4 years ago

Conformance test suite needs a cluster with at least 2 nodes. So it won't pass without at least two nodes...

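The check at daemon_set.go:387 requires more than one schedulable node. k8s-master carries the node-role.kubernetes.io/master:NoSchedule taint (see the node info above), so only k8s-cpu01 counts toward that total, which is why the log prints "Expected <int>: 1 to be > <int>: 1". A rough pre-flight check before re-running the suite (a sketch; excluding nodes labeled as masters only approximates the schedulable-node count the test computes):

$ # Count Ready nodes that do not carry the master role label.
$ kubectl get nodes -l '!node-role.kubernetes.io/master' --no-headers | awk '$2 == "Ready"' | wc -l

On the cluster above this prints 1, matching the failed assertion.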
oomichi commented 4 years ago

Added a node.

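The join step itself is not shown; for a kubeadm-built cluster like this one (the kubeadm.alpha.kubernetes.io/cri-socket annotation on the nodes suggests kubeadm), adding a worker typically looks roughly like the following sketch:

$ # On k8s-master: print a fresh join command (bootstrap token plus discovery CA cert hash).
$ sudo kubeadm token create --print-join-command
$ # On the new worker (k8s-cpu02): run the printed command, for example:
$ # sudo kubeadm join 192.168.1.185:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Once the kubelet on the new node registers, it shows up in the node list:
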
$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-cpu01    Ready    <none>   4d19h   v1.16.0
k8s-cpu02    Ready    <none>   22s     v1.16.0
k8s-master   Ready    master   4d20h   v1.16.0