oomichi / try-kubernetes


2 test failures of [sig-node] regular resource usage tracking #43

Closed: oomichi closed this issue 5 years ago

oomichi commented 6 years ago
[Fail] [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking [It] resource tracking for 0 pods per node
[Fail] [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking [It] resource tracking for 100 pods per node

Summary

oomichi commented 6 years ago

Get an overview of the test.

test/e2e/node/kubelet_perf.go

224                 // We assume that the scheduler will make reasonable scheduling choices
225                 // and assign ~N pods on the node.
226                 // Although we want to track N pods per node, there are N + add-on pods
227                 // in the cluster. The cluster add-on pods can be distributed unevenly
228                 // among the nodes because they are created during the cluster
229                 // initialization. This *noise* is obvious when N is small. We
230                 // deliberately set higher resource usage limits to account for the
231                 // noise.
232                 //
233                 // We set all resource limits generously because this test is mainly
234                 // used to catch resource leaks in the soak cluster. For tracking
235                 // kubelet/runtime resource usage, please see the node e2e benchmark
236                 // dashboard. http://node-perf-dash.k8s.io/

We expect the scheduler to make reasonable scheduling choices and place roughly N pods on each node. However, even though we want to track N pods per node, there are actually N + add-on pods in the cluster. The add-on pods can be distributed unevenly across the nodes because they are created during cluster initialization. This noise becomes noticeable when N is small, so the test deliberately sets higher resource usage limits to account for it.

(In this test) all resource limits are set generously, because the test is mainly used to catch resource leaks in the soak cluster. For tracking kubelet/runtime resource usage, see the node e2e benchmark dashboard.

There are two test cases: resource tracking for 0 pods per node and for 100 pods per node.

Overview of runResourceTrackingTest
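
A rough outline of the flow, reconstructed from the STEP lines in the logs further down; the function name, signature, and structure here are illustrative and do not match the actual helpers in test/e2e/node/kubelet_perf.go.

package main

import (
	"fmt"
	"time"
)

// runResourceTrackingOutline sketches the phases visible in the test log:
// create a ReplicationController with N pods per node, monitor kubelet/runtime
// resource usage for the monitoring period, report memory usage and CPU usage
// percentiles, and finally verify the measured percentiles against the limits.
func runResourceTrackingOutline(podsPerNode int, monitoringTime time.Duration) error {
	// STEP: Creating a RC of N pods and wait until all pods of this RC are running
	// STEP: Start monitoring resource usage (progress is logged every 5 minutes)
	// STEP: Reporting overall resource usage (memory per container, CPU percentiles)
	// Then: compare each container's CPU percentile usage with the expected limits
	// and fail with "CPU usage exceeding limits" if any container exceeds them.
	fmt.Printf("tracking %d pods per node for %v\n", podsPerNode, monitoringTime)
	return nil
}

func main() {
	_ = runResourceTrackingOutline(0, 20*time.Minute)
}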

oomichi commented 6 years ago

Log

[AfterEach] [k8s.io] [sig-node] Kubelet [Serial] [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:218
Aug 17 18:04:38.354: INFO: runtime operation error metrics:
node "k8s-master" runtime operation error rate:
operation "inspect_container": total - 0; error rate - NaN; timeout rate - NaN
operation "list_containers": total - 4394; error rate - 0.000000; timeout rate - 0.000000
operation "list_images": total - 309; error rate - 0.000000; timeout rate - 0.000000
operation "start_container": total - 0; error rate - NaN; timeout rate - NaN
operation "version": total - 361; error rate - 0.000000; timeout rate - 0.000000
operation "create_container": total - 0; error rate - NaN; timeout rate - NaN
operation "pull_image": total - 0; error rate - NaN; timeout rate - NaN
operation "remove_container": total - 0; error rate - NaN; timeout rate - NaN
operation "stop_container": total - 0; error rate - NaN; timeout rate - NaN
operation "info": total - 0; error rate - NaN; timeout rate - NaN
operation "inspect_image": total - 4; error rate - 0.000000; timeout rate - 0.000000

node "k8s-node01" runtime operation error rate:
operation "info": total - 0; error rate - NaN; timeout rate - NaN
operation "create_container": total - 0; error rate - NaN; timeout rate - NaN
operation "list_containers": total - 4422; error rate - 0.000000; timeout rate - 0.000000
operation "remove_container": total - 0; error rate - NaN; timeout rate - NaN
operation "stop_container": total - 0; error rate - NaN; timeout rate - NaN
operation "version": total - 361; error rate - 0.000000; timeout rate - 0.000000
operation "inspect_container": total - 0; error rate - NaN; timeout rate - NaN
operation "pull_image": total - 0; error rate - NaN; timeout rate - NaN
operation "list_images": total - 307; error rate - 0.000000; timeout rate - 0.000000
operation "inspect_image": total - 4; error rate - 0.000000; timeout rate - 0.000000
operation "start_container": total - 0; error rate - NaN; timeout rate - NaN

• Failure [1206.981 seconds]
[k8s.io] [sig-node] Kubelet [Serial] [Slow]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:679
  [k8s.io] [sig-node] regular resource usage tracking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:679
    resource tracking for 0 pods per node [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:270

    Aug 17 18:04:31.796: CPU usage exceeding limits:
     node k8s-master:
     container "kubelet": expected 50th% usage < 0.100; got 0.112
    node k8s-node01:
     container "kubelet": expected 50th% usage < 0.100; got 0.170

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:190
oomichi commented 6 years ago

What does the "th" in "expected 50th%" mean? -> It is the statistical notation for a percentile; it is not a typo.

percentile.
When data is sorted in ascending order, the value located at a given percentage of the way from the start is called a percentile.
The 65th percentile, for example, is the value located 65% of the way up from the minimum.
The first quartile is the 25th percentile, the median is the 50th percentile, and the third quartile is the 75th percentile.
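
As a quick illustration of that definition, a minimal nearest-rank percentile calculation in Go (the sample values are hypothetical; the e2e framework computes its summaries with its own code):

package main

import (
	"fmt"
	"math"
	"sort"
)

// percentile returns the value at percentile p (0-100) of xs using the simple
// "nearest rank on sorted data" definition described above.
func percentile(xs []float64, p float64) float64 {
	if len(xs) == 0 {
		return math.NaN()
	}
	sorted := append([]float64(nil), xs...)
	sort.Float64s(sorted)
	rank := int(math.Ceil(p/100*float64(len(sorted)))) - 1
	if rank < 0 {
		rank = 0
	}
	return sorted[rank]
}

func main() {
	samples := []float64{0.09, 0.10, 0.11, 0.12, 0.17} // hypothetical CPU samples (cores)
	fmt.Println("25th percentile (first quartile):", percentile(samples, 25)) // 0.10
	fmt.Println("50th percentile (median):        ", percentile(samples, 50)) // 0.11
	fmt.Println("75th percentile (third quartile):", percentile(samples, 75)) // 0.12
}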

test/e2e/node/kubelet_perf.go

179                                 if actualValue > expectedValue {
180                                         nodeErrs = append(nodeErrs, fmt.Sprintf("container %q: expected %.0fth%% usage < %.3f; got %.3f",
181                                                 cName, p*100, expectedValue, actualValue))
182                                 }
oomichi commented 6 years ago

A single test failure takes 20 minutes, so fixing this should have a significant payoff.

$ go run hack/e2e.go -- --provider=skeleton --test --test_args="--ginkgo.focus=resource\stracking\sfor\s0\spods\sper\snode" --check-version-skew=false
2018/08/17 18:43:53 e2e.go:79: Calling kubetest --verbose-commands=true --provider=skeleton --test --test_args=--ginkgo.focus=resource\stracking\sfor\s0\spods\sper\snode --check-version-skew=false...
2018/08/17 18:43:53 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.1-2+9cefc5e2ae224a", GitCommit:"9cefc5e2ae224a2c2f9977b138c31e7d31fe89a2", GitTreeState:"clean", BuildDate:"2018-08-17T00:44:07Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
2018/08/17 18:43:53 process.go:155: Step './hack/e2e-internal/e2e-status.sh' finished in 150.168582ms
2018/08/17 18:43:53 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
2018/08/17 18:43:53 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 158.314664ms
2018/08/17 18:43:53 process.go:153: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=resource\stracking\sfor\s0\spods\sper\snode
Setting up for KUBERNETES_PROVIDER="skeleton".
Skeleton Provider: prepare-e2e not implemented
KUBE_MASTER_IP: 192.168.1.108:6443
KUBE_MASTER: k8s-master
Aug 17 18:43:53.684: INFO: Overriding default scale value of zero to 1
Aug 17 18:43:53.684: INFO: Overriding default milliseconds value of zero to 5000
I0817 18:43:53.769920   13142 e2e.go:333] Starting e2e run "7d7eefad-a24d-11e8-9727-fa163e738a69" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1534531433 - Will randomize all specs
Will run 1 of 999 specs

Aug 17 18:43:53.799: INFO: >>> kubeConfig: /home/ubuntu/admin.conf
Aug 17 18:43:53.801: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 17 18:43:53.836: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 17 18:43:53.856: INFO: 11 / 11 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 17 18:43:53.856: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Aug 17 18:43:53.867: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Aug 17 18:43:53.867: INFO: Dumping network health container logs from all nodes...
Aug 17 18:43:53.874: INFO: e2e test version: v1.11.1-2+9cefc5e2ae224a
Aug 17 18:43:53.875: INFO: kube-apiserver version: v1.11.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking
  resource tracking for 0 pods per node
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:270
[BeforeEach] [k8s.io] [sig-node] Kubelet [Serial] [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 17 18:43:53.879: INFO: >>> kubeConfig: /home/ubuntu/admin.conf
STEP: Building a namespace api object
Aug 17 18:43:53.950: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] Kubelet [Serial] [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:201
Aug 17 18:43:53.956: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
[It] resource tracking for 0 pods per node
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:270
STEP: Creating a RC of 0 pods and wait until all pods of this RC are running
STEP: creating replication controller resource0-7dc319a3-a24d-11e8-9727-fa163e738a69 in namespace e2e-tests-kubelet-perf-vvpbc
I0817 18:43:54.098657   13142 runners.go:177] Created replication controller with name: resource0-7dc319a3-a24d-11e8-9727-fa163e738a69, namespace: e2e-tests-kubelet-perf-vvpbc, replica count: 0
Aug 17 18:43:54.148: INFO: [Resource usage on node "k8s-master" is not ready yet, Resource usage on node "k8s-node01" is not ready yet]
Aug 17 18:43:54.148: INFO:
STEP: Start monitoring resource usage
Aug 17 18:43:54.148: INFO: Still running...19m59.999999515s left
Aug 17 18:48:54.171: INFO: 10 pods are running on node k8s-master
Aug 17 18:48:54.189: INFO: 2 pods are running on node k8s-node01
Aug 17 18:48:54.189: INFO: Still running...14m59.959733539s left
Aug 17 18:53:54.217: INFO: 10 pods are running on node k8s-master
Aug 17 18:53:54.236: INFO: 2 pods are running on node k8s-node01
Aug 17 18:53:54.236: INFO: Still running...9m59.912707339s left
Aug 17 18:58:54.256: INFO: 10 pods are running on node k8s-master
Aug 17 18:58:54.293: INFO: 2 pods are running on node k8s-node01
Aug 17 18:58:54.293: INFO: Still running...4m59.855164896s left
Aug 17 19:03:54.186: INFO: 10 pods are running on node k8s-master
Aug 17 19:03:54.197: INFO: 2 pods are running on node k8s-node01
STEP: Reporting overall resource usage
Aug 17 19:03:54.204: INFO: 10 pods are running on node k8s-master
Aug 17 19:03:54.215: INFO: 2 pods are running on node k8s-node01
Aug 17 19:03:54.215: INFO: Resource usage on node "k8s-node01":
container cpu(cores) memory_working_set(MB) memory_rss(MB)
"/"       0.205      1046.82                279.34
"runtime" 0.018      265.88                 80.56
"kubelet" 0.166      162.73                 147.81

Resource usage on node "k8s-master":
container cpu(cores) memory_working_set(MB) memory_rss(MB)
"/"       0.422      2236.28                1354.32
"runtime" 0.031      184.12                 91.25
"kubelet" 0.101      116.20                 88.37

Aug 17 19:03:54.216: INFO: [Result:Performance] {
  "version": "v2",
  "dataItems": [
    {
      "data": {
        "memory": 222.98828125,
        "rss": 147.80859375,
        "workingset": 162.73046875
      },
      "unit": "MB",
      "labels": {
        "container": "kubelet",
        "datatype": "resource",
        "node": "node",
        "resource": "memory"
      }
    },
    {
      "data": {
        "memory": 1590.859375,
        "rss": 279.3359375,
        "workingset": 1046.81640625
      },
      "unit": "MB",
      "labels": {
        "container": "/",
        "datatype": "resource",
        "node": "node",
        "resource": "memory"
      }
    },
    {
      "data": {
        "memory": 465.34765625,
        "rss": 80.55859375,
        "workingset": 265.8828125
      },
      "unit": "MB",
      "labels": {
        "container": "runtime",
        "datatype": "resource",
        "node": "node",
        "resource": "memory"
      }
    },
    {
      "data": {
        "memory": 2685.63671875,
        "rss": 1354.3203125,
        "workingset": 2236.28125
      },
      "unit": "MB",
      "labels": {
        "container": "/",
        "datatype": "resource",
        "node": "master",
        "resource": "memory"
      }
    },
    {
      "data": {
        "memory": 218.0078125,
        "rss": 91.25390625,
        "workingset": 184.1171875
      },
      "unit": "MB",
      "labels": {
        "container": "runtime",
        "datatype": "resource",
        "node": "master",
        "resource": "memory"
      }
    },
    {
      "data": {
        "memory": 176.71484375,
        "rss": 88.3671875,
        "workingset": 116.1953125
      },
      "unit": "MB",
      "labels": {
        "container": "kubelet",
        "datatype": "resource",
        "node": "master",
        "resource": "memory"
      }
    }
  ]
}

[Finish:Performance]
Aug 17 19:03:54.224: INFO: CPU usage of containers on node "k8s-master"
:container 5th%  20th% 50th% 70th% 90th% 95th% 99th%
"/"       0.393 0.427 0.462 0.484 0.514 0.521 0.588
"runtime" 0.021 0.024 0.029 0.032 0.038 0.039 0.047
"kubelet" 0.092 0.106 0.115 0.122 0.134 0.136 0.143

CPU usage of containers on node "k8s-node01"
:container 5th%  20th% 50th% 70th% 90th% 95th% 99th%
"/"       0.186 0.197 0.205 0.214 0.225 0.231 0.244
"runtime" 0.014 0.015 0.019 0.022 0.026 0.029 0.033
"kubelet" 0.144 0.153 0.159 0.165 0.172 0.175 0.184

Aug 17 19:03:54.225: INFO: [Result:Performance] {
  "version": "v2",
  "dataItems": [
    {
      "data": {
        "Perc05": 393.406446,
        "Perc20": 426.7217899,
        "Perc50": 461.5875838,
        "Perc70": 484.0040952,
        "Perc90": 513.8386291,
        "Perc95": 520.7412993,
        "Perc99": 587.8728886
      },
      "unit": "mCPU",
      "labels": {
        "container": "/",
        "datatype": "resource",
        "node": "master",
        "resource": "cpu"
      }
    },
    {
      "data": {
        "Perc05": 21.035006636363637,
        "Perc20": 24.1112935,
        "Perc50": 29.207827499999997,
        "Perc70": 32.300952384615385,
        "Perc90": 37.936381,
        "Perc95": 38.602624375,
        "Perc99": 46.62644984615385
      },
      "unit": "mCPU",
      "labels": {
        "container": "runtime",
        "datatype": "resource",
        "node": "master",
        "resource": "cpu"
      }
    },
    {
      "data": {
        "Perc05": 92.46606930769231,
        "Perc20": 106.3948092,
        "Perc50": 115.37916263636363,
        "Perc70": 122.1405735,
        "Perc90": 133.91659089473683,
        "Perc95": 136.260819,
        "Perc99": 143.336937
      },
      "unit": "mCPU",
      "labels": {
        "container": "kubelet",
        "datatype": "resource",
        "node": "master",
        "resource": "cpu"
      }
    },
    {
      "data": {
        "Perc05": 186.45939818181816,
        "Perc20": 197.4615044,
        "Perc50": 205.18356260000002,
        "Perc70": 213.86097139999998,
        "Perc90": 224.99250365,
        "Perc95": 231.2459532,
        "Perc99": 244.0455376
      },
      "unit": "mCPU",
      "labels": {
        "container": "/",
        "datatype": "resource",
        "node": "node",
        "resource": "cpu"
      }
    },
    {
      "data": {
        "Perc05": 13.941302166666668,
        "Perc20": 15.307673181818181,
        "Perc50": 18.788479,
        "Perc70": 21.597330454545453,
        "Perc90": 25.515456923076922,
        "Perc95": 28.6565353,
        "Perc99": 33.326855307692306
      },
      "unit": "mCPU",
      "labels": {
        "container": "runtime",
        "datatype": "resource",
        "node": "node",
        "resource": "cpu"
      }
    },
    {
      "data": {
        "Perc05": 144.11299991666667,
        "Perc20": 152.86178764285717,
        "Perc50": 159.44052923529412,
        "Perc70": 165.019704,
        "Perc90": 172.12953905,
        "Perc95": 175.0731393,
        "Perc99": 184.31500608333334
      },
      "unit": "mCPU",
      "labels": {
        "container": "kubelet",
        "datatype": "resource",
        "node": "node",
        "resource": "cpu"
      }
    }
  ]
}

[Finish:Performance]
Aug 17 19:03:54.227: INFO: CPU usage exceeding limits:
 node k8s-master:
 container "kubelet": expected 50th%!u(MISSING)sage < 0.100; got 0.115
node k8s-node01:
 container "kubelet": expected 50th%!u(MISSING)sage < 0.100; got 0.159
[AfterEach] [k8s.io] [sig-node] Kubelet [Serial] [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-kubelet-perf-vvpbc".
STEP: Found 0 events.
Aug 17 19:03:54.238: INFO: POD                                             NODE        PHASE    GRACE  CONDITIONS
Aug 17 19:03:54.238: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9  k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  }]
Aug 17 19:03:54.238: INFO: coredns-78fcdf6894-xx76v                        k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:19 +0000 UTC  }]
Aug 17 19:03:54.238: INFO: coredns-78fcdf6894-zmpph                        k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 06:49:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 06:49:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 06:49:24 +0000 UTC  }]
Aug 17 19:03:54.238: INFO: etcd-k8s-master                                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  }]
Aug 17 19:03:54.238: INFO: kube-apiserver-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 01:50:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  }]
Aug 17 19:03:54.238: INFO: kube-controller-manager-k8s-master              k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 22:05:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 22:05:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 22:05:28 +0000 UTC  }]
Aug 17 19:03:54.238: INFO: kube-flannel-ds-7df6r                           k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:12:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-17 17:12:31 +0000 UTC  }]
Aug 17 19:03:54.238: INFO: kube-flannel-ds-tllws                           k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:53 +0000 UTC  }]
Aug 17 19:03:54.238: INFO: kube-proxy-hxp7z                                k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:17:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  }]
Aug 17 19:03:54.238: INFO: kube-proxy-zwrl4                                k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  }]
Aug 17 19:03:54.238: INFO: kube-scheduler-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 01:50:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  }]
Aug 17 19:03:54.238: INFO: metrics-server-86bd9d7667-twb2r                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  }]
Aug 17 19:03:54.238: INFO:
Aug 17 19:03:54.241: INFO:
Logging node info for node k8s-master
Aug 17 19:03:54.243: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master,UID:94f19db7-89e3-11e8-b234-fa163e420595,ResourceVersion:3510436,Generation:0,CreationTimestamp:2018-07-17 17:05:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: k8s-master,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"06:0e:73:28:c3:b1"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.108,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143394816 0} {<nil>} 4046284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038537216 0} {<nil>} 3943884Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-08-17 19:03:48 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-08-17 19:03:48 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-08-17 19:03:48 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-08-17 19:03:48 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-08-17 19:03:48 +0000 UTC 2018-07-31 23:04:27 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 192.168.1.108} {Hostname k8s-master}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1db2c06c39a54cd3a93a4e0a44823fd6,SystemUUID:1DB2C06C-39A5-4CD3-A93A-4E0A44823FD6,BootID:d2b66fba-cf4e-4205-b596-3ffb4e579c16,KernelVersion:4.4.0-130-generic,OSImage:Ubuntu 16.04.5 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[golang:1.10] 793901893} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[k8s.gcr.io/etcd-amd64:3.2.18] 218904307} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.1] 186675825} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.0] 186617744} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.1] 155252555} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.0] 155203118} {[k8s.gcr.io/nginx-slim:0.8] 110487599} {[nginx:latest] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.1] 56781436} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.0] 56757023} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[quay.io/k8scsi/csi-attacher:v0.2.0] 45644524} {[k8s.gcr.io/coredns:1.1.3] 45587362} {[quay.io/k8scsi/csi-provisioner:v0.2.1] 45078229} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[quay.io/k8scsi/driver-registrar:v0.2.0] 42385441} {[k8scloudprovider/cinder-provisioner:latest] 29292916} {[quay.io/k8scsi/hostpathplugin:v0.2.0] 17287699} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[busybox:latest] 1162769} {[k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Aug 17 19:03:54.243: INFO:
Logging kubelet events for node k8s-master
Aug 17 19:03:54.246: INFO:
Logging pods the kubelet thinks is on node k8s-master
Aug 17 19:03:54.254: INFO: metrics-server-86bd9d7667-twb2r started at 2018-08-03 08:45:39 +0000 UTC (0+1 container statuses recorded)
Aug 17 19:03:54.254: INFO:      Container metrics-server ready: true, restart count 1
Aug 17 19:03:54.254: INFO: kube-controller-manager-k8s-master started at <nil> (0+0 container statuses recorded)
Aug 17 19:03:54.254: INFO: coredns-78fcdf6894-zmpph started at 2018-08-17 06:49:24 +0000 UTC (0+1 container statuses recorded)
Aug 17 19:03:54.254: INFO:      Container coredns ready: true, restart count 0
Aug 17 19:03:54.254: INFO: kube-scheduler-k8s-master started at <nil> (0+0 container statuses recorded)
Aug 17 19:03:54.254: INFO: kube-flannel-ds-7df6r started at 2018-07-17 17:12:31 +0000 UTC (1+1 container statuses recorded)
Aug 17 19:03:54.254: INFO:      Init container install-cni ready: true, restart count 6
Aug 17 19:03:54.254: INFO:      Container kube-flannel ready: true, restart count 6
Aug 17 19:03:54.254: INFO: kube-proxy-zwrl4 started at 2018-07-31 23:08:37 +0000 UTC (0+1 container statuses recorded)
Aug 17 19:03:54.254: INFO:      Container kube-proxy ready: true, restart count 6
Aug 17 19:03:54.254: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9 started at 2018-08-11 12:39:04 +0000 UTC (0+1 container statuses recorded)
Aug 17 19:03:54.254: INFO:      Container standalone-cinder-provisioner ready: true, restart count 0
Aug 17 19:03:54.254: INFO: kube-apiserver-k8s-master started at <nil> (0+0 container statuses recorded)
Aug 17 19:03:54.254: INFO: etcd-k8s-master started at <nil> (0+0 container statuses recorded)
Aug 17 19:03:54.254: INFO: coredns-78fcdf6894-xx76v started at 2018-08-17 09:12:19 +0000 UTC (0+1 container statuses recorded)
Aug 17 19:03:54.254: INFO:      Container coredns ready: true, restart count 0
Aug 17 19:03:54.277: INFO:
Latency metrics for node k8s-master
Aug 17 19:03:54.277: INFO:
Logging node info for node k8s-node01
Aug 17 19:03:54.280: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-node01,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-node01,UID:980d8d67-9515-11e8-a804-fa163e420595,ResourceVersion:3510431,Generation:0,CreationTimestamp:2018-07-31 23:01:01 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/zone: ,kubernetes.io/hostname: k8s-node01,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"22:57:d2:53:57:f8"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.109,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143394816 0} {<nil>} 4046284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038537216 0} {<nil>} 3943884Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-08-17 19:03:45 +0000 UTC 2018-08-10 00:17:13 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-08-17 19:03:45 +0000 UTC 2018-08-10 00:17:13 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-08-17 19:03:45 +0000 UTC 2018-08-10 00:17:13 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-08-17 19:03:45 +0000 UTC 2018-07-31 23:01:01 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-08-17 19:03:45 +0000 UTC 2018-08-10 00:17:23 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 192.168.1.109} {Hostname k8s-node01}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817a385b9de241668e47cd87cda24f47,SystemUUID:817A385B-9DE2-4166-8E47-CD87CDA24F47,BootID:89bf8417-1b59-4778-b31e-dcda7893ef77,KernelVersion:4.4.0-130-generic,OSImage:Ubuntu 16.04.4 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[humblec/glusterdynamic-provisioner:v1.0] 373281573} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[quay.io/kubernetes_incubator/nfs-provisioner:v1.0.9] 332415371} {[gcr.io/kubernetes-e2e-test-images/volume-nfs:0.8] 247157334} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils-amd64:1.0] 195659796} {[k8s.gcr.io/resource_consumer:beta] 132805424} {[k8s.gcr.io/nginx-slim:0.8] 110487599} {[nginx:latest] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[quay.io/k8scsi/csi-attacher:v0.2.0] 45644524} {[k8s.gcr.io/coredns:1.1.3] 45587362} {[quay.io/k8scsi/csi-provisioner:v0.2.1] 45078229} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[quay.io/k8scsi/driver-registrar:v0.2.0] 42385441} {[k8scloudprovider/cinder-provisioner:latest] 28582964} {[gcr.io/kubernetes-e2e-test-images/nettest-amd64:1.0] 27413498} {[quay.io/k8scsi/hostpathplugin:v0.2.0] 17287699} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/dnsutils-amd64:1.0] 9030162} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver-amd64:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter-amd64:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness-amd64:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/fakegitserver-amd64:1.0] 4608683} {[k8s.gcr.io/k8s-dns-dnsmasq-amd64:1.14.5] 4324973} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester-amd64:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/port-forward-tester-amd64:1.0] 1992230} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user-amd64:1.0] 1450451} {[busybox:latest] 1162769} {[k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Aug 17 19:03:54.280: INFO:
Logging kubelet events for node k8s-node01
Aug 17 19:03:54.283: INFO:
Logging pods the kubelet thinks is on node k8s-node01
Aug 17 19:03:54.294: INFO: kube-proxy-hxp7z started at 2018-07-31 23:08:51 +0000 UTC (0+1 container statuses recorded)
Aug 17 19:03:54.294: INFO:      Container kube-proxy ready: true, restart count 2
Aug 17 19:03:54.294: INFO: kube-flannel-ds-tllws started at 2018-08-17 09:12:53 +0000 UTC (1+1 container statuses recorded)
Aug 17 19:03:54.294: INFO:      Init container install-cni ready: true, restart count 0
Aug 17 19:03:54.294: INFO:      Container kube-flannel ready: true, restart count 0
Aug 17 19:03:54.431: INFO:
Latency metrics for node k8s-node01
STEP: Dumping a list of prepulled images on each node...
Aug 17 19:03:54.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-perf-vvpbc" for this suite.
Aug 17 19:04:00.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 17 19:04:00.591: INFO: namespace: e2e-tests-kubelet-perf-vvpbc, resource: bindings, ignored listing per whitelist
Aug 17 19:04:00.609: INFO: namespace e2e-tests-kubelet-perf-vvpbc deletion completed in 6.158072317s
[AfterEach] [k8s.io] [sig-node] Kubelet [Serial] [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:218
Aug 17 19:04:00.740: INFO: runtime operation error metrics:
node "k8s-master" runtime operation error rate:
operation "list_images": total - 308; error rate - 0.000000; timeout rate - 0.000000
operation "pull_image": total - 0; error rate - NaN; timeout rate - NaN
operation "start_container": total - 0; error rate - NaN; timeout rate - NaN
operation "stop_container": total - 0; error rate - NaN; timeout rate - NaN
operation "remove_container": total - 0; error rate - NaN; timeout rate - NaN
operation "inspect_container": total - 0; error rate - NaN; timeout rate - NaN
operation "info": total - 0; error rate - NaN; timeout rate - NaN
operation "inspect_image": total - 4; error rate - 0.000000; timeout rate - 0.000000
operation "list_containers": total - 4402; error rate - 0.000000; timeout rate - 0.000000
operation "create_container": total - 0; error rate - NaN; timeout rate - NaN
operation "version": total - 362; error rate - 0.000000; timeout rate - 0.000000

node "k8s-node01" runtime operation error rate:
operation "inspect_image": total - 4; error rate - 0.000000; timeout rate - 0.000000
operation "start_container": total - 0; error rate - NaN; timeout rate - NaN
operation "stop_container": total - 0; error rate - NaN; timeout rate - NaN
operation "create_container": total - 0; error rate - NaN; timeout rate - NaN
operation "list_images": total - 308; error rate - 0.000000; timeout rate - 0.000000
operation "inspect_container": total - 0; error rate - NaN; timeout rate - NaN
operation "remove_container": total - 0; error rate - NaN; timeout rate - NaN
operation "version": total - 362; error rate - 0.000000; timeout rate - 0.000000
operation "info": total - 0; error rate - NaN; timeout rate - NaN
operation "list_containers": total - 4418; error rate - 0.000000; timeout rate - 0.000000
operation "pull_image": total - 0; error rate - NaN; timeout rate - NaN

• Failure [1206.861 seconds]
[k8s.io] [sig-node] Kubelet [Serial] [Slow]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:679
  [k8s.io] [sig-node] regular resource usage tracking
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:679
    resource tracking for 0 pods per node [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:270

    Aug 17 19:03:54.227: CPU usage exceeding limits:
     node k8s-master:
     container "kubelet": expected 50th% usage < 0.100; got 0.115
    node k8s-node01:
     container "kubelet": expected 50th% usage < 0.100; got 0.159

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:190
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug 17 19:04:00.741: INFO: Running AfterSuite actions on all node
Aug 17 19:04:00.741: INFO: Running AfterSuite actions on node 1

Summarizing 1 Failure:

[Fail] [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking [It] resource tracking for 0 pods per node
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:190

Ran 1 of 999 Specs in 1206.942 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 998 Skipped --- FAIL: TestE2E (1206.97s)
FAIL

Ginkgo ran 1 suite in 20m7.180263722s
Test Suite Failed
!!! Error in ./hack/ginkgo-e2e.sh:143
  Error in ./hack/ginkgo-e2e.sh:143. '"${ginkgo}" "${ginkgo_args[@]:+${ginkgo_args[@]}}" "${e2e_test}" -- "${auth_config[@]:+${auth_config[@]}}" --ginkgo.flakeAttempts="${FLAKE_ATTEMPTS}" --host="${KUBE_MASTER_URL}" --provider="${KUBERNETES_PROVIDER}" --gce-project="${PROJECT:-}" --gce-zone="${ZONE:-}" --gce-region="${REGION:-}" --gce-multizone="${MULTIZONE:-false}" --gke-cluster="${CLUSTER_NAME:-}" --kube-master="${KUBE_MASTER:-}" --cluster-tag="${CLUSTER_ID:-}" --cloud-config-file="${CLOUD_CONFIG:-}" --repo-root="${KUBE_ROOT}" --node-instance-group="${NODE_INSTANCE_GROUP:-}" --prefix="${KUBE_GCE_INSTANCE_PREFIX:-e2e}" --network="${KUBE_GCE_NETWORK:-${KUBE_GKE_NETWORK:-e2e}}" --node-tag="${NODE_TAG:-}" --master-tag="${MASTER_TAG:-}" --cluster-monitoring-mode="${KUBE_ENABLE_CLUSTER_MONITORING:-standalone}" --prometheus-monitoring="${KUBE_ENABLE_PROMETHEUS_MONITORING:-false}" ${KUBE_CONTAINER_RUNTIME:+"--container-runtime=${KUBE_CONTAINER_RUNTIME}"} ${MASTER_OS_DISTRIBUTION:+"--master-os-distro=${MASTER_OS_DISTRIBUTION}"} ${NODE_OS_DISTRIBUTION:+"--node-os-distro=${NODE_OS_DISTRIBUTION}"} ${NUM_NODES:+"--num-nodes=${NUM_NODES}"} ${E2E_REPORT_DIR:+"--report-dir=${E2E_REPORT_DIR}"} ${E2E_REPORT_PREFIX:+"--report-prefix=${E2E_REPORT_PREFIX}"} "${@:-}"' exited with status 1
Call stack:
  1: ./hack/ginkgo-e2e.sh:143 main(...)
Exiting with status 1
2018/08/17 19:04:00 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=resource\stracking\sfor\s0\spods\sper\snode' finished in 20m7.223721731s
2018/08/17 19:04:00 main.go:309: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=resource\stracking\sfor\s0\spods\sper\snode: exit status 1]
2018/08/17 19:04:00 e2e.go:81: err: exit status 1
exit status 1
oomichi commented 6 years ago

Log of the failing part of the test

    Aug 17 19:03:54.227: CPU usage exceeding limits:
     node k8s-master:
     container "kubelet": expected 50th% usage < 0.100; got 0.115
    node k8s-node01:
     container "kubelet": expected 50th% usage < 0.100; got 0.159

CPU usage exceeded the limits. k8s-master: the kubelet process's 50th percentile usage was expected to stay within 10% (0.100 cores) but was 11.5% (0.115).

    {
      "data": {
        "Perc05": 92.46606930769231,
        "Perc20": 106.3948092,
        "Perc50": 115.37916263636363,
        "Perc70": 122.1405735,
        "Perc90": 133.91659089473683,
        "Perc95": 136.260819,
        "Perc99": 143.336937
      },
      "unit": "mCPU",
      "labels": {
        "container": "kubelet",
        "datatype": "resource",
        "node": "master",
        "resource": "cpu"
      }
    },

k8s-node01: the kubelet process's 50th percentile usage was expected to stay within 10% but was 15.9%.

    {
      "data": {
        "Perc05": 144.11299991666667,
        "Perc20": 152.86178764285717,
        "Perc50": 159.44052923529412,
        "Perc70": 165.019704,
        "Perc90": 172.12953905,
        "Perc95": 175.0731393,
        "Perc99": 184.31500608333334
      },
      "unit": "mCPU",
      "labels": {
        "container": "kubelet",
        "datatype": "resource",
        "node": "node",
        "resource": "cpu"
      }
    }
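
The values above are reported in mCPU (millicores), while the limit check compares fractions of a core, so dividing by 1000 reproduces the numbers in the failure message. A minimal sketch, using the Perc50 values from the JSON above:

package main

import "fmt"

func main() {
	// Perc50 values (mCPU) for the kubelet container, taken from the JSON above.
	masterPerc50mCPU := 115.37916263636363 // labels: node "master" (k8s-master)
	nodePerc50mCPU := 159.44052923529412   // labels: node "node" (k8s-node01)

	limitCores := 0.100 // the test's 50th-percentile CPU limit for the kubelet

	// 1000 mCPU == 1 core, so dividing gives the fraction of a core.
	fmt.Printf("k8s-master kubelet: %.3f cores (limit %.3f)\n", masterPerc50mCPU/1000, limitCores) // 0.115
	fmt.Printf("k8s-node01 kubelet: %.3f cores (limit %.3f)\n", nodePerc50mCPU/1000, limitCores)   // 0.159
}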
oomichi commented 6 years ago

Even just watching with the top command, the kubelet's CPU usage frequently exceeds 10%.

$ top
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 9223 root      20   0  413064 240208  49304 S  11.6  5.9   1611:30 kube-controller
 1002 root      20   0  858956 139220  55800 S   9.9  3.4   1709:00 kubelet
 2687 root      20   0 1010972 886460  60264 S   8.3 21.9   1554:59 kube-apiserver

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1002 root      20   0  858956 139220  55800 S  15.3  3.4   1709:02 kubelet
 2687 root      20   0 1010972 896340  60264 S  13.6 22.2   1555:04 kube-apiserver
 9223 root      20   0  413064 240208  49304 S  13.0  5.9   1611:32 kube-controller
oomichi commented 6 years ago

Investigate why the test assumes usage stays within 10%.

Commit f3f3e965cc993ec23d75f96318fd1ffd3eea59bd changed the 50th percentile limit from 17% to 25%, and 483655a312b6ec36f5a05671e3100ee2a0c39be1 later relaxed the resource limits to 25% as well.

commit 483655a312b6ec36f5a05671e3100ee2a0c39be1
Author: Zhou Fang <zhoufang@google.com>
Date:   Mon Aug 29 18:32:16 2016 -0700

    increase latency and resource limit accroding to test results
oomichi commented 6 years ago

The expected CPU usage limits appear to be tailored specifically to the upstream CI environment, so for now I filed this as k/kubernetes/issues/67621 and am awaiting a response.

oomichi commented 5 years ago

In a cleanly deployed environment the kubelet's CPU usage stays below 2%, so closing this for now.

oomichi commented 5 years ago

Run the test standalone. If it passes there, check whether it is being properly isolated when the e2e tests run in parallel.

Resource usage on node "113-node02":
container cpu(cores) memory_working_set(MB) memory_rss(MB)
"runtime" 0.027      292.58                 91.45
"kubelet" 0.174      146.82                 127.39
"/"       0.242      1176.33                317.88

Resource usage on node "113-master":
container cpu(cores) memory_working_set(MB) memory_rss(MB)
"runtime" 0.006      123.10                 48.44
"kubelet" 0.021      77.26                  52.07
"/"       0.105      2579.45                1657.04

Resource usage on node "113-node01":
container cpu(cores) memory_working_set(MB) memory_rss(MB)
"runtime" 0.004      314.70                 100.87
"kubelet" 0.061      278.76                 229.60
"/"       0.071      1375.39                395.93

It failed even when run standalone.

oomichi commented 5 years ago

Filtered it out as follows; the --ginkgo.skip regex now also matches "regular resource usage tracking", so this test is excluded from the run.

go run hack/e2e.go --provider=skeleton --test --test_args="--ginkgo.skip=.*(Feature)|(NFS)|(StatefulSet)|\[Disruptive\]|Dashboard|LoadBalancer\sservice|regular\sresource\susage\stracking.* --master-os-distro=ubuntu --node-os-distro=ubuntu" --check-version-skew=false