oomichi / try-kubernetes


[Fail] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController [It] Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability #59

Closed: oomichi closed this issue 5 years ago

oomichi commented 5 years ago

Only the ReplicationController test is failing. Upstream this has already been fixed (deflaked) by https://github.com/kubernetes/kubernetes/commit/cb6a8a25644729b6b28db084482eaa33c23abd71, so backport that commit.

oomichi commented 5 years ago

When the test is run on its own, it passes.

$ go run hack/e2e.go --provider=skeleton --test --test_args="--ginkgo.focus=ReplicationController\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand --master-os-distro=ubuntu --node-os-distro=ubuntu" --check-version-skew=false
...
Ran 1 of 998 Specs in 858.461 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 997 Skipped PASS

Ginkgo ran 1 suite in 14m18.732228793s
Test Suite Passed
2018/11/07 02:16:05 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=ReplicationController\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand --master-os-distro=ubuntu --node-os-distro=ubuntu' finished in 14m18.771152754s
2018/11/07 02:16:05 e2e.go:83: Done
oomichi commented 5 years ago

Reproduced it on the second standalone run. The test failed because the Pod count became 4 even though 3 were expected.

Nov  7 02:20:15.868: INFO: expecting there to be 3 replicas (are: 3)
Nov  7 02:20:25.868: INFO: expecting there to be 3 replicas (are: 3)
Nov  7 02:20:35.868: INFO: expecting there to be 3 replicas (are: 3)
Nov  7 02:20:35.894: INFO: RC rc: sending request to consume 0 MB
Nov  7 02:20:35.894: INFO: ConsumeMem URL: {https   192.168.1.115:6443 /api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-7nwxh/services/rc-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Nov  7 02:20:35.966: INFO: RC rc: sending request to consume 0 of custom metric QPS
Nov  7 02:20:35.966: INFO: ConsumeCustomMetric URL: {https   192.168.1.115:6443 /api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-7nwxh/services/rc-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Nov  7 02:20:36.485: INFO: RC rc: sending request to consume 250 millicores
Nov  7 02:20:36.485: INFO: ConsumeCPU URL: {https   192.168.1.115:6443 /api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-7nwxh/services/rc-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=250&requestSizeMillicores=100 }
Nov  7 02:20:45.868: INFO: expecting there to be 3 replicas (are: 3)
Nov  7 02:20:55.868: INFO: expecting there to be 3 replicas (are: 3)
Nov  7 02:21:05.869: INFO: expecting there to be 3 replicas (are: 3)
Nov  7 02:21:05.899: INFO: RC rc: sending request to consume 0 MB
Nov  7 02:21:05.899: INFO: ConsumeMem URL: {https   192.168.1.115:6443 /api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-7nwxh/services/rc-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Nov  7 02:21:05.970: INFO: RC rc: sending request to consume 0 of custom metric QPS
Nov  7 02:21:05.970: INFO: ConsumeCustomMetric URL: {https   192.168.1.115:6443 /api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-7nwxh/services/rc-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Nov  7 02:21:06.580: INFO: RC rc: sending request to consume 250 millicores
Nov  7 02:21:06.580: INFO: ConsumeCPU URL: {https   192.168.1.115:6443 /api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-7nwxh/services/rc-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=250&requestSizeMillicores=100 }
Nov  7 02:21:15.872: INFO: expecting there to be 3 replicas (are: 3)
Nov  7 02:21:25.868: INFO: expecting there to be 3 replicas (are: 3)
Nov  7 02:21:35.868: INFO: expecting there to be 3 replicas (are: 3)
Nov  7 02:21:35.904: INFO: RC rc: sending request to consume 0 MB
Nov  7 02:21:35.904: INFO: ConsumeMem URL: {https   192.168.1.115:6443 /api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-7nwxh/services/rc-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Nov  7 02:21:35.975: INFO: RC rc: sending request to consume 0 of custom metric QPS
Nov  7 02:21:35.975: INFO: ConsumeCustomMetric URL: {https   192.168.1.115:6443 /api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-7nwxh/services/rc-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Nov  7 02:21:36.634: INFO: RC rc: sending request to consume 250 millicores
Nov  7 02:21:36.635: INFO: ConsumeCPU URL: {https   192.168.1.115:6443 /api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-7nwxh/services/rc-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=250&requestSizeMillicores=100 }
Nov  7 02:21:45.868: INFO: expecting there to be 3 replicas (are: 4)
Nov  7 02:21:45.868: INFO: Unexpected error occurred: number of replicas changed unexpectedly
STEP: Removing consuming RC rc
Nov  7 02:21:45.872: INFO: RC rc: stopping metric consumer
Nov  7 02:21:45.872: INFO: RC rc: stopping CPU consumer
...
• Failure [365.309 seconds]
[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/framework.go:22
  [sig-autoscaling] [Serial] [Slow] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/framework.go:22
    Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:61

    Expected error:
        <*errors.errorString | 0xc4203715d0>: {
            s: "number of replicas changed unexpectedly",
        }
        number of replicas changed unexpectedly
    not to have occurred

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:125
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSNov  7 02:22:15.376: INFO: Running AfterSuite actions on all node
Nov  7 02:22:15.376: INFO: Running AfterSuite actions on node 1

Summarizing 1 Failure:

[Fail] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController [It] Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:125

Ran 1 of 998 Specs in 365.378 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 997 Skipped --- FAIL: TestE2E (365.41s)
FAIL

Ginkgo ran 1 suite in 6m5.640052941s
Test Suite Failed
oomichi commented 5 years ago

The expected count of 3 is already reached at 02:20:15.868, yet the test keeps waiting for more than a minute, until 02:21:35.868 right before the failure, without moving on to the next step (scaling out to 5)? → That behavior is correct: the purpose of this phase is to wait for a fixed period (until the timeout) and confirm the replica count stays at the expected value. → checkStability is set to true only for the ReplicationController case; the Deployment and ReplicaSet cases do not set it (a toy sketch of this wiring follows the snippet below).

test/e2e/autoscaling/horizontal_pod_autoscaling.go

124         if scaleTest.firstScaleStasis > 0 {
125                 rc.EnsureDesiredReplicas(int(scaleTest.firstScale), scaleTest.firstScaleStasis)
126         }
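For context, the stability check is gated on firstScaleStasis being non-zero, and that value in turn comes from the caller's checkStability flag. A toy, self-contained sketch of that wiring (the 10-minute value is illustrative, not taken from the source):

package main

import (
	"fmt"
	"time"
)

// firstScaleStasis returns how long the test should verify that the replica
// count stays at firstScale. Zero means the stability check is skipped, which
// is why only the ReplicationController variant (checkStability=true) ever
// reaches EnsureDesiredReplicas.
func firstScaleStasis(checkStability bool) time.Duration {
	if checkStability {
		return 10 * time.Minute // illustrative duration only
	}
	return 0
}

func main() {
	fmt.Println("ReplicationController:", firstScaleStasis(true))
	fmt.Println("Deployment/ReplicaSet:", firstScaleStasis(false))
}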

The actual implementation is the function below. The comment at line 380 says "Expected number of replicas found. Continue polling until timeout.", and line 385 logs that the number of replicas was stable, so this is evidently a test that verifies the replica count stays at the expected value for a fixed period (a standalone illustration of this polling pattern follows the function).

372 func (rc *ResourceConsumer) EnsureDesiredReplicas(desiredReplicas int, duration time.Duration) {
373         interval := 10 * time.Second
374         err := wait.PollImmediate(interval, duration, func() (bool, error) {
375                 replicas := rc.GetReplicas()
376                 framework.Logf("expecting there to be %d replicas (are: %d)", desiredReplicas, replicas)
377                 if replicas != desiredReplicas {
378                         return false, fmt.Errorf("number of replicas changed unexpectedly")
379                 } else {
380                         return false, nil // Expected number of replicas found. Continue polling until timeout.
381                 }
382         })
383         // The call above always returns an error, but if it is timeout, it's OK (condition satisfied all the time).
384         if err == wait.ErrWaitTimeout {
385                 framework.Logf("Number of replicas was stable over %v", duration)
386                 return
387         }
388         framework.ExpectNoErrorWithOffset(1, err)
389 }
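The inverted use of wait.PollImmediate here is easy to misread: the condition function never returns true, so hitting the timeout is the success case. A minimal, self-contained sketch of the same pattern, assuming k8s.io/apimachinery is available; getReplicas is a hypothetical stand-in for rc.GetReplicas():

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// getReplicas stands in for rc.GetReplicas() in the real e2e suite.
func getReplicas() int { return 3 }

// ensureStable mirrors EnsureDesiredReplicas above: an early mismatch returns
// an error, a match keeps polling, and only the timeout (meaning the count
// never deviated for the whole duration) counts as success.
func ensureStable(desired int, duration time.Duration) error {
	err := wait.PollImmediate(10*time.Second, duration, func() (bool, error) {
		replicas := getReplicas()
		fmt.Printf("expecting there to be %d replicas (are: %d)\n", desired, replicas)
		if replicas != desired {
			return false, fmt.Errorf("number of replicas changed unexpectedly")
		}
		return false, nil // keep polling until the timeout expires
	})
	if err == wait.ErrWaitTimeout {
		fmt.Printf("Number of replicas was stable over %v\n", duration)
		return nil
	}
	return err
}

func main() {
	if err := ensureStable(3, time.Minute); err != nil {
		fmt.Println("failed:", err)
	}
}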
oomichi commented 5 years ago

So the real problem is that the replica count does not stay stable during the test. Next, investigate what triggered the change to 4. Looking at the CPU load: the count was still stable after the load request sent at 02:21:06.580, and changed to 4 after the one sent at 02:21:36.634, an interval of 30.054 seconds (a rough HPA calculation follows the excerpt below). The CPU-load parameters were:

Nov  7 02:21:06.580: INFO: RC rc: sending request to consume 250 millicores
Nov  7 02:21:06.580: INFO: ConsumeCPU URL: {https   192.168.1.115:6443 /api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-7nwxh/services/rc-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=250&requestSizeMillicores=100 }
Nov  7 02:21:15.872: INFO: expecting there to be 3 replicas (are: 3)
Nov  7 02:21:25.868: INFO: expecting there to be 3 replicas (are: 3)
Nov  7 02:21:35.868: INFO: expecting there to be 3 replicas (are: 3)
...
Nov  7 02:21:36.634: INFO: RC rc: sending request to consume 250 millicores
Nov  7 02:21:36.635: INFO: ConsumeCPU URL: {https   192.168.1.115:6443 /api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-7nwxh/services/rc-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=250&requestSizeMillicores=100 }
Nov  7 02:21:45.868: INFO: expecting there to be 3 replicas (are: 4)
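For reference, the HPA controller computes

    desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue)

Assuming this test uses a 500m per-pod CPU request with a 20% utilization target, i.e. 100m per pod (these values are from memory and should be treated as assumptions), 3 pods sharing a 250 millicore load sit at roughly 83m per pod, so ceil(3 × 83/100) = ceil(2.5) = 3 and the count stays put. For the controller to round up to 4, the measured average only has to exceed the 100m target (plus the default ~10% tolerance) for a single metrics sample, which is plausible right after a fresh ConsumeCPU request lands and would match the flip to 4 observed shortly after the 02:21:36.634 request.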
oomichi commented 5 years ago

As a side issue, when such a mismatch occurs I would like to be able to inspect the state of the HPA for debugging.
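I have not re-checked exactly what k/kubernetes/pull/70739 adds, but one way to get that visibility is simply to fetch and log the HPA object at the point the mismatch is detected. A minimal client-go sketch (dumpHPA and its arguments are hypothetical names; the Get signature assumes a recent client-go that takes a context):

package hpadebug

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpHPA fetches the named HorizontalPodAutoscaler and prints the fields
// that explain a scaling decision: current/desired replicas and the current
// CPU utilization the controller is acting on.
func dumpHPA(c kubernetes.Interface, namespace, name string) error {
	hpa, err := c.AutoscalingV1().HorizontalPodAutoscalers(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("HPA %s/%s: currentReplicas=%d desiredReplicas=%d",
		namespace, name, hpa.Status.CurrentReplicas, hpa.Status.DesiredReplicas)
	if hpa.Status.CurrentCPUUtilizationPercentage != nil {
		fmt.Printf(" currentCPUUtilization=%d%%", *hpa.Status.CurrentCPUUtilizationPercentage)
	}
	fmt.Println()
	return nil
}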

oomichi commented 5 years ago

Already addressed by k/kubernetes/pull/70739.

oomichi commented 5 years ago

After backporting it, the test now passes.