Closed · FengXingYuXin closed this issue 2 years ago
We checked the code of clusterHealthCheck and found the bug in the function thresholdAdjustedClusterStatus, as follows: the update of storedData.resultRun must happen before the threshold check.
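For illustration, here is a minimal, self-contained sketch of the ordering problem described above. It is not the actual kubefed source: the names resultRun and the threshold comparison follow the issue, but the clusterData layout, the helper updateResultRun, the string-based status, and the single threshold parameter (the caller would pick FailureThreshold or SuccessThreshold depending on whether the new result is Ready) are simplifying assumptions.

    package main

    import "fmt"

    // clusterData is a simplified stand-in for kubefed's per-cluster bookkeeping
    // (assumption: the real type tracks richer status objects, not strings).
    type clusterData struct {
        lastResult     string // most recent raw probe result ("Ready" / "NotReady")
        acceptedStatus string // status actually reported for the cluster
        resultRun      int64  // length of the current run of identical raw results
    }

    // updateResultRun extends the run counter if the probe result repeats,
    // otherwise restarts it at 1 for the new result.
    func updateResultRun(newResult string, stored *clusterData) {
        if newResult == stored.lastResult {
            stored.resultRun++
        } else {
            stored.resultRun = 1
        }
        stored.lastResult = newResult
    }

    // Buggy order: the threshold check reads resultRun before it has been
    // updated for the new result, so after a long healthy streak the counter
    // is already >= threshold and a single failed probe flips the status.
    func adjustedStatusBuggy(newResult string, stored *clusterData, threshold int64) string {
        if stored.resultRun >= threshold { // check against a stale counter
            stored.acceptedStatus = newResult
        }
        updateResultRun(newResult, stored) // updated too late
        return stored.acceptedStatus
    }

    // Fixed order: update resultRun first (in front of the check), so a status
    // change is only accepted after `threshold` consecutive identical results.
    func adjustedStatusFixed(newResult string, stored *clusterData, threshold int64) string {
        updateResultRun(newResult, stored)
        if stored.resultRun >= threshold {
            stored.acceptedStatus = newResult
        }
        return stored.acceptedStatus
    }

    func main() {
        // A cluster that has been Ready for a long time; FailureThreshold = 100.
        stored := &clusterData{lastResult: "Ready", acceptedStatus: "Ready", resultRun: 500}
        fmt.Println(adjustedStatusBuggy("NotReady", stored, 100)) // NotReady after one failure (the reported bug)

        stored = &clusterData{lastResult: "Ready", acceptedStatus: "Ready", resultRun: 500}
        fmt.Println(adjustedStatusFixed("NotReady", stored, 100)) // still Ready; 100 consecutive failures are needed
    }

Under the fixed ordering, the configuration from the report (FailureThreshold 100, SuccessThreshold 1) would tolerate transient probe failures and only report notReady after a sustained run of failures.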
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
What happened: regarding the struct below:

    type ClusterHealthCheckConfig struct {
        Period           time.Duration
        FailureThreshold int64
        SuccessThreshold int64
        Timeout          time.Duration
    }

we set Period to 10s, FailureThreshold to 100, SuccessThreshold to 1, and Timeout to 60s.
But when kubefed's check of the child cluster fails just once, the status of the child cluster becomes notReady.

What you expected to happen: the status of the child cluster in kubefed should become notReady only after kubefed's checks of the child cluster have failed 10 times in a row.
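For reference, a small sketch of how the settings quoted above map onto that struct; the variable name cfg and the printout are illustrative, not taken from kubefed.

    package main

    import (
        "fmt"
        "time"
    )

    // ClusterHealthCheckConfig mirrors the struct quoted in the report.
    type ClusterHealthCheckConfig struct {
        Period           time.Duration
        FailureThreshold int64
        SuccessThreshold int64
        Timeout          time.Duration
    }

    func main() {
        // Probe every 10s, require 100 consecutive failures before reporting
        // notReady, 1 success to report Ready again, and time each probe out
        // after 60s -- the values described in the report.
        cfg := ClusterHealthCheckConfig{
            Period:           10 * time.Second,
            FailureThreshold: 100,
            SuccessThreshold: 1,
            Timeout:          60 * time.Second,
        }
        fmt.Printf("%+v\n", cfg)
    }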
How to reproduce it (as minimally and precisely as possible):
Version information:
1) kubefed version: latest
2) k8s version: probably all versions; we use 1.11
3) kubefed contacts the child k8s apiserver through a domain name (kubefed -> dns -> lvs -> nginx -> child k8s apiserver)
Then bring down one LVS node (e.g. by rebooting it).
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version):

/kind bug