Closed: 0x4c6565 closed this issue 8 months ago
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@0x4c6565 You may propose a PR to discuss this with the community
I've run into these issues myself when performing upgrades @floryut. Shall I put in a PR to switch to force: false as you've suggested?
Any ideas why it was forced in the first place?
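For reference, the change under discussion would amount to passing force: false through to the kube module when applying the manifest. A minimal sketch of what the task could look like, assuming the module exposes (or is extended to expose) a force option; the field values below are illustrative, not copied from the actual task in roles/kubernetes-apps/ansible/tasks/main.yml:

```yaml
# Hypothetical sketch only -- assumes the kube module accepts a "force"
# option; names and paths here are illustrative.
- name: Kubernetes Apps | Apply CoreDNS manifest without forced replacement
  kube:
    name: coredns
    kubectl: "{{ bin_dir }}/kubectl"
    filename: "{{ kube_config_dir }}/coredns.yml"
    state: latest
    force: false  # avoid non-graceful pod deletion; let the Deployment roll
```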
No clue why it was done like that in the first place 🤷
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
/reopen
@0x4c6565: Reopened this issue.
I've now added a WIP PR for this issue
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
When the CoreDNS deployment is applied to the cluster, the --force flag is used, resulting in non-graceful termination of pods rather than a rolling update. This causes significant DNS disruptions in our clusters.

We can see that the deployment is applied using the custom kube module with a state of latest:
https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubernetes-apps/ansible/tasks/main.yml#L39-L46

The latest state calls the replace function with no arguments, resulting in the default force value of true:
https://github.com/kubernetes-sigs/kubespray/blob/master/library/kube.py#L186-L191
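The mechanism can be illustrated with a short sketch (a hypothetical stand-in, not the actual kube.py code): because force defaults to True in the function signature, every call site that omits the argument, as the latest state does, silently opts into forced replacement.

```python
# Simplified sketch of the pattern described above -- NOT the actual
# kube.py source. A replace() whose "force" parameter defaults to True
# means every caller that passes no arguments gets a forced,
# non-graceful replacement.

def replace(manifest, force=True):
    """Hypothetical stand-in for the kube module's replace()."""
    cmd = ["kubectl", "replace"]
    if force:
        # --force deletes and recreates the resource instead of
        # letting the Deployment perform a rolling update of its pods.
        cmd.append("--force")
    cmd += ["--filename", manifest]
    return " ".join(cmd)

print(replace("coredns.yml"))               # forced replacement
print(replace("coredns.yml", force=False))  # proposed behaviour
```

Flipping the default (or threading a force option through from the task) would leave existing callers that explicitly want forced replacement unaffected while letting CoreDNS roll gracefully.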
Environment:
- Cloud provider or hardware configuration: Bare metal VMs
- OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
- Version of Ansible (ansible --version): 2.8.11
- Version of Python (python --version): 3.6.6
- Kubespray version (commit) (git rev-parse --short HEAD):
- Network plugin used: Calico
- Command used to invoke ansible: ansible-playbook cluster.yml -t apps