kubernetes-retired / cluster-api-bootstrap-provider-kubeadm

LEGACY REPO. NEW CODE IS https://github.com/kubernetes-sigs/cluster-api/tree/master/bootstrap/kubeadm
Apache License 2.0

Control plane init unlocking is happening in the wrong spot and is too noisy #233

Closed: chuckha closed this issue 4 years ago

chuckha commented 4 years ago

/kind bug

The control plane init locker's unlock call happens in EVERY reconcile pass, when it should only happen at the very beginning of the control plane initialization code path.

Because of this, the lock is unnecessarily noisy. I think we can start by moving the unlocking to the initialization code path only, and if the logs continue to be noisy we can remove the Info logs entirely, since they give no helpful information. For instance, we don't care that the control plane lock could not be found, yet we log it anyway. A sketch of what I mean is below.
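Roughly the shape I have in mind, as a sketch only (the reconciler, field, and helper names here are illustrative, not the actual CABPK code):

```go
// Sketch only: illustrative names and signatures, not the real reconciler.
package bootstrap

import "context"

// InitLocker is an assumed interface over the ConfigMap-backed init lock.
type InitLocker interface {
	Lock(ctx context.Context, namespace, clusterName string) bool
	Unlock(ctx context.Context, namespace, clusterName string) bool
}

type KubeadmConfigReconciler struct {
	InitLocker InitLocker
}

// reconcile shows the intended shape: the lock is acquired and released only
// on the control plane init path, so join paths never hit the
// "Control plane init lock not found" log.
func (r *KubeadmConfigReconciler) reconcile(ctx context.Context, namespace, clusterName string, initsControlPlane bool) error {
	if initsControlPlane {
		if !r.InitLocker.Lock(ctx, namespace, clusterName) {
			// Another machine already holds the lock; requeue quietly.
			return nil
		}
		defer r.InitLocker.Unlock(ctx, namespace, clusterName)
		return handleControlPlaneInit(ctx, clusterName)
	}
	// Workers and additional control plane nodes joining an existing cluster
	// never touch the lock, so there is nothing to unlock (or to log about).
	return handleJoin(ctx, clusterName)
}

func handleControlPlaneInit(ctx context.Context, clusterName string) error { return nil }
func handleJoin(ctx context.Context, clusterName string) error             { return nil }
```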

/good-first-issue
/help
/priority important-soon
/milestone v0.1.x

chuckha commented 4 years ago

cc @lokicity

chuckha commented 4 years ago

It appears #232 is already open

/lifecycle active
/assign @accepting

k8s-ci-robot commented 4 years ago

@chuckha: GitHub didn't allow me to assign the following users: accepting.

Note that only kubernetes-sigs members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide

In response to [this](https://github.com/kubernetes-sigs/cluster-api-bootstrap-provider-kubeadm/issues/233#issuecomment-532244241):

> It appears #232 is already open
>
> /lifecycle active
> /assign @accepting

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

Lokicity commented 4 years ago

@chuckha @accepting Thank you for looking at this issue. In my case, it looks like the reconciliation logic is stuck at

`I0917 06:18:06.486453 1 control_plane_init_mutex.go:112] init-locker "level"=0 "msg"="Control plane init lock not found, it may have been released already" "cluster-name"="capi-quickstart-bar" "configmap-name"="capi-quickstart-bar-lock" "namespace"="bar"`

and the worker node cannot be created. I am also unable to find this configmap in any namespace. What is this configmap and lock used for?

chuckha commented 4 years ago

@Lokicity That log line can be ignored and should not be the thing preventing workers from joining or being created. Are you able to look at the cloud-init logs for the control plane? Perhaps it is not being created correctly, or it is unable to join nodes for some other reason.
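For background on what the lock is, here is a rough sketch of the idea (not the exact implementation; types and signatures are illustrative): the locker creates a ConfigMap named `<cluster-name>-lock` in the cluster's namespace (matching the `capi-quickstart-bar-lock` in your log) so that only one machine runs `kubeadm init`, and deletes it when init is done. A "not found" on unlock is therefore harmless: nothing currently holds the lock.

```go
// Sketch of the idea behind the ConfigMap-based init lock; names are
// illustrative, not the exact CABPK code in control_plane_init_mutex.go.
package locking

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type ControlPlaneInitMutex struct {
	client client.Client
}

// Lock tries to create a ConfigMap named "<cluster-name>-lock"; only the
// machine that successfully creates it proceeds with kubeadm init.
func (m *ControlPlaneInitMutex) Lock(ctx context.Context, namespace, clusterName string) bool {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      fmt.Sprintf("%s-lock", clusterName),
			Namespace: namespace,
		},
	}
	err := m.client.Create(ctx, cm)
	if apierrors.IsAlreadyExists(err) {
		return false // another machine is already initializing the control plane
	}
	return err == nil
}

// Unlock deletes the ConfigMap once init has finished. A NotFound error here
// is harmless: the lock was never taken or was already released, which is
// exactly the situation the noisy Info log describes.
func (m *ControlPlaneInitMutex) Unlock(ctx context.Context, namespace, clusterName string) bool {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      fmt.Sprintf("%s-lock", clusterName),
			Namespace: namespace,
		},
	}
	err := m.client.Delete(ctx, cm)
	return err == nil || apierrors.IsNotFound(err)
}
```

If that is roughly right, not finding the ConfigMap in any namespace just means no machine currently holds the init lock, which matches the log message and should not block worker creation.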

chuckha commented 4 years ago

This got fixed by #232

/close

k8s-ci-robot commented 4 years ago

@chuckha: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/cluster-api-bootstrap-provider-kubeadm/issues/233#issuecomment-540784865):

> This got fixed by #232
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.