kubernetes-sigs / cluster-api-provider-nested

Cluster API Provider for Nested Clusters
Apache License 2.0

Webhook self-signed certificate issues for Virtual Cluster #215

Closed crazywill closed 2 years ago

crazywill commented 3 years ago

In #145 and #161, we use a self-signed certificate for the ValidatingWebhookConfiguration. However, when vc-manager has multiple replicas, every vc-manager generates a new ValidatingWebhookConfiguration and deletes the old one. That causes the webhook to raise a certificate error for every vc-manager pod except the latest one:

https://virtualcluster-webhook-service.kube-system.svc:9443/validate-tenancy-x-k8s-io-v1alpha1-virtualcluster?timeout=30s": x509: certificate signed by unknown authority
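A minimal sketch in plain Go (not code from vc-manager; the names are invented) of why this happens: a serving certificate signed by one replica's self-generated CA fails verification against the caBundle that a different replica last wrote into the ValidatingWebhookConfiguration, which is exactly the x509 error above.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newSelfSignedCA stands in for the CA that each vc-manager replica generates on startup.
func newSelfSignedCA(cn string) (*x509.Certificate, *ecdsa.PrivateKey) {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: cn},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	ca, _ := x509.ParseCertificate(der)
	return ca, key
}

func main() {
	caA, keyA := newSelfSignedCA("vc-manager-replica-a") // older replica's CA
	caB, _ := newSelfSignedCA("vc-manager-replica-b")    // latest replica's CA

	// Serving certificate presented by the older replica, issued by its own CA.
	const svc = "virtualcluster-webhook-service.kube-system.svc"
	servingKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: svc},
		DNSNames:     []string{svc},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(24 * time.Hour),
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caA, &servingKey.PublicKey, keyA)
	serving, _ := x509.ParseCertificate(der)

	// The apiserver only trusts the caBundle most recently written into the
	// ValidatingWebhookConfiguration -- here, the latest replica's CA.
	roots := x509.NewCertPool()
	roots.AddCert(caB)

	_, err := serving.Verify(x509.VerifyOptions{Roots: roots, DNSName: svc})
	fmt.Println(err) // x509: certificate signed by unknown authority
}
```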

christopherhein commented 3 years ago

/kind bug

vincent-pli commented 3 years ago

@crazywill Just curious, why does vc-manager have multiple replicas? It's a controller; if there is more than one replica, every one of them will try to handle the same object in its reconcile, and that's not acceptable.

If you want to handle heavy load in the controller, you should increase the number of concurrent reconcile workers rather than adding more replicas.
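For reference, a sketch of that suggestion using standard controller-runtime options; the reconciler name and the tenancy API import path below are assumptions rather than code from this repo, but `MaxConcurrentReconciles` is the usual knob for running more reconcile workers inside a single replica:

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller"

	// Assumed import path for the VirtualCluster API types.
	tenancyv1alpha1 "sigs.k8s.io/cluster-api-provider-nested/virtualcluster/pkg/apis/tenancy/v1alpha1"
)

// VirtualClusterReconciler is a placeholder name for the vc-manager reconciler.
type VirtualClusterReconciler struct{}

func (r *VirtualClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Reconciliation logic elided.
	return ctrl.Result{}, nil
}

func (r *VirtualClusterReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&tenancyv1alpha1.VirtualCluster{}).
		WithOptions(controller.Options{
			// Several reconcile workers in one pod instead of several replicas.
			MaxConcurrentReconciles: 4,
		}).
		Complete(r)
}
```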

crazywill commented 3 years ago

@vincent-pli Thank you for your reply. As a controller, vc-manager runs with leader election, so it works well with multiple replicas. But as a webhook, every replica uses its own caBundle, so only the latest one can handle requests.
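For context, a minimal sketch using standard controller-runtime manager options (not the actual vc-manager main.go; the lock name is hypothetical) of why both statements can hold at once: leader election only gates the controllers, while the webhook server runs in every replica.

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:   true,
		LeaderElectionID: "vc-manager-leader-election", // hypothetical lock name
	})
	if err != nil {
		panic(err)
	}
	// Controllers added to mgr reconcile only on the elected leader;
	// webhook handlers registered with mgr serve traffic on every replica,
	// each presenting its own self-signed certificate.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```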

vincent-pli commented 3 years ago

@crazywill I'm afraid you are right. I'll try to fix it, but I don't want to change too much. It seems controller-runtime has considered this case, see here: https://github.com/kubernetes-sigs/controller-runtime/issues/356
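One common way to avoid per-replica certificates (whether or not it is what that controller-runtime discussion settled on) is to provision the serving certificate once, e.g. with cert-manager or a one-shot job that writes a Secret, mount it into every pod, and point the manager at it. A minimal sketch, assuming the older `Port`/`CertDir` manager options and an invented mount path:

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Port: 9443,
		// tls.crt/tls.key mounted from a shared Secret instead of being
		// generated by each replica at startup.
		CertDir: "/var/run/virtualcluster-webhook/serving-certs", // hypothetical path
	})
	if err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```

With a single shared certificate, the caBundle in the ValidatingWebhookConfiguration can be set once and stays valid for every replica.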

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/close

k8s-ci-robot commented 2 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-nested/issues/215#issuecomment-1037188229):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues and PRs according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.