What happened:
I am trying to install the latest development version on my cluster, but for some reason the controller manager pod keeps complaining with the following error:
2024-11-05T20:06:22Z INFO controller-runtime.webhook Starting webhook server
2024-11-05T20:06:22Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-leaderworkerset-x-k8s-io-v1-leaderworkerset"}
2024-11-05T20:06:22Z INFO controller-runtime.builder Registering a validating webhook {"GVK": "leaderworkerset.x-k8s.io/v1, Kind=LeaderWorkerSet", "path": "/validate-leaderworkerset-x-k8s-io-v1-leaderworkerset"}
2024-11-05T20:06:22Z INFO controller-runtime.webhook Registering webhook {"path": "/validate-leaderworkerset-x-k8s-io-v1-leaderworkerset"}
2024-11-05T20:06:22Z INFO controller-runtime.certwatcher Updated current TLS certificate
2024-11-05T20:06:22Z INFO controller-runtime.builder Registering a mutating webhook {"GVK": "/v1, Kind=Pod", "path": "/mutate--v1-pod"}
2024-11-05T20:06:22Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate--v1-pod"}
2024-11-05T20:06:22Z INFO controller-runtime.webhook Serving webhook server {"host": "", "port": 9443}
2024-11-05T20:06:22Z INFO controller-runtime.builder Registering a validating webhook {"GVK": "/v1, Kind=Pod", "path": "/validate--v1-pod"}
2024-11-05T20:06:22Z INFO controller-runtime.certwatcher Starting certificate watcher
2024-11-05T20:06:22Z INFO controller-runtime.webhook Registering webhook {"path": "/validate--v1-pod"}
2024-11-05T20:06:22Z INFO Starting workers {"controller": "leaderworkerset", "controllerGroup": "leaderworkerset.x-k8s.io", "controllerKind": "LeaderWorkerSet", "worker count": 1}
2024/11/05 20:06:30 http: TLS handshake error from 10.129.0.22:41300: remote error: tls: bad certificate
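From what I can tell, a "tls: bad certificate" handshake error from the API server usually means the caBundle in the webhook configurations does not match the certificate the webhook server is actually serving. Here is roughly how I would compare the two; the webhook configuration, namespace, and secret names below are my assumptions based on the default config/default manifests, so adjust as needed:

# Assumed names: lws-validating-webhook-configuration, lws-system, lws-webhook-server-cert
$ kubectl get validatingwebhookconfiguration lws-validating-webhook-configuration \
    -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d | openssl x509 -noout -subject -enddate
$ kubectl -n lws-system get secret lws-webhook-server-cert \
    -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -subject -enddate

If the two certificates differ, the CA was probably not (re)injected into the webhook configurations after the serving certificate rotated.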
Not sure what might be going wrong; any pointers are highly appreciated. Thanks.
What you expected to happen:
The manager pod to run without any errors.
How to reproduce it (as minimally and precisely as possible):
Simply executed kubectl apply --server-side -k github.com/kubernetes-sigs/lws/config/default?ref=main to get the latest development version on my cluster.
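After applying, the deployment and webhook registrations can be inspected with commands along these lines (the lws-system namespace and lws-controller-manager deployment name are assumptions based on the default manifests):

$ kubectl -n lws-system get pods
$ kubectl -n lws-system logs deploy/lws-controller-manager
$ kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep lws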
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version):
$ kubectl version
Client Version: v1.32.0-alpha.0.1043+91859c4cd2fba4
Kustomize Version: v5.4.2
Server Version: v1.31.2
LWS version (use git describe --tags --dirty --always): v0.4.0-32-gcdfbe81
Cloud provider or hardware configuration:
OS (e.g: cat /etc/os-release):
Kernel (e.g. uname -a):
Install tools:
Others: