kubewarden / helm-charts

Helm charts for the Kubewarden project
Apache License 2.0

Unable to deploy kubewarden-defaults: failed to verify certificate: x509: certificate signed by unknown authority #453

Closed: mueller-ma closed this issue 1 month ago

mueller-ma commented 1 month ago

Current Behavior

I tried the commands from https://docs.kubewarden.io/quick-start to install Kubewarden, but the deployment of kubewarden-defaults fails. I already had cert-manager installed, and as you can see at the end of the output, both the issuer and the certificate have been created:

$ helm repo add kubewarden https://charts.kubewarden.io
"kubewarden" already exists with the same configuration, skipping
$ helm repo update kubewarden
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kubewarden" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm upgrade --install --wait -n kubewarden --create-namespace kubewarden-crds kubewarden/kubewarden-crds
Release "kubewarden-crds" has been upgraded. Happy Helming!
NAME: kubewarden-crds
LAST DEPLOYED: Tue May 28 08:37:03 2024
NAMESPACE: kubewarden
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Kubewarden CRDs now available: `clusteradmissionpolicies.policies.kubewarden.io`,
`admissionpolicies.policies.kubewarden.io`, `policyservers.policies.kubewarden.io`.

Policy report CRDs now available: `policyreports.wgpolicyk8s.io`,
`clusterpolicyreports.wgpolicyk8s.io`.
$ helm upgrade --install --wait -n kubewarden kubewarden-controller kubewarden/kubewarden-controller
Release "kubewarden-controller" has been upgraded. Happy Helming!
NAME: kubewarden-controller
LAST DEPLOYED: Tue May 28 08:37:20 2024
NAMESPACE: kubewarden
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
You can now start defining admission policies by using the cluster-wide
`clusteradmissionpolicies.policies.kubewarden.io` or the namespaced
`admissionpolicies.policies.kubewarden.io` resources.

For more information check out https://docs.kubewarden.io.
$ helm upgrade --install --wait -n kubewarden kubewarden-defaults kubewarden/kubewarden-defaults
Error: UPGRADE FAILED: failed to create resource: Internal error occurred: failed calling webhook "mpolicyserver.kb.io": failed to call webhook: Post "https://kubewarden-controller-webhook-service.kubewarden.svc:443/mutate-policies-kubewarden-io-v1-policyserver?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority

$ kubectl get issuers.cert-manager.io
NAME                                      READY   AGE
kubewarden-controller-selfsigned-issuer   True    17h
$ kubectl get certificate
NAME                                 READY   SECRET                AGE
kubewarden-controller-serving-cert   True    webhook-server-cert   17h

Expected Behavior

Installation works

Steps To Reproduce

  1. Execute the commands from the quick start guide

Environment

- OS (Nodes): Ubuntu 22.04
- Kubernetes: v1.27.11+rke2r1

Anything else?

No response

jvanz commented 1 month ago

Hi @mueller-ma ! Thanks for the report!

I tried to reproduce your issue using an RKE2 v1.28.10+rke2r1 cluster:

/var/lib/rancher/rke2/bin/kubectl --kubeconfig ./rke2-kubeconfig.yaml version
Client Version: v1.28.10+rke2r1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.10+rke2r1

Unfortunately, I was not able to reproduce the issue:

helm --kubeconfig ./rke2-kubeconfig.yaml upgrade --install --wait -n kubewarden kubewarden-defaults kubewarden/kubewarden-defaults
Release "kubewarden-defaults" has been upgraded. Happy Helming!
NAME: kubewarden-defaults
LAST DEPLOYED: Mon Jun  3 16:51:10 2024
NAMESPACE: kubewarden
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
You now have a `PolicyServer` named `default` running in your cluster.
It is ready to run any `clusteradmissionpolicies.policies.kubewarden.io` or
`admissionpolicies.policies.kubewarden.io` resources.

For more information check out https://docs.kubewarden.io/quick-start.
Discover ready to use policies at https://artifacthub.io/packages/search?kind=13.

/var/lib/rancher/rke2/bin/kubectl --kubeconfig ./rke2-kubeconfig.yaml get pods -n kubewarden
NAME                                     READY   STATUS    RESTARTS   AGE
kubewarden-controller-577857d487-8lqh7   1/1     Running   0          9m37s
policy-server-default-559f5b45fd-qxn44   1/1     Running   0          8m35s

I have some questions to understand your situation better. In the commands that you've shared with us, I can see that you use `upgrade --install`, and the successful Helm commands report revision 2, which tells me that you were installing for the second time. Is that correct? I've tried reinstalling kubewarden-defaults as well, but everything has gone fine so far.

Furthermore, I notice that your issuer and certificate are 17h old. Assuming that you ran those commands right after the failed installation, could the certificate and issuer be leftovers from a previous installation that are making the current one fail? Can you also confirm your cert-manager version?

mueller-ma commented 1 month ago

I did the initial installation and got the certificate error. Then I ran the commands again 17h later to be able to copy the error into this issue. I have already tried reinstalling Kubewarden from scratch, but that didn't help.

cert-manager is at version v1.14.5.

In the meantime, the Kubernetes cluster was updated to v1.28.10+rke2r1, but the error is the same. I tried a fresh installation:

$ helm install --wait -n kubewarden kubewarden-defaults kubewarden/kubewarden-defaults
Error: INSTALLATION FAILED: 1 error occurred:
        * Internal error occurred: failed calling webhook "mpolicyserver.kb.io": failed to call webhook: Post "https://kubewarden-controller-webhook-service.kubewarden.svc:443/mutate-policies-kubewarden-io-v1-policyserver?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority
jvanz commented 1 month ago

Thanks for the feedback @mueller-ma!

Let me share what I have in mind and ask you for some more info. Considering the error message that you shared with us: during the installation of the kubewarden-defaults Helm chart, the API server sends a request to the controller webhook (mpolicyserver.kb.io) to validate/mutate the PolicyServer resource being applied to the cluster, and that call fails for some reason. Therefore, I would like to check two main things: 1. whether cert-manager properly configured the certificates in the webhook configurations; 2. whether some configuration in the RKE2 cluster is interfering.

To start digging into that, please share the output of the following commands:

kubectl get secrets -n kubewarden webhook-server-cert -o yaml
kubectl get validatingwebhookconfigurations  kubewarden-controller-validating-webhook-configuration -o yaml
kubectl get mutatingwebhookconfigurations kubewarden-controller-mutating-webhook-configuration -o yaml
kubectl get pods -n kube-system -o yaml <api server pod name>

Furthermore, can you try to reinstall and collect the logs from the API server and the Kubewarden controller? I would like more context on what's going on during the installation.

Another question: do you have any customization in your RKE2 installation? Did you set any options in the config file or on the command line during installation? I'm asking because I would like an environment as close to yours as possible.
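As a quick way to check point 1 programmatically: this is a sketch (the sample JSON below is a hypothetical stand-in, not real cluster output) that scans the JSON form of a webhook configuration, e.g. saved via `kubectl get mutatingwebhookconfigurations kubewarden-controller-mutating-webhook-configuration -o json`, and flags every webhook whose clientConfig has no caBundle:

```python
import json

def webhooks_missing_ca_bundle(config: dict) -> list:
    """Return the names of webhooks whose clientConfig carries no caBundle."""
    return [
        wh["name"]
        for wh in config.get("webhooks", [])
        if not wh.get("clientConfig", {}).get("caBundle")
    ]

# Minimal stand-in for the output of `kubectl get ... -o json`.
config = json.loads("""
{
  "kind": "MutatingWebhookConfiguration",
  "webhooks": [
    {
      "name": "mpolicyserver.kb.io",
      "clientConfig": {
        "service": {"name": "kubewarden-controller-webhook-service"}
      }
    }
  ]
}
""")
print(webhooks_missing_ca_bundle(config))  # ['mpolicyserver.kb.io']
```

If the list is non-empty for webhooks annotated with cert-manager.io/inject-ca-from, the cainjector never populated them.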

mueller-ma commented 1 month ago

I can share some insights about this cluster:

# cat /etc/rancher/rke2/config.yaml

server: https://<redacted>:9345
token: <redacted>
data-dir: /var/lib/rancher/rke2
cni: canal
tls-san:
  - cluster.local
  - <redacted>
snapshotter: overlayfs
node-name: <redacted>

Here's the output of the commands you requested. I replaced the name of the node with <node-name>.

````
$ kubectl get secrets -n kubewarden webhook-server-cert -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUROakNDQWg2Z0F3SUJBZ0lSQVAzdksyeEZqMmxMdVMra1ZUaEhCanN3RFFZSktvWklodmNOQVFFTEJRQXcKQURBZUZ3MHlOREEyTURRd05qSXdNakJhRncweU5EQTVNREl3TmpJd01qQmFNQUF3Z2dFaU1BMEdDU3FHU0liMwpEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUM2N3l0L21FQXFhZzlBbUt0VVFBRWxISmtRT3NtZVRqY0llTnNSCnB4Y25UYXJaOURpSjhCRXRFNjRiblYrOGlVenFwbUNLcGlqcFIwSFBVa2pxcG1BL0pQNVFnSThwNWVTSWJ1bTcKS2tRU1hRN2ZCWHlscHJITXk1dnJsK0pHZFc3Znc5aUNNM2dvRW9qRG04OXJlMHZLU3RBTEt3MnhEWVNQTlRJdQplQ3NQUERVVW50dVBxVWtoQmdTVWt1aWJhM3hHZlJXN0hITTVVTERWc3YyemZIalBSci9nQXVmRU03dHZvQU9MCm9XRGpFaHdnNHppT1BmeEdwTXpjSVZzUnhHQzZsVEh3eDExdkdYQkZpVmg5OHJNZXVGMG43NDVPa0k2WVRJaVgKNE1vRTkrOXcyeTBNSGIyZ3lpbnAzUGlxNzhiaElnK2Z4Wjc5UndwbEFNL0Irc0pSQWdNQkFBR2pnYW93Z2FjdwpEZ1lEVlIwUEFRSC9CQVFEQWdXZ01Bd0dBMVVkRXdFQi93UUNNQUF3Z1lZR0ExVWRFUUVCL3dSOE1IcUNOR3QxClltVjNZWEprWlc0dFkyOXVkSEp2Ykd4bGNpMTNaV0pvYjI5ckxYTmxjblpwWTJVdWEzVmlaWGRoY21SbGJpNXoKZG1PQ1FtdDFZbVYzWVhKa1pXNHRZMjl1ZEhKdmJHeGxjaTEzWldKb2IyOXJMWE5sY25acFkyVXVhM1ZpWlhkaApjbVJsYmk1emRtTXVZMngxYzNSbGNpNXNiMk5oYkRBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQXNPeStPWVVMCktHUnkyMncrNWJHUUZuK1NaT0RkdUVIbi8xUVlGNVdiREVFRFd5NUxDTU1TNERBTUdIZWFBbmdqN2FLKzE2MG8KakQxc3ROdDlFQlRlaUE3UHBCU0JJUkdkeTJNaHR3cm9ZN1NQS3h1dmFtMUdCT1p2Ri9YVW42YWNiMGJWaTZJbgowL25vUWNKVjRNcHBNTFgwcVBmNmlFcjVkVVEwTmc0YTJLN3RiSmZiclF4SHVXa2VWaDVOMWEySlZVekhSRFRlCndKdWdJdGZCNnQ4RjJtRHZsbXRnRWtjbFZSVWRiVFVvalpBcjExNk5VWVl5U2V3Q0w4NFZVbVRyOHN0NEhSYmIKL3lVcUsxMmVsNnQyM0p6S3I0bzBWUklrV0VxRThvNENJanR6VDRXbUNTaS8vREpyemRrYU94WTY2RStSUDkyOQpLSWc1UEQ1aVcxVWxXdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUROakNDQWg2Z0F3SUJBZ0lSQVAzdksyeEZqMmxMdVMra1ZUaEhCanN3RFFZSktvWklodmNOQVFFTEJRQXcKQURBZUZ3MHlOREEyTURRd05qSXdNakJhRncweU5EQTVNREl3TmpJd01qQmFNQUF3Z2dFaU1BMEdDU3FHU0liMwpEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUM2N3l0L21FQXFhZzlBbUt0VVFBRWxISmtRT3NtZVRqY0llTnNSCnB4Y25UYXJaOURpSjhCRXRFNjRiblYrOGlVenFwbUNLcGlqcFIwSFBVa2pxcG1BL0pQNVFnSThwNWVTSWJ1bTcKS2tRU1hRN2ZCWHlscHJITXk1dnJsK0pHZFc3Znc5aUNNM2dvRW9qRG04OXJlMHZLU3RBTEt3MnhEWVNQTlRJdQplQ3NQUERVVW50dVBxVWtoQmdTVWt1aWJhM3hHZlJXN0hITTVVTERWc3YyemZIalBSci9nQXVmRU03dHZvQU9MCm9XRGpFaHdnNHppT1BmeEdwTXpjSVZzUnhHQzZsVEh3eDExdkdYQkZpVmg5OHJNZXVGMG43NDVPa0k2WVRJaVgKNE1vRTkrOXcyeTBNSGIyZ3lpbnAzUGlxNzhiaElnK2Z4Wjc5UndwbEFNL0Irc0pSQWdNQkFBR2pnYW93Z2FjdwpEZ1lEVlIwUEFRSC9CQVFEQWdXZ01Bd0dBMVVkRXdFQi93UUNNQUF3Z1lZR0ExVWRFUUVCL3dSOE1IcUNOR3QxClltVjNZWEprWlc0dFkyOXVkSEp2Ykd4bGNpMTNaV0pvYjI5ckxYTmxjblpwWTJVdWEzVmlaWGRoY21SbGJpNXoKZG1PQ1FtdDFZbVYzWVhKa1pXNHRZMjl1ZEhKdmJHeGxjaTEzWldKb2IyOXJMWE5sY25acFkyVXVhM1ZpWlhkaApjbVJsYmk1emRtTXVZMngxYzNSbGNpNXNiMk5oYkRBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQXNPeStPWVVMCktHUnkyMncrNWJHUUZuK1NaT0RkdUVIbi8xUVlGNVdiREVFRFd5NUxDTU1TNERBTUdIZWFBbmdqN2FLKzE2MG8KakQxc3ROdDlFQlRlaUE3UHBCU0JJUkdkeTJNaHR3cm9ZN1NQS3h1dmFtMUdCT1p2Ri9YVW42YWNiMGJWaTZJbgowL25vUWNKVjRNcHBNTFgwcVBmNmlFcjVkVVEwTmc0YTJLN3RiSmZiclF4SHVXa2VWaDVOMWEySlZVekhSRFRlCndKdWdJdGZCNnQ4RjJtRHZsbXRnRWtjbFZSVWRiVFVvalpBcjExNk5VWVl5U2V3Q0w4NFZVbVRyOHN0NEhSYmIKL3lVcUsxMmVsNnQyM0p6S3I0bzBWUklrV0VxRThvNENJanR6VDRXbUNTaS8vREpyemRrYU94WTY2RStSUDkyOQpLSWc1UEQ1aVcxVWxXdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBdXU4cmY1aEFLbW9QUUppclZFQUJKUnlaRURySm5rNDNDSGpiRWFjWEowMnEyZlE0CmlmQVJMUk91RzUxZnZJbE02cVpnaXFZbzZVZEJ6MUpJNnFaZ1B5VCtVSUNQS2VYa2lHN3B1eXBFRWwwTzN3VjgKcGFheHpNdWI2NWZpUm5WdTM4UFlnak40S0JLSXc1dlBhM3RMeWtyUUN5c05zUTJFanpVeUxuZ3JEencxRko3YgpqNmxKSVFZRWxKTG9tMnQ4Um4wVnV4eHpPVkN3MWJMOXMzeDR6MGEvNEFMbnhETzdiNkFEaTZGZzR4SWNJT000CmpqMzhScVRNM0NGYkVjUmd1cFV4OE1kZGJ4bHdSWWxZZmZLekhyaGRKKytPVHBDT21FeUlsK0RLQlBmdmNOc3QKREIyOW9Nb3A2ZHo0cXUvRzRTSVBuOFdlL1VjS1pRRFB3ZnJDVVFJREFRQUJBb0lCQUdYY00yaW9qclpONnBlNQpXUXBrZ2ZzMTlSWEo0dGtYSjVlL095Z0lVMjZBUE12Yzd2NEN5V2sxb3hhN0QxTE53aExPckNhQTJpUWJSdm53CmpYa0hSY1RDcEswN0VFZTFWRzBmZXM4WS9kUy96bjJxSUx5bTg1VnprVGUwSUlGaU5oTktSV3pWSFBGQkFETU8KY056Ulo0QUllZ3JMMy84TkxhRlhURXZVQVNxZ0paQVdvMGxVYjhLUzR1K1M0UHovNmlsaTNYeHNCZmMyVWRkWApUV0d2SkJTRlpEK01md1R3Qm1haGl0QlNEcGlxY3NtTVZOKzlXTFdWQXBCUUFGL0FvdUhCbHNIR2ZNTjZMUnJPCmYrcDlJQzNrdmFhSXR6cHdBa2gvTXdQVGE3d3JrTmRNUGt6aHZrUXA4Nm9BU0RjRGhpNllQNDRBRzh0MURhZmkKQ1RPV094RUNnWUVBeHNTUWhRY1lndHErYWVRMkpka0dwbUF5ZGNEbEFZdUdobTZjYjVvWEZEN2ZhakxkMXVSYwpXSFVmUWZ2aXhwbXJaY2l3RVpMQmVTa0M5T1lvemRjQ1lIb2x6WmpCVUx0YkIwQzg1ZGhFd0VxMU1ZWmxkSEpCCkJvTzg1U1JGai9WZmp6NGRzNGNXVWhXTHRpMTNjTFVLTjJYTm9WVWxqMEhtSWR5Tjh3VTJLRFVDZ1lFQThNSlcKVzkzL0ZjK01PQkQ5Y3cxSkd5TzJYdXQxdHlZT1oxcGlXOFNrK1M0VEx2VjJ6SVNpV2JnVjhuSVJOUzZUWmdGaApHNGErUUxMWmtJWFhnM21Jb0NyVytiVVppWThWYlNtUDllOGhHQXczR24wZ2tOTEoreitWZkFSQW9CRVR0STlaCitDcmVValFRZDJSc1Z2dHM5ZkZMVDZ3ZzYybFZscFRTOFhVZkRTMENnWUJrZ2ZLZUFiZUlPM243YTVWaHovc0gKMkM3TDBrMDZXY1lkWmdNZWY2bFo2R3pxYzJ3dmhHdVpveWU2SXRkS0cxeEs3STd6WStVSEVoRFhxeVpJNTRiaApLQUxEa3BGMTlEY1VWTXp2NEVycmZSdGdQcGhBcUtGdTNPQ0FjYlhuRkdsTXNsa3NkWXQ0MkVJOFRZTk83NHlKCjlLVmxCZnduRTJoK0NOdVNYamxEWFFLQmdIb2pMS3BZT1pMNEFudGk2eURWeVpPU0QyK2g1Y3J1N0htMEdaZlMKYjVyVnEvZXpvUHZxQVc2Z2U4bk40anJua1BFN20rYlorV1JiRnhKQlErNjMxZjdqSE1IN0JLU0xTT1JqSkZ3dwpYc3FUVDlVSlMxOE5BRmlNamlvbkFoM3g2OXc2cVByRHpKdEpQRjFGUGN6MnFmVXUzdlRoTHFZZWZzUHdaZjhHCldxVlZBb0dBQ1JxNzB2ZHo0cWNvSGZYbmFDWjVWODBsRTFMV3Z3Rkw1TGprTVFYa3pWNlgrV2pUT2tMYUl6Z2cKRk1BaXZlb2IyN1JPT3ZvYWJON0g0bFlBdlphcWh4RXNBWm5TMDg4SVZ2TGY1Q0hZVDhpUlN2ZmZoRzZ3aEQ2SQpjNTRqaVBiaEwvbHQ4V1FuRkc2ajgvMDhLV1lJUmlwWlpGbVYxMms2TTljNm5xTXNPQmM9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
kind: Secret
metadata:
  annotations:
    cert-manager.io/alt-names: kubewarden-controller-webhook-service.kubewarden.svc,kubewarden-controller-webhook-service.kubewarden.svc.cluster.local
    cert-manager.io/certificate-name: kubewarden-controller-serving-cert
    cert-manager.io/common-name: ""
    cert-manager.io/ip-sans: ""
    cert-manager.io/issuer-group: ""
    cert-manager.io/issuer-kind: Issuer
    cert-manager.io/issuer-name: kubewarden-controller-selfsigned-issuer
    cert-manager.io/uri-sans: ""
  creationTimestamp: "2024-06-04T06:20:20Z"
  labels:
    controller.cert-manager.io/fao: "true"
  name: webhook-server-cert
  namespace: kubewarden
  resourceVersion: "82782623"

$ kubectl get validatingwebhookconfigurations kubewarden-controller-validating-webhook-configuration -o yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  annotations:
    cert-manager.io/inject-ca-from: kubewarden/kubewarden-controller-serving-cert
    meta.helm.sh/release-name: kubewarden-controller
    meta.helm.sh/release-namespace: kubewarden
  creationTimestamp: "2024-06-04T06:20:20Z"
  generation: 1
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: kubewarden-controller
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubewarden-controller
    app.kubernetes.io/part-of: kubewarden
    app.kubernetes.io/version: v1.12.0
    helm.sh/chart: kubewarden-controller-2.0.11
  name: kubewarden-controller-validating-webhook-configuration
  resourceVersion: "82782592"
  uid: 33af28ca-a51c-4a65-8df6-c53add6773d5
webhooks:
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    service:
      name: kubewarden-controller-webhook-service
      namespace: kubewarden
      path: /validate-policies-kubewarden-io-v1-clusteradmissionpolicy
      port: 443
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: vclusteradmissionpolicy.kb.io
  namespaceSelector: {}
  objectSelector: {}
  rules:
  - apiGroups:
    - policies.kubewarden.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - clusteradmissionpolicies
    scope: '*'
  sideEffects: None
  timeoutSeconds: 10
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    service:
      name: kubewarden-controller-webhook-service
      namespace: kubewarden
      path: /validate-policies-kubewarden-io-v1-admissionpolicy
      port: 443
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: vadmissionpolicy.kb.io
  namespaceSelector: {}
  objectSelector: {}
  rules:
  - apiGroups:
    - policies.kubewarden.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - admissionpolicies
    scope: '*'
  sideEffects: None
  timeoutSeconds: 10
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: kubewarden-controller-webhook-service
      namespace: kubewarden
      path: /validate-policies-kubewarden-io-v1-policyserver
      port: 443
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: vpolicyserver.kb.io
  namespaceSelector: {}
  objectSelector: {}
  rules:
  - apiGroups:
    - policies.kubewarden.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - policyservers
    scope: '*'
  sideEffects: None
  timeoutSeconds: 10

$ kubectl get mutatingwebhookconfigurations kubewarden-controller-mutating-webhook-configuration -o yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  annotations:
    cert-manager.io/inject-ca-from: kubewarden/kubewarden-controller-serving-cert
    meta.helm.sh/release-name: kubewarden-controller
    meta.helm.sh/release-namespace: kubewarden
  creationTimestamp: "2024-06-04T06:20:20Z"
  generation: 1
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: kubewarden-controller
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubewarden-controller
    app.kubernetes.io/part-of: kubewarden
    app.kubernetes.io/version: v1.12.0
    helm.sh/chart: kubewarden-controller-2.0.11
  name: kubewarden-controller-mutating-webhook-configuration
  resourceVersion: "82782588"
  uid: 1739ef72-fffa-4b07-a4ae-9aee3bf77b54
webhooks:
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    service:
      name: kubewarden-controller-webhook-service
      namespace: kubewarden
      path: /mutate-policies-kubewarden-io-v1-clusteradmissionpolicy
      port: 443
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: mclusteradmissionpolicy.kb.io
  namespaceSelector: {}
  objectSelector: {}
  reinvocationPolicy: Never
  rules:
  - apiGroups:
    - policies.kubewarden.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - clusteradmissionpolicies
    scope: '*'
  sideEffects: None
  timeoutSeconds: 10
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    service:
      name: kubewarden-controller-webhook-service
      namespace: kubewarden
      path: /mutate-policies-kubewarden-io-v1-policyserver
      port: 443
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: mpolicyserver.kb.io
  namespaceSelector: {}
  objectSelector: {}
  reinvocationPolicy: Never
  rules:
  - apiGroups:
    - policies.kubewarden.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - policyservers
    scope: '*'
  sideEffects: None
  timeoutSeconds: 10
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    service:
      name: kubewarden-controller-webhook-service
      namespace: kubewarden
      path: /mutate-policies-kubewarden-io-v1-admissionpolicy
      port: 443
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: madmissionpolicy.kb.io
  namespaceSelector: {}
  objectSelector: {}
  reinvocationPolicy: Never
  rules:
  - apiGroups:
    - policies.kubewarden.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - admissionpolicies
    scope: '*'
  sideEffects: None
  timeoutSeconds: 10

$ kubectl get pods -n kube-system -o yaml kube-apiserver-
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/config.hash: 92a3ef86b8caca706e3f8ed2c2c59194
    kubernetes.io/config.mirror: 92a3ef86b8caca706e3f8ed2c2c59194
    kubernetes.io/config.seen: "2024-06-03T10:51:08.189371284+02:00"
    kubernetes.io/config.source: file
  creationTimestamp: "2024-06-03T08:51:30Z"
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver-
  namespace: kube-system
  ownerReferences:
  - apiVersion: v1
    controller: true
    kind: Node
    name:
    uid: 82fa0c75-af33-42fd-b563-cde9a0723dfb
  resourceVersion: "82845684"
  uid: a1558a9c-1a6f-4bfb-be86-9431f39e9a95
spec:
  containers:
  - args:
    - --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml
    - --allow-privileged=true
    - --anonymous-auth=false
    - --api-audiences=https://kubernetes.default.svc.cluster.local,rke2
    - --authorization-mode=Node,RBAC
    - --bind-address=0.0.0.0
    - --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs
    - --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt
    - --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml
    - --enable-admission-plugins=NodeRestriction
    - --enable-aggregator-routing=true
    - --enable-bootstrap-token-auth=true
    - --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json
    - --encryption-provider-config-automatic-reload=true
    - --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt
    - --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt
    - --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt
    - --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt
    - --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --profiling=false
    - --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt
    - --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key
    - --requestheader-allowed-names=system:auth-proxy
    - --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key
    - --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key
    - --service-cluster-ip-range=10.43.0.0/16
    - --service-node-port-range=30000-32767
    - --storage-backend=etcd3
    - --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt
    - --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
    - --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
    command:
    - kube-apiserver
    env:
    - name: FILE_HASH
      value: c85a256c529016125f92131ec1dd3b9d05726a92f108f63b14c5778eb2233042
    - name: NO_PROXY
      value: .svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
    image: index.docker.io/rancher/hardened-kubernetes:v1.28.10-rke2r1-build20240514
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - kubectl
        - get
        - --server=https://localhost:6443/
        - --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt
        - --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key
        - --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt
        - --raw=/livez
      failureThreshold: 8
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      exec:
        command:
        - kubectl
        - get
        - --server=https://localhost:6443/
        - --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt
        - --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key
        - --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt
        - --raw=/readyz
      failureThreshold: 3
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
        memory: 1Gi
    securityContext:
      privileged: false
    startupProbe:
      exec:
        command:
        - kubectl
        - get
        - --server=https://localhost:6443/
        - --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt
        - --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key
        - --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt
        - --raw=/livez
      failureThreshold: 24
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: dir0
    - mountPath: /etc/ca-certificates
      name: dir1
    - mountPath: /usr/local/share/ca-certificates
      name: dir2
    - mountPath: /usr/share/ca-certificates
      name: dir3
    - mountPath: /var/lib/rancher/rke2/server/cred
      name: dir4
    - mountPath: /var/lib/rancher/rke2/server/db/etcd/name
      name: file0
      readOnly: true
    - mountPath: /etc/rancher/rke2/rke2-pss.yaml
      name: file1
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/etc/egress-selector-config.yaml
      name: file2
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/client-auth-proxy.crt
      name: file3
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/client-auth-proxy.key
      name: file4
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/client-ca.crt
      name: file5
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt
      name: file6
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/client-kube-apiserver.key
      name: file7
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/etcd/client.crt
      name: file8
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/etcd/client.key
      name: file9
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt
      name: file10
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/request-header-ca.crt
      name: file11
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/server-ca.crt
      name: file12
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/service.current.key
      name: file13
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/service.key
      name: file14
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt
      name: file15
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
      name: file16
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostNetwork: true
  nodeName:
  preemptionPolicy: PreemptLowerPriority
  priority: 2000000000
  priorityClassName: system-cluster-critical
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    operator: Exists
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: dir0
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: dir1
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: dir2
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: dir3
  - hostPath:
      path: /var/lib/rancher/rke2/server/cred
      type: DirectoryOrCreate
    name: dir4
  - hostPath:
      path: /var/lib/rancher/rke2/server/db/etcd/name
      type: File
    name: file0
  - hostPath:
      path: /etc/rancher/rke2/rke2-pss.yaml
      type: File
    name: file1
  - hostPath:
      path: /var/lib/rancher/rke2/server/etc/egress-selector-config.yaml
      type: File
    name: file2
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/client-auth-proxy.crt
      type: File
    name: file3
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/client-auth-proxy.key
      type: File
    name: file4
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/client-ca.crt
      type: File
    name: file5
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt
      type: File
    name: file6
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/client-kube-apiserver.key
      type: File
    name: file7
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/etcd/client.crt
      type: File
    name: file8
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/etcd/client.key
      type: File
    name: file9
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt
      type: File
    name: file10
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/request-header-ca.crt
      type: File
    name: file11
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/server-ca.crt
      type: File
    name: file12
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/service.current.key
      type: File
    name: file13
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/service.key
      type: File
    name: file14
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt
      type: File
    name: file15
  - hostPath:
      path: /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
      type: File
    name: file16
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-06-03T08:51:50Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-06-04T08:14:04Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-06-04T08:14:04Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2024-06-03T08:51:50Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://f2f2f04e4409b9dfa5533ff380e72f8d969f25461145f5ffc26f6bc5f19fab7a
    image: docker.io/rancher/hardened-kubernetes:v1.28.10-rke2r1-build20240514
    imageID: docker.io/rancher/hardened-kubernetes@sha256:cea191733dec4c40a4c922c863dcf0811efb37e0167f1cccdeec26c9fc799f13
    lastState: {}
    name: kube-apiserver
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2024-06-03T08:51:20Z"
  hostIP: 10.0.4.79
  phase: Running
  podIP: 10.0.4.79
  podIPs:
  - ip: 10.0.4.79
  qosClass: Burstable
  startTime: "2024-06-03T08:51:50Z"
````
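As a side note on reading dumps like the one above: the `data` fields of a Kubernetes Secret are base64-encoded PEM blocks. A minimal sketch for decoding such a field (the value below is a stand-in built in place, not the real certificate from the dump):

```python
import base64

# Stand-in for the base64 `ca.crt` value of a Secret; a real value
# decodes to a complete PEM certificate block.
ca_crt_b64 = base64.b64encode(
    b"-----BEGIN CERTIFICATE-----\nMIID...\n-----END CERTIFICATE-----\n"
).decode()

# Decode the Secret field back to PEM text for inspection.
pem = base64.b64decode(ca_crt_b64).decode()
print(pem.splitlines()[0])  # -----BEGIN CERTIFICATE-----
```

The same decoding applies to `tls.crt` and `tls.key`, and to the `caBundle` field of a webhook configuration.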
mueller-ma commented 1 month ago

Logs from the kube-apiserver-... pod when I try to install kubewarden-defaults again:

W0605 07:53:49.258539       1 dispatcher.go:225] Failed calling webhook, failing closed mpolicyserver.kb.io: failed calling webhook "mpolicyserver.kb.io": failed to call webhook: Post "https://kubewarden-controller-webhook-service.kubewarden.svc:443/mutate-policies-kubewarden-io-v1-policyserver?timeout=10s": tls: failed to verify certificate: x509: certificate signed by unknown authority

and from pods/kubewarden-controller-....:

2024/06/05 07:57:54 http: TLS handshake error from 10.42.1.0:43924: remote error: tls: bad certificate

In both cases it's only one line without much information :/

jvanz commented 1 month ago

Thanks @mueller-ma !

Hmm, interesting... it seems that cert-manager is not injecting the caBundle into your webhook configurations. Your webhooks have the annotation cert-manager.io/inject-ca-from: kubewarden/kubewarden-controller-serving-cert, but the caBundle is missing. Take a look at an example from my cluster:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  annotations:
    cert-manager.io/inject-ca-from: kubewarden/kubewarden-controller-serving-cert
    meta.helm.sh/release-name: kubewarden-controller
    meta.helm.sh/release-namespace: kubewarden
  creationTimestamp: "2024-06-04T21:29:10Z"
  generation: 2
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: kubewarden-controller
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubewarden-controller
    app.kubernetes.io/part-of: kubewarden
    app.kubernetes.io/version: v1.12.0
    helm.sh/chart: kubewarden-controller-2.0.11
  name: kubewarden-controller-mutating-webhook-configuration
  resourceVersion: "134591"
  uid: d621ef1a-c0c3-4a7c-a492-a909a3f7ae21
webhooks:
[...]
  - admissionReviewVersions:
    - v1
    - v1beta1
    clientConfig:
      caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUROakNDQWg2Z0F3SUJBZ0lSQUlzcWNHbGI5cW9xT3A0V3cyaTZWWW93RFFZSktvWklodmNOQVFFTEJRQXcKQURBZUZ3MHlOREEyTURNeE9UUXlNek5hRncweU5EQTVNREV4T1RReU16TmFNQUF3Z2dFaU1BMEdDU3FHU0liMwpEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNydmdZSTc4bnMzWE1kclc1bU5mSldReEpMKzRlcnlMcUNISm5mCjBUNHVpVk9nNVdQZnpHWEZWWGxBcUZDUytPM0ZZR3NYSVAwN3JRM0pWMlJLaGE1dTVHT0hzN1lEY3ZoajVLVGwKbmE3WGo0YjFLakdWRlUzZzY1VGJZV1dWek50QkxTR29vSDV0UTVYVDdoNFUwZzR3VmV3RldCRFNUMlhNa0NLaApDYVV0UzRYVWtoTWhCbUNyQ0I0K3lXcngyckN3bGc5SE40UjdZekdOK3ZFTnVsaXUwcEdJSHdKUUdDUVM0R1lOCjl6VVN5VDBHSVVXb2cxWU1GTk13NTZGVHp1S3JlMUg4dkVma1ZzcTJTU2tNWEs0WDlHOU1ub0JQVHlqelRHeloKV2UxdDFRQ1lvR2Uvc0FKKzNaWmZxR2x5eDIzRjNxY1FibnJ6bHRHQWQ2RkxjWUhYQWdNQkFBR2pnYW93Z2FjdwpEZ1lEVlIwUEFRSC9CQVFEQWdXZ01Bd0dBMVVkRXdFQi93UUNNQUF3Z1lZR0ExVWRFUUVCL3dSOE1IcUNOR3QxClltVjNZWEprWlc0dFkyOXVkSEp2Ykd4bGNpMTNaV0pvYjI5ckxYTmxjblpwWTJVdWEzVmlaWGRoY21SbGJpNXoKZG1PQ1FtdDFZbVYzWVhKa1pXNHRZMjl1ZEhKdmJHeGxjaTEzWldKb2IyOXJMWE5sY25acFkyVXVhM1ZpWlhkaApjbVJsYmk1emRtTXVZMngxYzNSbGNpNXNiMk5oYkRBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQVVVdjMzYTNwCjM0ZkxUZzB2L05lS1l2RUxhL2hjM0xlVkdZZG1qTjRmcnlnb2p0SVRrYlJnOS8rSlFrbUl0Tmh3UXBDOEpKbmUKSkpkRTU0RmZqampuWFU5bThwWHRDNUhsY1kxNU1LYVhScG51bHNVdm5VSmRBdG9ManFTUjVZUjZ1ZDZDaGhsbQowYlJzUG9nVXlGRTdZV3hRUEh3WjV4RENsTGZ0cEoxUTY2VEIyaUwrcHg2akR3Yi9yeHBYeWI4aHFIRmNpM1huCm5ZZlJkVnR4S1pqMEVCaXpyc3E4Q1g1a2IzK0toYVFmMnlDd3NIdVdqSXZPUDVVek9PSFVlcURneERHRzhNYkEKa0E2VTEzLzFyanE0ck9aTm8yTUkyUWM0bjNNdnNoY052TTVoWHF0aG1YMmdMcHROT3FNZDVKVldXcU4ycEpHdgp4K1FURHJRREx1NzdzZz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
      service:
        name: kubewarden-controller-webhook-service
        namespace: kubewarden
        path: /mutate-policies-kubewarden-io-v1-policyserver
        port: 443

Note that the certificate should be in the kubewarden namespace, but in the issue description I cannot see the namespace of the certificate. Can you double-check that?
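The expected end state can be expressed as a small check: after a successful injection, the webhook's clientConfig.caBundle holds the same CA that appears in the serving secret's ca.crt. A sketch with stand-in values (in practice both strings would come from kubectl output):

```python
import base64

def injection_ok(ca_crt_b64: str, ca_bundle_b64) -> bool:
    """True when the cainjector copied the serving secret's CA into the
    webhook's clientConfig.caBundle."""
    if not ca_bundle_b64:
        # caBundle missing entirely, as in this issue.
        return False
    return base64.b64decode(ca_bundle_b64) == base64.b64decode(ca_crt_b64)

# Stand-in values; in practice take `ca.crt` from the webhook-server-cert
# Secret and `caBundle` from the webhook configuration.
secret_ca = base64.b64encode(b"fake-ca-pem").decode()
print(injection_ok(secret_ca, secret_ca))  # True
print(injection_ok(secret_ca, None))       # False
```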

Another question: is the cainjector enabled in your cert-manager installation? It could have been disabled when the Helm chart was installed. Maybe you can share the values used for the cert-manager installation so I can replicate it here. I'm assuming that you installed cert-manager using Helm commands, am I right? Or are you using the Helm CRDs available in RKE2?

If the cainjector is running in the cluster, can you see any errors in its logs? Maybe share those logs here as well.

mueller-ma commented 1 month ago

I found the issue: I had applied the best practices for cert-manager, which include https://cert-manager.io/docs/installation/best-practice/#memory:

cainjector:
  extraArgs:
  - --namespace=cert-manager
  - --enable-certificates-data-source=false

Removing the extraArgs fixed the missing caBundle. The installation works now, thank you for your help :)
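For anyone landing here with the same symptom, the fix amounts to dropping those two cainjector flags from the cert-manager Helm values (a sketch; the flag names are the ones quoted above):

```yaml
# values.yaml for the cert-manager chart
cainjector:
  extraArgs: []
  # Previously:
  # - --namespace=cert-manager
  # - --enable-certificates-data-source=false
```

In particular, --enable-certificates-data-source=false disables the Certificate data source that the cert-manager.io/inject-ca-from annotation relies on, so the caBundle was never injected into Kubewarden's webhook configurations.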

jvanz commented 1 month ago

Great! I'm glad I could help you discover the issue! 🥳

By the way, if you are using Kubewarden in production, consider adding your organization to the adopters file. :)