Voldemat opened 1 year ago
I've faced the same issue on a k3s cluster running inside a Multipass VM (Ubuntu 22.04).
I was able to fix it by editing the `secrets-reader` ClusterRole resource in `rbac.yaml` like this: I just added a `flowcontrol.apiserver.k8s.io` item inside `apiGroups`.
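A minimal sketch of that edit, assuming the chart's `secrets-reader` ClusterRole originally only granted `get` on secrets — the exact rule set in your copy of `rbac.yaml` may differ:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secrets-reader
rules:
  - apiGroups:
      - ""
      # Workaround described above: let the webhook's service account
      # see API Priority and Fairness objects as well
      - "flowcontrol.apiserver.k8s.io"
    resources:
      - "secrets"
    verbs:
      - "get"
```

Note this widens a single rule rather than adding a dedicated rule for the flowcontrol resources, which is why it reads as a blunt workaround rather than a proper fix.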
I'm not sure whether it's supposed to work like this, so I'd treat it as a temporary workaround; it would be great if someone could explain why this permission is needed.
Thank you for your advice. After editing this ClusterRole, the error logs from the pod were gone, but the problem with creating the `regru-dns` resource still remains.
I0221 15:18:09.976204 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0221 15:18:09.976345 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0221 15:18:09.976209 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0221 15:18:09.976397 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0221 15:18:09.976580 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/tls/tls.crt::/tls/tls.key"
I0221 15:18:09.976611 1 secure_serving.go:266] Serving securely on [::]:443
I0221 15:18:09.976654 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0221 15:18:09.976253 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0221 15:18:09.976690 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0221 15:18:09.976727 1 main.go:86] call function Initialize
I0221 15:18:09.977160 1 apf_controller.go:317] Starting API Priority and Fairness config controller
I0221 15:18:10.077433 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0221 15:18:10.077479 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0221 15:18:10.077453 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I0221 15:18:10.077609 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I can't see any error logs here. Also, I'm not sure what you meant by the `regru-dns` resource; I don't remember any resource with that name, to be honest.
Personally, I've also faced some errors after editing the RBAC rules, like this one:
Failed to watch *v1beta3.PriorityLevelConfiguration: failed to list *v1beta3.PriorityLevelConfiguration: the server could not find the requested resource
But these errors didn't affect anything; my certificate was successfully created after some time. (These errors may also be caused by the k3s distribution in my case, as I'm not using vanilla Kubernetes.)
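The `v1beta3.PriorityLevelConfiguration` error above suggests a version mismatch between the client library and what the API server actually serves. One way to check which flowcontrol API versions your cluster exposes (standard kubectl commands, run against a live cluster):

```shell
# List the versions of the flowcontrol API group served by this cluster
kubectl get --raw /apis/flowcontrol.apiserver.k8s.io

# Or simply list the resources in that group
kubectl api-resources --api-group=flowcontrol.apiserver.k8s.io
```

If `v1beta3` is absent from the output, the "server could not find the requested resource" error is expected regardless of RBAC.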
I'd also advise you to check the `spec.acme.server` field in the ClusterIssuer resource you're creating. Personally, I was using the staging URL for tests (https://acme-staging-v02.api.letsencrypt.org/directory), and with that URL your ACME challenge won't complete. You should try the production URL (https://acme-v02.api.letsencrypt.org/directory) if you want to see your flow fully completed.
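To confirm which ACME server an existing issuer points at, you can query the field directly (the issuer name `qk-issuer` is just the one used later in this thread):

```shell
kubectl get clusterissuer qk-issuer \
  -o jsonpath='{.spec.acme.server}{"\n"}'
```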
I think my error may be related to the ClusterIssuer `solverName`. What `solverName` did you set?
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: qk-issuer
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: vladimirdev635@gmail.com
    privateKeySecretRef:
      name: quickclick.online.cert
    solvers:
      - http01:
          ingress:
            class: nginx
      - dns01:
          webhook:
            config:
              regruPasswordSecretRef:
                name: regru-password
                key: REGRU_PASSWORD
            solverName: regru-dns
            groupName: acme.regru.ru
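After applying the ClusterIssuer, its registration status can be inspected with standard kubectl commands (resource name as in the manifest above):

```shell
# Check whether the issuer is Ready (registered with the ACME server)
kubectl get clusterissuer qk-issuer -o wide

# Inspect status conditions and events for troubleshooting
kubectl describe clusterissuer qk-issuer
```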
Same as you did. But I can see that we have different values for `privateKeySecretRef`.
@Voldemat, hello! What is `quickclick.online.cert`? You need to set the value `cert-manager-letsencrypt-private-key`.
The following RBAC configuration resolved these permission issues.
# I have found the same problem in cert-manager issuer
# https://github.com/vadimkim/cert-manager-webhook-hetzner/pull/37/files
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: regru-webhook-regru-cluster-issuer:flowcontrol-solver
  labels:
    app: regru-cluster-issuer
rules:
  - apiGroups:
      - "flowcontrol.apiserver.k8s.io"
    resources:
      - 'prioritylevelconfigurations'
      - 'flowschemas'
    verbs:
      - 'list'
      - 'watch'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: regru-webhook-regru-cluster-issuer:flowcontrol-solver
  labels:
    app: regru-cluster-issuer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: regru-webhook-regru-cluster-issuer:flowcontrol-solver
subjects:
  - apiGroup: ""
    kind: ServiceAccount
    name: regru-webhook-regru-cluster-issuer
    namespace: cert-manager
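After applying the ClusterRole and ClusterRoleBinding above, you can verify that the webhook's service account actually received the permissions by impersonating it (service account name and namespace as in the binding above):

```shell
# Probe the granted permissions as the webhook's service account
kubectl auth can-i list prioritylevelconfigurations.flowcontrol.apiserver.k8s.io \
  --as=system:serviceaccount:cert-manager:regru-webhook-regru-cluster-issuer
kubectl auth can-i watch flowschemas.flowcontrol.apiserver.k8s.io \
  --as=system:serviceaccount:cert-manager:regru-webhook-regru-cluster-issuer
```

Both commands should print `yes` once the binding is in place.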
The cluster was provisioned using the Yandex.Cloud Managed Kubernetes solution. None of the RBAC role modifications worked for me.
kubectl get challenge letsencrypt-jvzb2-2152256332-2670382356 -o yaml
A chunk of the webhook pod logs: