jetstack / kube-lego

DEPRECATED: Automatically request certificates for Kubernetes Ingress resources from Let's Encrypt
Apache License 2.0

read udp i/o timeout #333

Open ticruz38 opened 6 years ago

ticruz38 commented 6 years ago

It seems kube-lego can't get the certificates; here's what's in the log:

level=error msg="worker: error processing item, requeuing after rate limit: Get https://acme-v01.api.letsencrypt.org/directory: dial tcp: lookup acme-v01.api.letsencrypt.org on 10.96.0.10:53: read udp 192.168.2.67:51435->10.96.0.10:53: i/o timeout" context=kubelego

Does anyone have an idea what could be misconfigured here? All my pods are running correctly.
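
For reference, a minimal Go sketch (not from the thread) that repeats the failing lookup against the cluster DNS service address shown in the log (10.96.0.10) can help confirm whether in-cluster DNS answers at all:

```go
// Hypothetical diagnostic, not part of kube-lego: repeat the lookup from the
// error message against the cluster DNS service from the log (10.96.0.10).
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Ignore the address Go picked and ask the cluster DNS directly.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "acme-v01.api.letsencrypt.org")
	if err != nil {
		// Same failure mode as the kube-lego log if the DNS path is broken.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}
```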

jar3b commented 6 years ago

@ticruz38, hello, I have a similar problem with cert-manager. If you solved it, could you explain the solution, please?

cguethle commented 6 years ago

It is attempting to query DNS servers to resolve the Let's Encrypt endpoint used for certificate verification. If anything in your network blocks UDP, you will encounter this problem. For us, corporate proxies handle all outbound traffic and block UDP. Our solution was non-technical: we purchased a wildcard cert instead of using LE, since it is valid for a year versus the 90 days of a Let's Encrypt cert. Just easier. Ultimately, you need to ensure UDP (DNS) traffic can traverse your network so that acme-v01.api.letsencrypt.org can be resolved and reached. Hope this helps.
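
As a rough illustration of that last point, a small Go check (hypothetical, not from the thread) that fetches the ACME v01 directory from the log with a timeout can confirm whether the endpoint is reachable once DNS resolves:

```go
// Hypothetical reachability check: once DNS works, confirm that outbound
// HTTPS to the ACME directory is not blocked by a proxy or firewall.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("https://acme-v01.api.letsencrypt.org/directory")
	if err != nil {
		fmt.Println("cannot reach ACME directory:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("ACME directory reachable, status:", resp.Status)
}
```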

jar3b commented 6 years ago

@cguethle thanks for the advice! In my case the problem was in Kubernetes (or Docker, or the network config, I'm not sure): UDP DNS requests were either too slow or got no answer from the nameserver at all (I didn't check which; for 10 seconds there was simply no response).

I solved this by modifying the https://github.com/jetstack/cert-manager code (which I use) to fall back to TCP when a UDP query times out. After all, wildcard LE issuance works fine over TCP only.
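
A minimal sketch of that fallback idea, assuming a standalone Go program rather than the actual cert-manager patch, using a custom net.Resolver that always dials the nameserver over TCP:

```go
// Sketch of the approach jar3b describes: force DNS queries over TCP so a
// blocked or flaky UDP path does not cause lookups to time out.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Always dial the nameserver over TCP, regardless of the
			// network ("udp"/"tcp") the resolver asked for.
			return d.DialContext(ctx, "tcp", address)
		},
	}

	addrs, err := r.LookupHost(context.Background(), "acme-v01.api.letsencrypt.org")
	if err != nil {
		fmt.Println("TCP lookup failed:", err)
		return
	}
	fmt.Println("resolved over TCP:", addrs)
}
```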

ticruz38 commented 6 years ago

I ran the Kubernetes cluster with kubeadm on the Scaleway provider; this was a problem with the network settings. The master nodes couldn't talk to the slaves over SSL, and I had to override a kubelet variable, but I can't remember which one...