@rcjames Thanks for the detailed report.
In edge 20.9.2 we've added discovery to all outbound connections, and here the proxy is trying to discover the x:443 address of the k8s API and failing. This case is similar to that of the Linkerd control plane pods, as they also need access to the k8s API. We skip port 443 from being discovered for the control-plane components, and you should probably do the same by adding the config.linkerd.io/skip-outbound-ports: 443 annotation to the pod spec, as per https://linkerd.io/2/reference/proxy-configuration/
Can you try doing this and reply back with your findings?
@Pothulapati - Thank you for your swift response. Adding config.linkerd.io/skip-outbound-ports: "443" (with quotes :facepalm:) has solved this problem. Thank you very much for your help!
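For reference, the relevant part of the pod template with the fix applied looks roughly like this; the label is illustrative and not from the actual workload, only the annotation is the fix:

```yaml
# Pod template excerpt -- only the skip-outbound-ports annotation is the fix;
# the label is a hypothetical placeholder.
template:
  metadata:
    labels:
      app: secret-provisioner                      # hypothetical label
    annotations:
      # Annotation values must be strings, hence the quotes around 443.
      # This stops the injected proxy from trying to discover outbound
      # connections to port 443, i.e. the Kubernetes API.
      config.linkerd.io/skip-outbound-ports: "443"
```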
Bug Report
What is the issue?
When using kubectl to provision a secret from within the cluster, the pod stops being able to access the Kubernetes API after a few minutes. The underlying host is still able to contact the master, and running the pod without the linkerd-proxy sidecar makes the issue disappear.
How can it be reproduced?
1. Create a new AWS EKS cluster, v1.16.
2. Install kube2iam and cert-manager, and follow the guide on automatically rotating control plane TLS credentials here: https://linkerd.io/2/tasks/automatically-rotating-control-plane-tls-credentials/
3. Install Linkerd Edge 20.9.2 via the Helm chart.
4. Create a Namespace, Service Account, Role and Deployment for provisioning a secret (a sketch is shown after this list).
5. Tail the logs; after 6-10 minutes the pod should become unable to contact the Kubernetes API (excerpt below).
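The original manifests aren't reproduced here; the following is a minimal sketch of what step 4 might look like. All names, the image, and the provisioning loop are illustrative assumptions, not the reporter's actual resources:

```yaml
# Minimal sketch of step 4 -- illustrative only; names, image and the
# provisioning command are assumptions, not the original manifests.
apiVersion: v1
kind: Namespace
metadata:
  name: secret-provisioner
  annotations:
    linkerd.io/inject: enabled        # inject the linkerd-proxy sidecar
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secret-provisioner
  namespace: secret-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-provisioner
  namespace: secret-provisioner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-provisioner
  namespace: secret-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-provisioner
subjects:
  - kind: ServiceAccount
    name: secret-provisioner
    namespace: secret-provisioner
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secret-provisioner
  namespace: secret-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secret-provisioner
  template:
    metadata:
      labels:
        app: secret-provisioner
    spec:
      serviceAccountName: secret-provisioner
      containers:
        - name: provisioner
          image: bitnami/kubectl:latest   # any image that ships kubectl
          command: ["/bin/sh", "-c"]
          args:
            # Re-create a dummy secret in a loop so the Kubernetes API is
            # exercised continuously from inside the mesh.
            - |
              while true; do
                kubectl create secret generic demo-secret \
                  --from-literal=key=value --dry-run=client -o yaml \
                  | kubectl apply -f -
                sleep 30
              done
```

With the linkerd-proxy sidecar injected via the namespace annotation, the kubectl loop keeps hitting the Kubernetes API, which is where the failures described above appear; per the report, running the same pod without the sidecar keeps working.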
Logs, error output, etc
Some snippets of logs from linkerd-proxy where I think problems might be occurring, but I'm not too sure. A larger snippet with all debug logs, covering a period when requests were successful and then when they start failing, is available at https://gist.github.com/rcjames/d8d7d95501d05348828ae9e4131d36bf
linkerd check output
Environment