linkerd / linkerd2

Ultralight, security-first service mesh for Kubernetes. Main repo for Linkerd 2.x.
https://linkerd.io
Apache License 2.0

Linkerd-proxy routing traffic to wrong pod #12941

Open bc185174 opened 3 months ago

bc185174 commented 3 months ago

What is the issue?

We are occasionally seeing traffic from our applications routed to the wrong pod. We noticed this when we started getting 403 responses from linkerd-proxy due to policy rejections, even though the policies were correctly configured.

After enabling debug logs, we saw that the proxy was routing traffic to a different IP than that of the pod the application was trying to reach.
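For reference, one way to enable the proxy debug logs (a sketch: it assumes api-service is a StatefulSet, going by the api-service-0 pod name, and uses the standard config.linkerd.io/proxy-log-level annotation):

# Set the proxy log level on the pod template; this rolls the pods.
kubectl -n api-service patch statefulset api-service -p \
  '{"spec":{"template":{"metadata":{"annotations":{"config.linkerd.io/proxy-log-level":"debug"}}}}}'
# Then follow the proxy container's logs:
kubectl -n api-service logs api-service-0 -c linkerd-proxy -f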

In the proxy log output below, the resolved IP is 100.127.166.1; however, when we query the destination pod with the linkerd CLI, the IP we expect the call to go to is 100.127.166.50.

Proxy logs:

{"timestamp":"[ 34142.675035s]","level":"DEBUG","fields":{"message":"Remote proxy error","error":"client 100.127.166.59:34612: server: 100.127.166.1:5984: unauthorized request on route"},"target":"linkerd_app_outbound::http::handle_proxy_error_headers","spans":[{"addr":"100.127.166.1:5984","name":"forward"}],"threadId":"ThreadId(1)"}

Linkerd CLI output:

linkerd diagnostics endpoints api-service-0.api-service.api-service.svc.cluster.local:8080 --kubeconfig=/etc/kubernetes/zylevel0.conf --linkerd-namespace=linkerd --destination-pod linkerd-destination-lqv74
NAMESPACE           IP               PORT   POD                   SERVICE
api-service         100.127.166.50   8080   api-service-0         api-service.api-service

The issue goes away when we stop/start the linkerd-proxy container with the crictl CLI (sketched below). Note that we did not restart the application container. Is there anything else we can check, or any way we can help debug?
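The node-side workaround looks roughly like this (a sketch: container IDs differ per node, and it assumes the kubelet recreates the stopped container under the pod's restartPolicy):

# Find the proxy container on the affected node and bounce it.
CID=$(crictl ps --name linkerd-proxy -q | head -n1)
crictl stop "$CID"
# The kubelet should restart it; verify:
crictl ps --name linkerd-proxy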

How can it be reproduced?

  1. Deploy Linkerd using the Linkerd CLI, as per the docs: https://linkerd.io/2.15/getting-started/#step-1-install-the-cli.
  2. Deploy a basic HTTP application and apply the Linkerd policies (see the sketch after this list).
  3. Reboot the Kubernetes node where the application is running.
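
Condensed, steps 1 and 2 look roughly like the following (a sketch: the web workload and the policy shape are illustrative, not our actual manifests):

# Step 1: install the CLI and the control plane.
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -

# Step 2: deploy a basic HTTP app, inject the proxy, and apply a policy.
kubectl create deployment web --image=nginx
kubectl get deploy web -o yaml | linkerd inject - | kubectl apply -f -
kubectl apply -f - <<EOF
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: web-http
spec:
  podSelector:
    matchLabels:
      app: web
  port: 80
  proxyProtocol: HTTP/1
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  name: web-authz
spec:
  server:
    name: web-http
  client:
    meshTLS:
      identities: ["*"]
EOF

# Step 3: reboot the node hosting the pod, then send traffic again.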

Logs, error output, etc

{"timestamp":"[ 34142.675035s]","level":"DEBUG","fields":{"message":"Remote proxy error","error":"client 100.127.166.59:34612: server: 100.127.166.1:5984: unauthorized request on route"},"target":"linkerd_app_outbound::http::handle_proxy_error_headers","spans":[{"addr":"100.127.166.1:5984","name":"forward"}],"threadId":"ThreadId(1)"}

output of linkerd check -o short

N/A

Environment

No response

Possible solution

N/A

Additional context

No response

Would you like to work on fixing this bug?

maybe

wmorgan commented 3 months ago

We will need the version of the Linkerd control plane (and of the data plane, if different).
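
(Both can be read with the CLI; a quick sketch, assuming a stock linkerd binary:)

linkerd version          # client and control plane versions
linkerd version --proxy  # also list data plane (proxy) versions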

bc185174 commented 3 months ago

We will need the version of the Linkerd control plane (and of the data plane, if different).

Linkerd 2.14.10 for both control plane and data plane.

bc185174 commented 3 months ago

Something to note: our application sends a HEAD request every 5s as a keep-alive. How does this interact with destination caching? AFAIK, the cache TTL is also 5s. Could this cause issues?
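The traffic pattern is roughly equivalent to the following (a stand-in sketch, assuming plain HTTP and the service address from the diagnostics output above):

# Emulate the app's keep-alive: one HEAD request every 5 seconds.
while true; do
  curl -sI http://api-service-0.api-service.api-service.svc.cluster.local:8080/ >/dev/null
  sleep 5
done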

kflynn commented 3 months ago

@bc185174, there have been a number of changes around destination selection after 2.14.10 -- does the latest edge release show this failure for you?

bc185174 commented 3 months ago

@bc185174, there have been a number of changes around destination selection after 2.14.10 -- does the latest edge release show this failure for you?

We've just tried edge-24.2.4 and are still hitting the same issue. It seems reproducible with our builds, and restarting the client pod resolves it.

DavidMcLaughlin commented 3 months ago

A couple of clarifying questions:

stale[bot] commented 1 week ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.