linkerd / linkerd2

Ultralight, security-first service mesh for Kubernetes. Main repo for Linkerd 2.x.
https://linkerd.io
Apache License 2.0

linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)] #10994

Closed Quincy475 closed 8 months ago

Quincy475 commented 1 year ago

What is the issue?

I am seeing the warnings below in the linkerd-destination pod and in the linkerd-proxy container of all other meshed pods. Please let me know if any logs are required for troubleshooting.

How can it be reproduced?

Installing the new Linkerd 2.12.4 release on OKE (Oracle Container Engine for Kubernetes).
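
In case it helps anyone reproduce, a minimal sketch of an equivalent install on a fresh cluster follows; the exact commands and version used on OKE are not captured here, so treat the pinned version and flags as illustrative:

# install a specific CLI version; the install script honours LINKERD2_VERSION
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | LINKERD2_VERSION=stable-2.13.4 sh
export PATH=$HOME/.linkerd2/bin:$PATH

# install the CRDs, then the control plane, then verify
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
linkerd check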

Logs, error output, etc

Defaulted container "linkerd-proxy" out of: linkerd-proxy, destination, sp-validator, policy, linkerd-network-validator (init)
[     0.002465s]  INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
[     0.003121s]  INFO ThreadId(01) linkerd2_proxy: Admin interface on 0.0.0.0:4191
[     0.003129s]  INFO ThreadId(01) linkerd2_proxy: Inbound interface on 0.0.0.0:4143
[     0.003132s]  INFO ThreadId(01) linkerd2_proxy: Outbound interface on 127.0.0.1:4140
[     0.003134s]  INFO ThreadId(01) linkerd2_proxy: Tap interface on 0.0.0.0:4190
[     0.003137s]  INFO ThreadId(01) linkerd2_proxy: Local identity is linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
[     0.003139s]  INFO ThreadId(01) linkerd2_proxy: Identity verified via linkerd-identity-headless.linkerd.svc.cluster.local:8080 (linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local)
[     0.003141s]  INFO ThreadId(01) linkerd2_proxy: Destinations resolved via localhost:8086
[     0.003507s]  WARN ThreadId(01) watch{port=9997}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     0.120035s]  WARN ThreadId(01) watch{port=9997}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     0.333315s]  WARN ThreadId(01) watch{port=9997}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     0.531661s]  INFO ThreadId(02) daemon:identity: linkerd_app: Certified identity id=linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
[     0.756526s]  WARN ThreadId(01) watch{port=9997}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     1.258188s]  WARN ThreadId(01) watch{port=9997}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     1.759793s]  WARN ThreadId(01) watch{port=9997}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     2.270089s]  WARN ThreadId(01) watch{port=9997}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     2.770872s]  WARN ThreadId(01) watch{port=9997}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     3.272715s]  WARN ThreadId(01) watch{port=9997}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     3.774759s]  WARN ThreadId(01) watch{port=9997}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     4.276313s]  WARN ThreadId(01) watch{port=9997}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]

output of linkerd check -o short

$ linkerd check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks contains all node podCIDRs
√ cluster networks contains all pods
√ cluster networks contains all services

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ proxy-init container runs as root user if docker container runtime is used

linkerd-cni-plugin
------------------
√ cni plugin ConfigMap exists
√ cni plugin ClusterRole exists
√ cni plugin ClusterRoleBinding exists
√ cni plugin ServiceAccount exists
√ cni plugin DaemonSet exists
√ cni plugin pod is running on all nodes

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ can retrieve the control plane version
√ control plane is up-to-date
√ control plane and cli versions match

linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
√ control plane proxies are up-to-date
√ control plane proxies and cli versions match

linkerd
-------
× Running: C:\Users\QUINLAND\.linkerd2\bin\linkerd viz check --as-group=[] --linkerd-namespace=linkerd --verbose=false --proxy=false --wait=5m0s --output=json
    invalid extension check output from "C:\Users\QUINLAND\.linkerd2\bin\linkerd viz check --as-group=[] --linkerd-namespace=linkerd --verbose=false --proxy=false --wait=5m0s --output=json" (JSON object expected):

[unexpected end of JSON input]
    see https://linkerd.io/2.13/checks/#extensions for hints

Status check results are ×
$ linkerd viz check
linkerd-viz
-----------
√ linkerd-viz Namespace exists
√ can initialize the client
√ linkerd-viz ClusterRoles exist
√ linkerd-viz ClusterRoleBindings exist
√ tap API server has valid cert
√ tap API server cert is valid for at least 60 days
√ tap API service is running
√ linkerd-viz pods are injected
√ viz extension pods are running
√ viz extension proxies are healthy
√ viz extension proxies are up-to-date
√ viz extension proxies and cli versions match
√ prometheus is installed and configured correctly
√ viz extension self-check

Status check results are √

Environment

Possible solution

No response

Additional context

No response

Would you like to work on fixing this bug?

None

chris-ng-scmp commented 1 year ago

I am not sure if I have the same issue as you. I am trying to install Linkerd 2.13.4 via Helm on a Kubernetes 1.24 cluster and see the same error under the linkerd-destination deployment:

{"timestamp":"[  1219.919008s]","level":"WARN","fields":{"message":"Failed to connect","error":"endpoint 127.0.0.1:8090: Connection refused (os error 111)"},"target":"linkerd_reconnect","spans":[{"port":9990,"name":"watch"},{"addr":"localhost:8090","name":"controller"},{"addr":"127.0.0.1:8090","name":"endpoint"}],"threadId":"ThreadId(1)"}

And I found that the root cause is that the policy container can't start, due to an SSL error when talking to the in-cluster Kubernetes API service endpoint:

{"timestamp":"2023-06-06T08:43:57.061058Z","level":"TRACE","fields":{"message":"checkout waiting for idle connection: (\"https\", 10.200.128.1)"},"target":"hyper::client::pool","spans":[{"http.method":"GET","http.url":"https://10.200.128.1/apis/apps/v1/namespaces/linkerd/deployments/linkerd-destination","otel.kind":"client","otel.name":"get","name":"HTTP"}]}
{"timestamp":"2023-06-06T08:43:57.061102Z","level":"TRACE","fields":{"message":"Http::connect; scheme=Some(\"https\"), host=Some(\"10.200.128.1\"), port=None"},"target":"hyper::client::connect::http","spans":[{"http.method":"GET","http.url":"https://10.200.128.1/apis/apps/v1/namespaces/linkerd/deployments/linkerd-destination","otel.kind":"client","otel.name":"get","name":"HTTP"}]}
{"timestamp":"2023-06-06T08:43:57.061127Z","level":"DEBUG","fields":{"message":"connecting to 10.200.128.1:443"},"target":"hyper::client::connect::http","spans":[{"http.method":"GET","http.url":"https://10.200.128.1/apis/apps/v1/namespaces/linkerd/deployments/linkerd-destination","otel.kind":"client","otel.name":"get","name":"HTTP"}]}
{"timestamp":"2023-06-06T08:43:57.061226Z","level":"TRACE","fields":{"message":"registering event source with poller: token=Token(2), interests=READABLE | WRITABLE","log.target":"mio::poll","log.module_path":"mio::poll","log.file":"/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.6/src/poll.rs","log.line":532},"target":"mio::poll","spans":[{"http.method":"GET","http.url":"https://10.200.128.1/apis/apps/v1/namespaces/linkerd/deployments/linkerd-destination","otel.kind":"client","otel.name":"get","name":"HTTP"}]}
{"timestamp":"2023-06-06T08:43:57.063974Z","level":"DEBUG","fields":{"message":"connected to 10.200.128.1:443"},"target":"hyper::client::connect::http","spans":[{"http.method":"GET","http.url":"https://10.200.128.1/apis/apps/v1/namespaces/linkerd/deployments/linkerd-destination","otel.kind":"client","otel.name":"get","name":"HTTP"}]}
{"timestamp":"2023-06-06T08:43:57.068350Z","level":"TRACE","fields":{"message":"deregistering event source from poller","log.target":"mio::poll","log.module_path":"mio::poll","log.file":"/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.6/src/poll.rs","log.line":663},"target":"mio::poll","spans":[{"http.method":"GET","http.url":"https://10.200.128.1/apis/apps/v1/namespaces/linkerd/deployments/linkerd-destination","otel.kind":"client","otel.name":"get","name":"HTTP"}]}
{"timestamp":"2023-06-06T08:43:57.068459Z","level":"TRACE","fields":{"message":"checkout dropped for (\"https\", 10.200.128.1)"},"target":"hyper::client::pool","spans":[{"http.method":"GET","http.url":"https://10.200.128.1/apis/apps/v1/namespaces/linkerd/deployments/linkerd-destination","otel.kind":"client","otel.name":"get","name":"HTTP"}]}
{"timestamp":"2023-06-06T08:43:57.068517Z","level":"ERROR","fields":{"message":"failed with error error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1921:: unable to get issuer certificate"},"target":"kube_client::client::builder","spans":[{"http.method":"GET","http.url":"https://10.200.128.1/apis/apps/v1/namespaces/linkerd/deployments/linkerd-destination","otel.kind":"client","otel.name":"get","otel.status_code":"ERROR","name":"HTTP"}]}
{"timestamp":"2023-06-06T08:43:57.068564Z","level":"TRACE","fields":{"message":"deregistering event source from poller","log.target":"mio::poll","log.module_path":"mio::poll","log.file":"/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.6/src/poll.rs","log.line":663},"target":"mio::poll"}
{"timestamp":"2023-06-06T08:43:57.068598Z","level":"TRACE","fields":{"message":"deregistering event source from poller","log.target":"mio::poll","log.module_path":"mio::poll","log.file":"/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.6/src/poll.rs","log.line":663},"target":"mio::poll"}
{"timestamp":"2023-06-06T08:43:57.068615Z","level":"TRACE","fields":{"message":"worker polling for next message"},"target":"tower::buffer::worker"}
{"timestamp":"2023-06-06T08:43:57.068627Z","level":"TRACE","fields":{"message":"buffer already closed"},"target":"tower::buffer::worker"}
Error: HyperError: error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1921:: unable to get issuer certificate

No idea why there is a cert error with the K8S API server...

Quincy475 commented 1 year ago

I think these are different issues. The policy container in my cluster doesn't give an error.

Quincy475 commented 1 year ago

I've looked a bit further into this problem and I've also tried reinstalling Linkerd on my cluster, but unfortunately I was unable to solve it. However, I think there might be something wrong with the proxy-injector.

Logs from proxy-injector

time="2023-06-16T12:42:26Z" level=info msg="running version stable-2.13.4"
time="2023-06-16T12:42:26Z" level=info msg="starting admin server on :9995"
time="2023-06-16T12:42:26Z" level=info msg="waiting for caches to sync"
time="2023-06-16T12:42:26Z" level=info msg="listening at :8443"
time="2023-06-16T12:42:26Z" level=info msg="caches synced"
2023/06/16 12:42:51 http: TLS handshake error from 10.244.0.172:59658: EOF

These are the debug logs from the proxy:

[     0.002139s] DEBUG ThreadId(01) linkerd_app::env: Only allowing connections targeting `LINKERD2_PROXY_INBOUND_IPS` allowed={10.244.1.35}
[     0.002545s]  INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
[     0.002933s] DEBUG ThreadId(01) linkerd_app: building app
[     0.003292s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: linkerd_proxy_balance::discover::buffer: Spawning discovery buffer capacity=1000
[     0.003338s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: linkerd_proxy_balance::discover::buffer: Spawning discovery buffer capacity=1000
[     0.003658s]  INFO ThreadId(01) linkerd2_proxy: Admin interface on 0.0.0.0:4191
[     0.003664s]  INFO ThreadId(01) linkerd2_proxy: Inbound interface on 0.0.0.0:4143
[     0.003667s]  INFO ThreadId(01) linkerd2_proxy: Outbound interface on 127.0.0.1:4140
[     0.003669s]  INFO ThreadId(01) linkerd2_proxy: Tap DISABLED
[     0.003672s]  INFO ThreadId(01) linkerd2_proxy: Local identity is linkerd-proxy-injector.linkerd.serviceaccount.identity.linkerd.cluster.local
[     0.003674s]  INFO ThreadId(01) linkerd2_proxy: Identity verified via linkerd-identity-headless.linkerd.svc.cluster.local:8080 (linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local)
[     0.003676s]  INFO ThreadId(01) linkerd2_proxy: Destinations resolved via linkerd-dst-headless.linkerd.svc.cluster.local:8086 (linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local)
[     0.003679s] DEBUG ThreadId(01) linkerd_app: spawning daemon thread
[     0.003768s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: linkerd_dns: Resolving a SRV record name=linkerd-dst-headless.linkerd.svc.cluster.local.
[     0.006274s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: trust_dns_proto::xfer::dns_handle: querying: linkerd-dst-headless.linkerd.svc.cluster.local. SRV
[     0.006302s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: trust_dns_resolver::name_server::name_server_pool: sending request: [Query { name: Name("linkerd-dst-headless.linkerd.svc.cluster.local."), query_type: SRV, query_class: IN }]
[     0.006352s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: trust_dns_resolver::name_server::name_server: reconnecting: NameServerConfig { socket_addr: 10.96.5.5:53, protocol: Udp, tls_dns_name: None, trust_nx_responses: false, bind_addr: None }
[     0.006379s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: trust_dns_proto::xfer: enqueueing message:QUERY:[Query { name: Name("linkerd-dst-headless.linkerd.svc.cluster.local."), query_type: SRV, query_class: IN }]
[     0.006401s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: linkerd_dns: Resolving a SRV record name=linkerd-policy.linkerd.svc.cluster.local.
[     0.006412s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: trust_dns_proto::xfer::dns_handle: querying: linkerd-policy.linkerd.svc.cluster.local. SRV
[     0.006419s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: trust_dns_resolver::name_server::name_server_pool: sending request: [Query { name: Name("linkerd-policy.linkerd.svc.cluster.local."), query_type: SRV, query_class: IN }]
[     0.006425s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: trust_dns_resolver::name_server::name_server: existing connection: NameServerConfig { socket_addr: 10.96.5.5:53, protocol: Udp, tls_dns_name: None, trust_nx_responses: false, bind_addr: None }
[     0.006429s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: trust_dns_proto::xfer: enqueueing message:QUERY:[Query { name: Name("linkerd-policy.linkerd.svc.cluster.local."), query_type: SRV, query_class: IN }]
[     0.006509s] DEBUG ThreadId(01) trust_dns_proto::udp::udp_client_stream: final message: ; header 34571:QUERY:RD:NoError:QUERY:0/0/0
; query
;; linkerd-dst-headless.linkerd.svc.cluster.local. IN SRV

[     0.006517s] DEBUG ThreadId(01) trust_dns_proto::udp::udp_client_stream: final message: ; header 49351:QUERY:RD:NoError:QUERY:0/0/0
; query
;; linkerd-policy.linkerd.svc.cluster.local. IN SRV

[     0.006655s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: trust_dns_proto::udp::udp_stream: created socket successfully
[     0.006678s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: trust_dns_proto::udp::udp_stream: created socket successfully
[     0.006688s] DEBUG ThreadId(01) watch{port=4191}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: linkerd_stack::failfast: Service has become unavailable
[     0.006808s] DEBUG ThreadId(02) daemon: linkerd_app: running admin thread
[     0.006851s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_proxy_identity_client::certify: Identity daemon running
[     0.006855s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_proxy_identity_client::certify: Certifying identity
[     0.006894s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_proxy_balance::discover::buffer: Spawning discovery buffer capacity=1000
[     0.007045s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_dns: Resolving a SRV record name=linkerd-identity-headless.linkerd.svc.cluster.local.
[     0.007066s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: trust_dns_proto::xfer::dns_handle: querying: linkerd-identity-headless.linkerd.svc.cluster.local. SRV
[     0.007084s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: trust_dns_resolver::name_server::name_server_pool: sending request: [Query { name: Name("linkerd-identity-headless.linkerd.svc.cluster.local."), query_type: SRV, query_class: IN }]
[     0.007099s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: trust_dns_resolver::name_server::name_server: existing connection: NameServerConfig { socket_addr: 10.96.5.5:53, protocol: Udp, tls_dns_name: None, trust_nx_responses: false, bind_addr: None }
[     0.007103s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: trust_dns_proto::xfer: enqueueing message:QUERY:[Query { name: Name("linkerd-identity-headless.linkerd.svc.cluster.local."), query_type: SRV, query_class: IN }]
[     0.007125s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_stack::failfast: Service has become unavailable
[     0.007136s] DEBUG ThreadId(01) trust_dns_proto::udp::udp_client_stream: final message: ; header 24515:QUERY:RD:NoError:QUERY:0/0/0
; query
;; linkerd-identity-headless.linkerd.svc.cluster.local. IN SRV

[     0.007178s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: trust_dns_proto::udp::udp_stream: created socket successfully
[     0.007445s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: trust_dns_proto::udp::udp_client_stream: received message id: 49351
[     0.007462s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: trust_dns_resolver::error: Response:; header 49351:RESPONSE:RD,AA:NoError:QUERY:1/0/1
; query
;; linkerd-policy.linkerd.svc.cluster.local. IN SRV
; answers 1
linkerd-policy.linkerd.svc.cluster.local. 5 IN SRV 0 100 8090 10-244-0-171.linkerd-policy.linkerd.svc.cluster.local.
; nameservers 0
; additionals 1
10-244-0-171.linkerd-policy.linkerd.svc.cluster.local. 5 IN A 10.244.0.171

[     0.007475s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: trust_dns_resolver::error: Response:; header 49351:RESPONSE:RD,AA:NoError:QUERY:1/0/1
; query
;; linkerd-policy.linkerd.svc.cluster.local. IN SRV
; answers 1
linkerd-policy.linkerd.svc.cluster.local. 5 IN SRV 0 100 8090 10-244-0-171.linkerd-policy.linkerd.svc.cluster.local.
; nameservers 0
; additionals 1
10-244-0-171.linkerd-policy.linkerd.svc.cluster.local. 5 IN A 10.244.0.171

[     0.007494s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: linkerd_dns: ttl=4.999991834s addrs=[10.244.0.171:8090]
[     0.007506s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: linkerd_proxy_dns_resolve: addrs=[10.244.0.171:8090] name=linkerd-policy.linkerd.svc.cluster.local:8090
[     0.007520s] DEBUG ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: linkerd_proxy_balance::discover::from_resolve: Changed change=Insert(10.244.0.171:8090, ())
[     0.007537s] DEBUG ThreadId(01) watch{port=4191}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}:endpoint{addr=10.244.0.171:8090}: linkerd_reconnect: Disconnected backoff=false
[     0.007541s] DEBUG ThreadId(01) watch{port=4191}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}:endpoint{addr=10.244.0.171:8090}: linkerd_reconnect: Creating service backoff=false
[     0.007547s] DEBUG ThreadId(01) watch{port=4191}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}:endpoint{addr=10.244.0.171:8090}: linkerd_proxy_transport::connect: Connecting server.addr=10.244.0.171:8090
[     0.007822s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: trust_dns_proto::udp::udp_client_stream: received message id: 24515
[     0.007836s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: trust_dns_resolver::error: Response:; header 24515:RESPONSE:RD,AA:NoError:QUERY:1/0/1
; query
;; linkerd-identity-headless.linkerd.svc.cluster.local. IN SRV
; answers 1
linkerd-identity-headless.linkerd.svc.cluster.local. 5 IN SRV 0 100 8080 10-244-0-33.linkerd-identity-headless.linkerd.svc.cluster.local.
; nameservers 0
; additionals 1
10-244-0-33.linkerd-identity-headless.linkerd.svc.cluster.local. 5 IN A 10.244.0.33

[     0.007848s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: trust_dns_resolver::error: Response:; header 24515:RESPONSE:RD,AA:NoError:QUERY:1/0/1
; query
;; linkerd-identity-headless.linkerd.svc.cluster.local. IN SRV
; answers 1
linkerd-identity-headless.linkerd.svc.cluster.local. 5 IN SRV 0 100 8080 10-244-0-33.linkerd-identity-headless.linkerd.svc.cluster.local.
; nameservers 0
; additionals 1
10-244-0-33.linkerd-identity-headless.linkerd.svc.cluster.local. 5 IN A 10.244.0.33

[     0.007866s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_dns: ttl=4.99999471s addrs=[10.244.0.33:8080]
[     0.007870s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_proxy_dns_resolve: addrs=[10.244.0.33:8080] name=linkerd-identity-headless.linkerd.svc.cluster.local:8080
[     0.007880s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_proxy_balance::discover::from_resolve: Changed change=Insert(10.244.0.33:8080, ())
[     0.007892s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:endpoint{addr=10.244.0.33:8080}: linkerd_reconnect: Disconnected backoff=false
[     0.007895s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:endpoint{addr=10.244.0.33:8080}: linkerd_reconnect: Creating service backoff=false
[     0.007899s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:endpoint{addr=10.244.0.33:8080}: linkerd_proxy_transport::connect: Connecting server.addr=10.244.0.33:8080
[     0.008117s] DEBUG ThreadId(01) watch{port=4191}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}:endpoint{addr=10.244.0.171:8090}:h2: linkerd_proxy_transport::connect: Connected local.addr=10.244.1.35:32782 keepalive=Some(10s)
[     0.010241s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:endpoint{addr=10.244.0.33:8080}:h2: linkerd_proxy_transport::connect: Connected local.addr=10.244.1.35:54138 keepalive=Some(10s)
[     0.010470s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: trust_dns_proto::udp::udp_client_stream: received message id: 34571
[     0.010482s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: trust_dns_resolver::error: Response:; header 34571:RESPONSE:RD,AA:NoError:QUERY:1/0/1
; query
;; linkerd-dst-headless.linkerd.svc.cluster.local. IN SRV
; answers 1
linkerd-dst-headless.linkerd.svc.cluster.local. 5 IN SRV 0 100 8086 10-244-0-171.linkerd-dst-headless.linkerd.svc.cluster.local.
; nameservers 0
; additionals 1
10-244-0-171.linkerd-dst-headless.linkerd.svc.cluster.local. 5 IN A 10.244.0.171

[     0.010494s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: trust_dns_resolver::error: Response:; header 34571:RESPONSE:RD,AA:NoError:QUERY:1/0/1
; query
;; linkerd-dst-headless.linkerd.svc.cluster.local. IN SRV
; answers 1
linkerd-dst-headless.linkerd.svc.cluster.local. 5 IN SRV 0 100 8086 10-244-0-171.linkerd-dst-headless.linkerd.svc.cluster.local.
; nameservers 0
; additionals 1
10-244-0-171.linkerd-dst-headless.linkerd.svc.cluster.local. 5 IN A 10.244.0.171

[     0.010510s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: linkerd_dns: ttl=4.999994449s addrs=[10.244.0.171:8086]
[     0.010515s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: linkerd_proxy_dns_resolve: addrs=[10.244.0.171:8086] name=linkerd-dst-headless.linkerd.svc.cluster.local:8086
[     0.010525s] DEBUG ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: linkerd_proxy_balance::discover::from_resolve: Changed change=Insert(10.244.0.171:8086, ())
[     0.011916s] DEBUG ThreadId(01) watch{port=4191}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}:endpoint{addr=10.244.0.171:8090}:h2: linkerd_tls::client:
[     0.012093s] DEBUG ThreadId(01) watch{port=4191}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}:endpoint{addr=10.244.0.171:8090}: linkerd_reconnect: Connected
[     0.012121s] DEBUG ThreadId(01) watch{port=8443}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: linkerd_stack::failfast: Service has become unavailable
[     0.012173s] DEBUG ThreadId(01) watch{port=9995}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: linkerd_stack::failfast: Service has become unavailable
[     0.012730s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:endpoint{addr=10.244.0.33:8080}:h2: linkerd_tls::client:
[     0.012861s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:endpoint{addr=10.244.0.33:8080}: linkerd_reconnect: Connected
[     0.017997s] DEBUG ThreadId(01) watch{port=4191}: linkerd_app_inbound::policy::api: policy=ServerPolicy { protocol: Detect { http: [Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Prefix("/")), headers: [], query_params: [], method: None }], policy: RoutePolicy { meta: Default { name: "default" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }, Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Exact("/live")), headers: [], query_params: [], method: Some(GET) }, MatchRequest { path: Some(Exact("/ready")), headers: [], query_params: [], method: Some(GET) }], policy: RoutePolicy { meta: Default { name: "probe" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "probe" } }, Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }], timeout: 10s, tcp_authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }] }, meta: Default { name: "all-unauthenticated" } }
[     0.020401s] DEBUG ThreadId(01) watch{port=8443}: linkerd_app_inbound::policy::api: policy=ServerPolicy { protocol: Opaque([Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }]), meta: Default { name: "all-unauthenticated" } }
[     0.021079s] DEBUG ThreadId(01) watch{port=9995}: linkerd_app_inbound::policy::api: policy=ServerPolicy { protocol: Detect { http: [Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Prefix("/")), headers: [], query_params: [], method: None }], policy: RoutePolicy { meta: Default { name: "default" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }, Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Exact("/ping")), headers: [], query_params: [], method: Some(GET) }, MatchRequest { path: Some(Exact("/ready")), headers: [], query_params: [], method: Some(GET) }], policy: RoutePolicy { meta: Default { name: "probe" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "probe" } }, Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }], timeout: 10s, tcp_authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }] }, meta: Default { name: "all-unauthenticated" } }
[     0.044476s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_meshtls_rustls::creds::store: Certified
[     0.044513s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_proxy_identity_client::certify: Identity certified expiry=SystemTime { tv_sec: 1687005509, tv_nsec: 0 }
[     0.044519s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_proxy_identity_client::certify: Waiting to refresh identity sleep=60493.816653622s
[     0.044544s]  INFO ThreadId(02) daemon:identity: linkerd_app: Certified identity id=linkerd-proxy-injector.linkerd.serviceaccount.identity.linkerd.cluster.local
[     0.044552s] DEBUG ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_proxy_balance::discover::buffer: Discovery receiver dropped
[     0.142062s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=127.0.0.1:46770}: linkerd_tls::server: Peeked bytes from TCP stream sz=0
[     0.142079s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=127.0.0.1:46770}: linkerd_tls::server: Attempting to buffer TLS ClientHello after incomplete peek
[     0.142083s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=127.0.0.1:46770}: linkerd_tls::server: Reading bytes from TCP stream buf.capacity=8192
[     0.142088s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=127.0.0.1:46770}: linkerd_tls::server: Read bytes from TCP stream buf.len=45
[     0.142134s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=127.0.0.1:46770}: linkerd_detect: Detected protocol protocol=Some(HTTP/1) elapsed=6.873µs
[     0.142145s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=127.0.0.1:46770}: linkerd_proxy_http::server: Creating HTTP service version=HTTP/1
[     0.142156s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=127.0.0.1:46770}: linkerd_proxy_http::server: Handling as HTTP version=HTTP/1
[     0.142281s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=127.0.0.1:46770}:http: linkerd_app_inbound::policy::http: Request authorized server.group= server.kind=default server.name=all-unauthenticated route.group= route.kind=default route.name=probe authz.group= authz.kind=default authz.name=probe client.tls=None(NoClientHello) client.ip=127.0.0.1
[     0.142534s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=127.0.0.1:46770}:http: linkerd_proxy_http::server: The client is shutting down the connection res=Ok(())
[     0.142559s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=127.0.0.1:46770}: linkerd_app_core::serve: Connection closed
[     0.310046s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.21:50850}:server{port=443}: linkerd_idle_cache: Caching new value key=443
[     0.310076s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.21:50850}:server{port=443}: linkerd_app_inbound::accept: Accepted policy=ServerPolicy { protocol: Detect { http: [Route { hosts: [], rules: [Rule { matches: [], policy: RoutePolicy { meta: Default { name: "default" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }], filters: [] } }] }], timeout: 10s, tcp_authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }] }, meta: Default { name: "all-unauthenticated" } }
[     0.310142s] DEBUG ThreadId(01) evict{key=443}: linkerd_idle_cache: Awaiting idleness
[     0.310405s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.21:50850}:server{port=443}: linkerd_tls::server: Peeked bytes from TCP stream sz=0
[     0.310413s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.21:50850}:server{port=443}: linkerd_tls::server: Attempting to buffer TLS ClientHello after incomplete peek
[     0.310416s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.21:50850}:server{port=443}: linkerd_tls::server: Reading bytes from TCP stream buf.capacity=8192
[     0.310421s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.21:50850}:server{port=443}: linkerd_tls::server: Read bytes from TCP stream buf.len=317
[     0.310447s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.21:50850}:server{port=443}: linkerd_detect: Detected protocol protocol=None elapsed=3.678µs
[     0.310467s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.21:50850}:server{port=443}: linkerd_app_inbound::policy::tcp: Connection authorized permit=ServerPermit { dst: OrigDstAddr(10.244.1.35:443), protocol: Detect { http: [Route { hosts: [], rules: [Rule { matches: [], policy: RoutePolicy { meta: Default { name: "default" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }], filters: [] } }] }], timeout: 10s, tcp_authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }] }, labels: ServerAuthzLabels { server: ServerLabel(Default { name: "all-unauthenticated" }), authz: Default { name: "all-unauthenticated" } } } tls=None(NoClientHello) client=10.244.0.21:50850
[     0.310499s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.21:50850}:server{port=443}:tcp:tcp: linkerd_proxy_transport::connect: Connecting server.addr=10.244.1.35:443
[     0.310681s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.21:50850}: linkerd_app_core::serve: Connection closed reason=server 10.244.1.35:443: Connection refused (os error 111)
[     0.312877s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.21:50850}:server{port=443}:watch{port=443}: linkerd_app_inbound::policy::api: policy=ServerPolicy { protocol: Detect { http: [Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Prefix("/")), headers: [], query_params: [], method: None }], policy: RoutePolicy { meta: Default { name: "default" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }], timeout: 10s, tcp_authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }] }, meta: Default { name: "all-unauthenticated" } }
[     0.398869s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.0.21:50858}:server{port=443}: linkerd_app_inbound::accept: Accepted policy=ServerPolicy { protocol: Detect { http: [Route { hosts: [], rules: [Rule { matches: [MatchRequest { path: Some(Prefix("/")), headers: [], query_params: [], method: None }], policy: RoutePolicy { meta: Default { name: "default" }, authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }], filters: [] } }] }], timeout: 10s, tcp_authorizations: [Authorization { networks: [Network { net: 0.0.0.0/0, except: [] }, Network { net: ::/0, except: [] }], authentication: Unauthenticated, meta: Default { name: "all-unauthenticated" } }, Authorization { networks: [Network { net: 127.0.0.1/32, except: [] }, Network { net: ::1/128, except: [] }], authentication: Unauthenticated, meta: Default { name: "localhost" } }] }, meta: Default { name: "all-unauthenticated" } }

Has anyone run into the same error? If so, were you able to fix it?

adleong commented 1 year ago

Hi @Quincy475. The log line Failed to connect error=endpoint 127.0.0.1:8090: Connection refused suggests that the policy controller isn't accepting connections, so I think @chris-ng-scmp may be on the right track by looking at the policy controller logs. Are you sure that the policy controller is running correctly?

You also mentioned that you reinstalled Linkerd. After reinstalling do you still see these same errors in the destination pod's proxy logs?
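
For anyone following along, checking the policy container from outside the pod could look something like this (a rough sketch, assuming the default linkerd namespace and the standard control-plane labels):

# container readiness and restart counts for the destination pod(s)
kubectl -n linkerd get pods -l linkerd.io/control-plane-component=destination
kubectl -n linkerd describe pod -l linkerd.io/control-plane-component=destination

# logs from the policy container itself (add --previous if it has restarted)
kubectl -n linkerd logs deploy/linkerd-destination -c policy --tail=100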

Quincy475 commented 1 year ago

Hi @adleong, the error is still present after reinstalling Linkerd. These are the logs for the policy container

2023-06-21T12:19:33.858529Z INFO linkerd_policy_controller: Lease already exists, no need to create it
2023-06-21T12:19:33.868759Z INFO grpc{port=8090}: linkerd_policy_controller: policy gRPC server listening addr=0.0.0.0:8090

These are the debug logs.

I hope this helps.
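
Since the controller reports it is listening on 0.0.0.0:8090, one more thing worth confirming is that the port is actually reachable inside the cluster. A rough sketch, assuming a default install where the port is exposed through the linkerd-policy service:

# the policy gRPC port should show up behind the linkerd-policy service
kubectl -n linkerd get svc linkerd-policy
kubectl -n linkerd get endpoints linkerd-policy

# forward the port locally and see whether the TCP connection is accepted at all
kubectl -n linkerd port-forward deploy/linkerd-destination 8090:8090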

chris-ng-scmp commented 1 year ago

Hi @Quincy475. The log line Failed to connect error=endpoint 127.0.0.1:8090: Connection refused suggests that the policy controller isn't accepting connections, so I think @chris-ng-scmp may be on the right track by looking at the policy controller logs. Are you sure that the policy controller is running correctly?

You also mentioned that you reinstalled Linkerd. After reinstalling do you still see these same errors in the destination pod's proxy logs?

hi @adleong

No, the policy container in linkerd-destination is not running; it fails with the following error:

2023-06-23T07:49:32.210373Z ERROR kube_client::client::builder: failed with error error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1921:: unable to get issuer certificate
Error: HyperError: error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1921:: unable to get issuer certificate

Caused by:
    0: error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1921:: unable to get issuer certificate
    1: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1921:: unable to get issuer certificate
    2: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1921:
    3: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1921:

I still get this error with the latest version, 2.13.5. Status check results are all √ in linkerd check --pre.

chris-ng-scmp commented 1 year ago

hi @adleong

Finally found that the issue is caused by the cluster CA cert generated by the cloud provider not being valid, which results in the error unable to get issuer certificate.

Wondering if there is any chance I could set the policy controller's kube client to accept an invalid cert?

I can see the Rust kube client has the option (https://docs.rs/kube/latest/kube/struct.Config.html), but I'm not sure how to pass it through from Linkerd...

Many thanks
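
One way to confirm that the in-cluster CA bundle really cannot verify the API server certificate (which is what "unable to get issuer certificate" suggests) is to run openssl from a throwaway pod. A rough sketch, assuming the default ServiceAccount token and ca.crt are auto-mounted and that an image with openssl (alpine/openssl here) can be pulled:

kubectl run api-cert-check -i --rm --restart=Never --image=alpine/openssl --command -- \
  sh -c 'openssl s_client -connect kubernetes.default.svc:443 \
    -CAfile /var/run/secrets/kubernetes.io/serviceaccount/ca.crt </dev/null 2>&1 | grep -E "Verify return|error"'

A "Verify return code: 0 (ok)" would point elsewhere, while the same "unable to get issuer certificate" error would confirm the mounted CA bundle is incomplete for this cluster.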

Nilubkal commented 1 year ago

Hi @chris-ng-scmp, I'm facing the exact same issue (or similar enough) when deploying into an AKS-managed Kubernetes cluster in Azure. The interesting part here is that when I change versions between 2.13.2 and 2.13.5, in some AKS versions .2 works (k8s 1.24.6) and in others .5 works (k8s 1.26.3), i.e. it is able to bootstrap the control plane through the policy container. Here is the log output from a .5 install, which cycles through errors until it manages to succeed:

 [    40.351109s]  WARN ThreadId(01) watch{port=4191}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[    40.853137s]  INFO ThreadId(01) linkerd_stack::failfast: Service has recovered 

As you can see, for 40 seconds it gets Connection refused (os error 111) until finally, for reasons that aren't clear, it is able to recover. It is also unclear why, with different Kubernetes versions, different Linkerd minor versions are able to recover (below the Helm timeout, which is usually 5m) and bootstrap the cluster during a fresh install. I wonder if there is a Helm deployment flag that skips the kube client cert check that you mentioned?
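
On the timing side: if the only problem is that the control plane takes longer to settle than Helm is willing to wait, extending the Helm timeout is a low-risk experiment. A sketch under that assumption (it does nothing about the cert question, and the identity flags mirror a standard Helm install):

helm upgrade --install linkerd-control-plane linkerd/linkerd-control-plane -n linkerd \
  --wait --timeout 10m \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key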

tath81 commented 1 year ago

I've installed Linkerd stable-2.13.6 and I'm also seeing the same error.

chris-ng-scmp commented 1 year ago

Hi @chris-ng-scmp, I'm facing the exact same issue (or similar enough) when deploying into an AKS-managed Kubernetes cluster in Azure. The interesting part here is that when I change versions between 2.13.2 and 2.13.5, in some AKS versions .2 works (k8s 1.24.6) and in others .5 works (k8s 1.26.3), i.e. it is able to bootstrap the control plane through the policy container. Here is the log output from a .5 install, which cycles through errors until it manages to succeed:

 [    40.351109s]  WARN ThreadId(01) watch{port=4191}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[    40.853137s]  INFO ThreadId(01) linkerd_stack::failfast: Service has recovered 

As you can see, for 40 seconds it gets Connection refused (os error 111) until finally, for reasons that aren't clear, it is able to recover. It is also unclear why, with different Kubernetes versions, different Linkerd minor versions are able to recover (below the Helm timeout, which is usually 5m) and bootstrap the cluster during a fresh install. I wonder if there is a Helm deployment flag that skips the kube client cert check that you mentioned?

No, I found no way to configure Linkerd to skip the K8S API cert validation.

My solution is to create an ingress with a valid cert (issued by cert-manager) and point the ingress at the K8S API endpoint.

I then build custom images for extension-init and policy-controller, since these containers need to call the K8S API. Both images install ca-certificates so the ingress's cert is trusted at the system level, and use a custom ENTRYPOINT to override KUBERNETES_SERVICE_HOST so it points at the new custom ingress host (see the two Dockerfiles below):

FROM cr.l5d.io/linkerd/extension-init:v0.1.0 AS builder

FROM alpine:latest
COPY --from=builder /bin/linkerd-extension-init /bin
RUN chmod 777 /bin/linkerd-extension-init
RUN apk update
RUN apk add ca-certificates

ENV SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt

ENTRYPOINT ["/bin/sh", "-c", "KUBERNETES_SERVICE_HOST=myk8s.myhost.com exec /bin/linkerd-extension-init \"$@\"", "--"]

FROM cr.l5d.io/linkerd/policy-controller:stable-2.13.5 AS builder

FROM alpine:latest
COPY --from=builder /bin/linkerd-policy-controller /bin
RUN chmod 777 /bin/linkerd-policy-controller
RUN apk update
RUN apk add ca-certificates

ENV SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt

ENTRYPOINT ["/bin/sh", "-c", "KUBERNETES_SERVICE_HOST=myk8s.myhost.com exec /bin/linkerd-policy-controller \"$@\"", "--"]

This is a tricky hack; use it at your own risk, and remember to set up a whitelist on the ingress so that only the cluster pod IPs are allowed.
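
If you do go down this road, the custom policy-controller image can at least be wired in through the Helm chart instead of hand-editing manifests. A sketch, assuming the policyController.image.* values exist in your chart version (verify with helm show values) and with myregistry/policy-controller:stable-2.13.5-custom standing in for wherever you pushed the image built above:

helm show values linkerd/linkerd-control-plane | grep -A 4 policyController
helm upgrade linkerd-control-plane linkerd/linkerd-control-plane -n linkerd \
  --reuse-values \
  --set policyController.image.name=myregistry/policy-controller \
  --set policyController.image.version=stable-2.13.5-custom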

adleong commented 1 year ago

Is this the same issue as https://github.com/linkerd/linkerd2/issues/11237? Perhaps we can close one of these as a duplicate of the other.

paulovitorcl commented 1 year ago

Hi, I'm having the same problem.

When I look at the logs of linkerd-destination, I see this too:

[     2.269280s]  WARN ThreadId(01) watch{port=4191}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     2.770167s]  WARN ThreadId(01) watch{port=4191}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]

I am concerned that this might cause a problem in my production environment.
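
If the warnings only show up for the first seconds after the pod starts and then stop, they are most likely just the proxy retrying while its sibling policy container finishes booting. A quick way to check (a sketch, assuming the default linkerd namespace):

# the policy container should be Ready with a low restart count
kubectl -n linkerd get pods -l linkerd.io/control-plane-component=destination
# and the proxy should eventually log that the service has recovered
kubectl -n linkerd logs deploy/linkerd-destination -c linkerd-proxy | grep -i "has recovered"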

omidraha commented 1 year ago

I installed Linkerd with Helm via Pulumi and it seems I have the same issue.

I also tried with Helm directly:

helm repo add linkerd https://helm.linkerd.io/stable
helm install linkerd-crds linkerd/linkerd-crds -n linkerd --create-namespace
helm install linkerd-control-plane -n linkerd --set clusterNetworks="10.0.0.0/8" --set-file identityTrustAnchorsPEM=ca.crt --set-file identity.issuer.tls.crtPEM=issuer.crt --set-file identity.issuer.tls.keyPEM=issuer.key linkerd/linkerd-control-plane
helm install linkerd-viz -n linkerd-viz --create-namespace linkerd/linkerd-viz
curl -sL https://run.linkerd.io/install | sh

Info

linkerd viz check

linkerd-viz
-----------
√ linkerd-viz Namespace exists
√ can initialize the client
√ linkerd-viz ClusterRoles exist
√ linkerd-viz ClusterRoleBindings exist
√ tap API server has valid cert
√ tap API server cert is valid for at least 60 days
√ tap API service is running
√ linkerd-viz pods are injected
√ viz extension pods are running
√ viz extension proxies are healthy
√ viz extension proxies are up-to-date
√ viz extension proxies and cli versions match
√ prometheus is installed and configured correctly
√ viz extension self-check

Info

$ linkerd check

kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks contains all pods
√ cluster networks contains all services

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ proxy-init container runs as root user if docker container runtime is used

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ can retrieve the control plane version
√ control plane is up-to-date
√ control plane and cli versions match

linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
√ control plane proxies are up-to-date
√ control plane proxies and cli versions match

Info

kubectl logs -n linkerd  linkerd-destination-68947cb4b6-zv7f2 

Defaulted container "linkerd-proxy" out of: linkerd-proxy, destination, sp-validator, policy, linkerd-init (init)
[     0.005367s]  INFO ThreadId(01) linkerd2_proxy: release 2.207.0 (9fa90df) by linkerd on 2023-08-03T17:25:23Z
[     0.009722s]  INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
[     0.010803s]  INFO ThreadId(01) linkerd2_proxy: Admin interface on 0.0.0.0:4191
[     0.010815s]  INFO ThreadId(01) linkerd2_proxy: Inbound interface on 0.0.0.0:4143
[     0.010817s]  INFO ThreadId(01) linkerd2_proxy: Outbound interface on 127.0.0.1:4140
[     0.010819s]  INFO ThreadId(01) linkerd2_proxy: Tap DISABLED
[     0.010821s]  INFO ThreadId(01) linkerd2_proxy: Local identity is linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
[     0.010823s]  INFO ThreadId(01) linkerd2_proxy: Identity verified via linkerd-identity-headless.linkerd.svc.cluster.local:8080 (linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local)
[     0.010825s]  INFO ThreadId(01) linkerd2_proxy: Destinations resolved via localhost:8086
[     0.011488s]  WARN ThreadId(01) watch{port=8443}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     0.016689s]  WARN ThreadId(02) identity:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_app_core::control: Failed to resolve control-plane component error=failed SRV and A record lookups: failed to resolve SRV record: no record found for Query { name: Name("linkerd-identity-headless.linkerd.svc.cluster.local."), query_type: SRV, query_class: IN }; failed to resolve A record: no record found for Query { name: Name("linkerd-identity-headless.linkerd.svc.cluster.local."), query_type: AAAA, query_class: IN } error.sources=[failed to resolve A record: no record found for Query { name: Name("linkerd-identity-headless.linkerd.svc.cluster.local."), query_type: AAAA, query_class: IN }, no record found for Query { name: Name("linkerd-identity-headless.linkerd.svc.cluster.local."), query_type: AAAA, query_class: IN }]
[     0.022153s]  WARN ThreadId(02) identity:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_app_core::control: Failed to resolve control-plane component error=failed SRV and A record lookups: failed to resolve SRV record: no record found for Query { name: Name("linkerd-identity-headless.linkerd.svc.cluster.local."), query_type: SRV, query_class: IN }; failed to resolve A record: no record found for Query { name: Name("linkerd-identity-headless.linkerd.svc.cluster.local."), query_type: AAAA, query_class: IN } error.sources=[failed to resolve A record: no record found for Query { name: Name("linkerd-identity-headless.linkerd.svc.cluster.local."), query_type: AAAA, query_class: IN }, no record found for Query { name: Name("linkerd-identity-headless.linkerd.svc.cluster.local."), query_type: AAAA, query_class: IN }]
[     0.112772s]  WARN ThreadId(01) watch{port=8443}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     2.746631s]  WARN ThreadId(01) watch{port=8443}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     3.012838s]  WARN ThreadId(02) identity:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_stack::failfast: Service entering failfast after 3s
[     3.013048s] ERROR ThreadId(02) identity: linkerd_proxy_identity_client::certify: Failed to obtain identity error=status: Unknown, message: "controller linkerd-identity-headless.linkerd.svc.cluster.local:8080: service in fail-fast", details: [], metadata: MetadataMap { headers: {} } error.sources=[controller linkerd-identity-headless.linkerd.svc.cluster.local:8080: service in fail-fast, service in fail-fast]
[     3.247584s]  WARN ThreadId(01) watch{port=8443}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     9.262576s]  WARN ThreadId(01) watch{port=8443}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[     9.764811s]  WARN ThreadId(01) watch{port=8443}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[    10.012484s]  WARN ThreadId(01) watch{port=8443}:controller{addr=localhost:8090}: linkerd_stack::failfast: Service entering failfast after 10s
[    12.774663s]  WARN ThreadId(01) watch{port=8443}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[    13.075815s]  INFO ThreadId(02) daemon:identity: linkerd_app: Certified identity id=linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
[    13.276658s]  WARN ThreadId(01) watch{port=8443}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[    14.280826s]  WARN ThreadId(01) watch{port=8443}:controller{addr=localhost:8090}:endpoint{addr=127.0.0.1:8090}: linkerd_reconnect: Failed to connect error=endpoint 127.0.0.1:8090: Connection refused (os error 111) error.sources=[Connection refused (os error 111)]
[    14.782252s]  INFO ThreadId(01) linkerd_stack::failfast: Service has recovered
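
The proxy log above also shows transient "failed SRV and A record lookups" for linkerd-identity-headless before the identity is eventually certified. A quick in-cluster DNS probe can rule out a CoreDNS problem; this is a sketch only, and the busybox image tag and probe pod name are assumptions.

# Resolve the headless identity service from inside the cluster (assumed image tag)
kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup linkerd-identity-headless.linkerd.svc.cluster.local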

kubectl -n linkerd logs deploy/linkerd-destination -c policy

2023-09-13T23:37:03.421528Z  INFO linkerd_policy_controller: created Lease resource lease=Lease 
{ metadata: ObjectMeta { annotations: None, cluster_name: None, 
creation_timestamp: Some(Time(2023-09-13T23:37:03Z)), deletion_grace_period_seconds: None, deletion_timestamp: 
None, finalizers: None, generate_name: None, generation: None,
 labels: Some({"linkerd.io/control-plane-component": "destination", "linkerd.io/control-plane-ns": "linkerd"}), 
 managed_fields: Some([ManagedFieldsEntry { api_version: Some("coordination.k8s.io/v1"), fields_type: Some("FieldsV1"), 
 fields_v1: Some(FieldsV1(Object {"f:metadata": Object {"f:labels": Object {"f:linkerd.io/control-plane-component": Object {}, "f:linkerd.io/control-plane-ns": Object {}}, 
 "f:ownerReferences": Object {"k:{\"uid\":\"4755a22d-6e67-4a54-aaac-a53b1586daa4\"}": Object {}}}})), manager: Some("policy-controller"), 
 operation: Some("Apply"), time: Some(Time(2023-09-13T23:37:03Z)) }]), name: Some("policy-controller-write"), namespace: Some("linkerd"), 
 owner_references: Some([OwnerReference { api_version: "apps/v1", block_owner_deletion: None, controller: Some(true), kind: "Deployment", 
 name: "linkerd-destination", uid: "4755a22d-6e67-4a54-aaac-a53b1586daa4" }]), resource_version: Some("1749910"), self_link: None, uid: Some("fae11ba8-c66c-422f-85e9-efffb531b745") }, 
 spec: Some(LeaseSpec { acquire_time: None, holder_identity: None, lease_duration_seconds: None, lease_transitions: None, renew_time: None }) }
2023-09-13T23:37:03.437354Z  INFO grpc{port=8090}: linkerd_policy_controller: policy gRPC server listening addr=0.0.0.0:8090
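
The policy controller reports that its gRPC server is listening on 0.0.0.0:8090, while the proxy's linkerd_reconnect warnings target 127.0.0.1:8090 in the same pod, so the warnings look like a startup race before the policy container is serving. One way to confirm the port is reachable once the pod is Running, as a sketch that assumes nc is available on the workstation:

# Forward the policy controller's gRPC port locally, then probe it
kubectl -n linkerd port-forward deploy/linkerd-destination 8090:8090 &
PF_PID=$!
sleep 2
nc -zv 127.0.0.1 8090   # a successful connection means the policy gRPC server is up
kill "$PF_PID"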

kubectl get services -n linkerd
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
linkerd-dst                 ClusterIP   10.100.229.176   <none>        8086/TCP   3m45s
linkerd-dst-headless        ClusterIP   None             <none>        8086/TCP   3m45s
linkerd-identity            ClusterIP   10.100.36.200    <none>        8080/TCP   3m45s
linkerd-identity-headless   ClusterIP   None             <none>        8080/TCP   3m45s
linkerd-policy              ClusterIP   None             <none>        8090/TCP   3m45s
linkerd-policy-validator    ClusterIP   10.100.135.55    <none>        443/TCP    3m45s
linkerd-proxy-injector      ClusterIP   10.100.209.144   <none>        443/TCP    3m45s
linkerd-sp-validator        ClusterIP   10.100.152.85    <none>        443/TCP    3m45s
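
Since linkerd-policy is a headless service, it is also worth confirming that it actually has endpoints backing port 8090; a plain kubectl endpoints listing is enough for that.

# The linkerd-destination pod IP(s) should appear with port 8090
kubectl -n linkerd get endpoints linkerd-policy -o wide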

kubectl get services -n linkerd-viz
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
metrics-api    ClusterIP   10.100.226.239   <none>        8085/TCP            2m11s
prometheus     ClusterIP   10.100.142.228   <none>        9090/TCP            2m11s
tap            ClusterIP   10.100.129.247   <none>        8088/TCP,443/TCP    2m11s
tap-injector   ClusterIP   10.100.239.159   <none>        443/TCP             2m11s
web            ClusterIP   10.100.199.49    <none>        8084/TCP,9994/TCP   2m11s

kubectl get pods -n linkerd-viz
NAME                            READY   STATUS    RESTARTS   AGE
metrics-api-75f76fbd65-cztmk    2/2     Running   0          2m34s
prometheus-7c74c74478-2shf5     2/2     Running   0          2m34s
tap-f6fb8549b-xb9rb             2/2     Running   0          2m34s
tap-injector-675ff7d8cc-dfx8h   2/2     Running   0          2m34s
web-78c46f4b57-prk84            2/2     Running   0          2m34s

kubectl get pods -n linkerd
NAME                                      READY   STATUS    RESTARTS   AGE
linkerd-destination-7579fbc6c4-dp98t      4/4     Running   0          5m20s
linkerd-identity-7f9fc8845-chtjs          2/2     Running   0          5m21s
linkerd-proxy-injector-56cbdcb47c-7x5lx   2/2     Running   0          5m21s

linkerd viz dashboard --verbose

DEBU[0001] Skipping check: cluster networks contains all node podCIDRs. Reason: skipping check because the nodes aren't exposing podCIDR 
DEBU[0001] Retrying on error: no running pods found for metrics-api 
Waiting for linkerd-viz extension to become available
DEBU[0006] Retrying on error: no running pods found for metrics-api 
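
The dashboard keeps retrying because it cannot find a running metrics-api pod yet; checking the deployment and its logs usually shows whether it is merely slow to become Ready. A sketch only: the metrics-api container name passed to -c is an assumption.

kubectl -n linkerd-viz get deploy metrics-api
kubectl -n linkerd-viz describe deploy metrics-api
kubectl -n linkerd-viz logs deploy/metrics-api -c metrics-api --tail=50   # container name assumed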

linkerd diagnostics controller-metrics

#
# POD linkerd-destination-7579fbc6c4-kc68s (1 of 5)
# CONTAINER destination 
#
# HELP cluster_store_size The number of linked clusters in the remote discovery cluster store
# TYPE cluster_store_size gauge
cluster_store_size 0
# HELP endpoints_cache_size Number of items in the client-go endpoints cache
# TYPE endpoints_cache_size gauge
endpoints_cache_size{cluster="local"} 31
# HELP endpoints_exists A gauge which is 1 if the endpoints exists and 0 if it does not.
# TYPE endpoints_exists gauge
endpoints_exists{cluster="local",hostname="",namespace="default",port="443",service="kubernetes"} 1
endpoints_exists{cluster="local",hostname="",namespace="linkerd-viz",port="8085",service="metrics-api"} 1
# HELP endpoints_pods A gauge for the current number of pods in a endpoints.
# TYPE endpoints_pods gauge
endpoints_pods{cluster="local",hostname="",namespace="default",port="443",service="kubernetes"} 2
endpoints_pods{cluster="local",hostname="",namespace="linkerd-viz",port="8085",service="metrics-api"} 1
# HELP endpoints_subscribers A gauge for the current number of subscribers to a endpoints.
# TYPE endpoints_subscribers gauge
endpoints_subscribers{cluster="local",hostname="",namespace="default",port="443",service="kubernetes"} 4
endpoints_subscribers{cluster="local",hostname="",namespace="linkerd-viz",port="8085",service="metrics-api"} 1
# HELP endpoints_updates A counter for number of updates to a endpoints.
# TYPE endpoints_updates counter
endpoints_updates{cluster="local",hostname="",namespace="default",port="443",service="kubernetes"} 1
endpoints_updates{cluster="local",hostname="",namespace="linkerd-viz",port="8085",service="metrics-api"} 2
# HELP endpointslices_cache_size Number of items in the client-go endpointslices cache
# TYPE endpointslices_cache_size gauge
endpointslices_cache_size{cluster="local"} 31
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.2693e-05
go_gc_duration_seconds{quantile="0.25"} 3.8598e-05
go_gc_duration_seconds{quantile="0.5"} 5.2425e-05
go_gc_duration_seconds{quantile="0.75"} 8.3807e-05
go_gc_duration_seconds{quantile="1"} 0.001470804
go_gc_duration_seconds_sum 0.002248899
go_gc_duration_seconds_count 15
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 149
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.19.12"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 1.1199408e+07
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 3.6434632e+07
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.469838e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 160049
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 1.016628e+07
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 1.1199408e+07
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 1.949696e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 1.3484032e+07
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 67602
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 1.949696e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.5433728e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.694819372259176e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 227651
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 1200
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 15600
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 158976
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 162720
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.8128048e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 618210
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 1.343488e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 1.343488e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 2.9209864e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 6
# HELP grpc_server_handled_total Total number of RPCs completed on the server, regardless of success or failure.
# TYPE grpc_server_handled_total counter
grpc_server_handled_total{grpc_code="OK",grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream"} 1
grpc_server_handled_total{grpc_code="OK",grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream"} 4
# HELP grpc_server_handling_seconds Histogram of response latency (seconds) of gRPC that had been application-level handled by the server.
# TYPE grpc_server_handling_seconds histogram
grpc_server_handling_seconds_bucket{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.005"} 0
grpc_server_handling_seconds_bucket{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.01"} 0
grpc_server_handling_seconds_bucket{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.025"} 0
grpc_server_handling_seconds_bucket{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.05"} 0
grpc_server_handling_seconds_bucket{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.1"} 0
grpc_server_handling_seconds_bucket{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.25"} 0
grpc_server_handling_seconds_bucket{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.5"} 0
grpc_server_handling_seconds_bucket{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="1"} 0
grpc_server_handling_seconds_bucket{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="2.5"} 0
grpc_server_handling_seconds_bucket{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="5"} 0
grpc_server_handling_seconds_bucket{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="10"} 0
grpc_server_handling_seconds_bucket{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="+Inf"} 1
grpc_server_handling_seconds_sum{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream"} 95.023649395
grpc_server_handling_seconds_count{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream"} 1
grpc_server_handling_seconds_bucket{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.005"} 0
grpc_server_handling_seconds_bucket{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.01"} 0
grpc_server_handling_seconds_bucket{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.025"} 0
grpc_server_handling_seconds_bucket{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.05"} 0
grpc_server_handling_seconds_bucket{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.1"} 0
grpc_server_handling_seconds_bucket{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.25"} 0
grpc_server_handling_seconds_bucket{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="0.5"} 0
grpc_server_handling_seconds_bucket{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="1"} 0
grpc_server_handling_seconds_bucket{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="2.5"} 0
grpc_server_handling_seconds_bucket{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="5"} 0
grpc_server_handling_seconds_bucket{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="10"} 1
grpc_server_handling_seconds_bucket{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream",le="+Inf"} 4
grpc_server_handling_seconds_sum{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream"} 606.669751239
grpc_server_handling_seconds_count{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream"} 4
# HELP grpc_server_msg_received_total Total number of RPC stream messages received on the server.
# TYPE grpc_server_msg_received_total counter
grpc_server_msg_received_total{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream"} 6
grpc_server_msg_received_total{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream"} 29
# HELP grpc_server_msg_sent_total Total number of gRPC stream messages sent by the server.
# TYPE grpc_server_msg_sent_total counter
grpc_server_msg_sent_total{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream"} 7
grpc_server_msg_sent_total{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream"} 29
# HELP grpc_server_started_total Total number of RPCs started on the server.
# TYPE grpc_server_started_total counter
grpc_server_started_total{grpc_method="Get",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream"} 6
grpc_server_started_total{grpc_method="GetProfile",grpc_service="io.linkerd.proxy.destination.Destination",grpc_type="server_stream"} 29
# HELP http_client_in_flight_requests A gauge of in-flight requests for the wrapped client.
# TYPE http_client_in_flight_requests gauge
http_client_in_flight_requests{client="k8s"} 0
http_client_in_flight_requests{client="l5dCrd"} 0
# HELP http_client_request_latency_seconds A histogram of request latencies.
# TYPE http_client_request_latency_seconds histogram
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="0.01"} 27
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="0.02"} 31
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="0.03"} 33
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="0.04"} 34
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="0.05"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="0.1"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="0.2"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="0.30000000000000004"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="0.4"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="0.5"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="1"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="2"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="3"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="4"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="5"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="10"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="20"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="30"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="40"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="50"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="200",method="get",le="+Inf"} 36
http_client_request_latency_seconds_sum{client="k8s",code="200",method="get"} 0.3293491680000001
http_client_request_latency_seconds_count{client="k8s",code="200",method="get"} 36
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.01"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.02"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.03"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.04"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.05"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.1"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.2"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.30000000000000004"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.4"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.5"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="1"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="2"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="3"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="4"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="5"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="10"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="20"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="30"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="40"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="50"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="+Inf"} 2
http_client_request_latency_seconds_sum{client="k8s",code="201",method="post"} 0.003747705
http_client_request_latency_seconds_count{client="k8s",code="201",method="post"} 2
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="0.01"} 6
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="0.02"} 6
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="0.03"} 6
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="0.04"} 6
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="0.05"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="0.1"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="0.2"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="0.30000000000000004"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="0.4"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="0.5"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="1"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="2"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="3"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="4"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="5"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="10"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="20"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="30"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="40"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="50"} 8
http_client_request_latency_seconds_bucket{client="l5dCrd",code="200",method="get",le="+Inf"} 8
http_client_request_latency_seconds_sum{client="l5dCrd",code="200",method="get"} 0.112143038
http_client_request_latency_seconds_count{client="l5dCrd",code="200",method="get"} 8
# HELP http_client_requests_total A counter for requests from the wrapped client.
# TYPE http_client_requests_total counter
http_client_requests_total{client="k8s",code="200",method="get"} 36
http_client_requests_total{client="k8s",code="201",method="post"} 2
http_client_requests_total{client="l5dCrd",code="200",method="get"} 8
# HELP job_cache_size Number of items in the client-go job cache
# TYPE job_cache_size gauge
job_cache_size{cluster="local"} 0
# HELP node_cache_size Number of items in the client-go node cache
# TYPE node_cache_size gauge
node_cache_size{cluster="local"} 6
# HELP pod_cache_size Number of items in the client-go pod cache
# TYPE pod_cache_size gauge
pod_cache_size{cluster="local"} 52
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.76
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 19
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 5.2260864e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.69481831299e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 7.80316672e+08
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP profile_subscribers A gauge for the current number of subscribers to a profile.
# TYPE profile_subscribers gauge
profile_subscribers{namespace="default",profile="kubernetes.default.svc.cluster.local"} 4
profile_subscribers{namespace="linkerd",profile="linkerd-dst-headless.linkerd.svc.cluster.local"} 2
profile_subscribers{namespace="linkerd",profile="linkerd-identity-headless.linkerd.svc.cluster.local"} 2
profile_subscribers{namespace="linkerd",profile="linkerd-policy.linkerd.svc.cluster.local"} 2
profile_subscribers{namespace="linkerd-viz",profile="kubernetes.default.svc.cluster.local"} 4
profile_subscribers{namespace="linkerd-viz",profile="metrics-api.linkerd-viz.svc.cluster.local"} 2
profile_subscribers{namespace="linkerd-viz",profile="prometheus.linkerd-viz.svc.cluster.local"} 0
# HELP profile_updates A counter for number of updates to a profile.
# TYPE profile_updates counter
profile_updates{namespace="default",profile="kubernetes.default.svc.cluster.local"} 0
profile_updates{namespace="linkerd",profile="linkerd-dst-headless.linkerd.svc.cluster.local"} 0
profile_updates{namespace="linkerd",profile="linkerd-identity-headless.linkerd.svc.cluster.local"} 0
profile_updates{namespace="linkerd",profile="linkerd-policy.linkerd.svc.cluster.local"} 0
profile_updates{namespace="linkerd-viz",profile="kubernetes.default.svc.cluster.local"} 0
profile_updates{namespace="linkerd-viz",profile="metrics-api.linkerd-viz.svc.cluster.local"} 1
profile_updates{namespace="linkerd-viz",profile="prometheus.linkerd-viz.svc.cluster.local"} 1
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 12
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
# HELP replicaset_cache_size Number of items in the client-go replicaset cache
# TYPE replicaset_cache_size gauge
replicaset_cache_size{cluster="local"} 87
# HELP server_cache_size Number of items in the client-go server cache
# TYPE server_cache_size gauge
server_cache_size{cluster="local"} 4
# HELP server_port_subscribes Counter of subscribes to Server changes associated with a pod's port.
# TYPE server_port_subscribes counter
server_port_subscribes{name="linkerd",namespace="linkerd",port="4191"} 3
server_port_subscribes{name="linkerd",namespace="linkerd",port="9990"} 2
server_port_subscribes{name="linkerd",namespace="linkerd",port="9995"} 1
server_port_subscribes{name="linkerd",namespace="linkerd",port="9996"} 1
server_port_subscribes{name="linkerd",namespace="linkerd",port="9997"} 1
server_port_subscribes{name="metrics",namespace="linkerd-viz",port="4191"} 1
server_port_subscribes{name="metrics",namespace="linkerd-viz",port="9995"} 1
server_port_subscribes{name="sso",namespace="apps-dev",port="4191"} 1
server_port_subscribes{name="tap",namespace="linkerd-viz",port="4191"} 2
server_port_subscribes{name="tap",namespace="linkerd-viz",port="9995"} 1
server_port_subscribes{name="tap",namespace="linkerd-viz",port="9998"} 1
server_port_subscribes{name="web",namespace="linkerd-viz",port="4191"} 1
server_port_subscribes{name="web",namespace="linkerd-viz",port="9994"} 1
# HELP service_cache_size Number of items in the client-go service cache
# TYPE service_cache_size gauge
service_cache_size{cluster="local"} 31
# HELP service_subscribers Number of subscribers to Service changes.
# TYPE service_subscribers gauge
service_subscribers{name="kubernetes",namespace="default"} 4
service_subscribers{name="linkerd-dst-headless",namespace="linkerd"} 1
service_subscribers{name="linkerd-identity-headless",namespace="linkerd"} 1
service_subscribers{name="linkerd-policy",namespace="linkerd"} 1
service_subscribers{name="metrics-api",namespace="linkerd-viz"} 1
# HELP serviceprofile_cache_size Number of items in the client-go serviceprofile cache
# TYPE serviceprofile_cache_size gauge
serviceprofile_cache_size{cluster="local"} 2
#
# POD linkerd-destination-7579fbc6c4-kc68s (2 of 5)
# CONTAINER policy 
#
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 32

# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 0

# HELP process_threads Number of OS threads in the process.
# TYPE process_threads gauge
process_threads 2

# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 18866176

# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total gauge
process_cpu_seconds_total 0.41

# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 28454912

# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1048576

# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1694818313

#
# POD linkerd-destination-7579fbc6c4-kc68s (3 of 5)
# CONTAINER sp-validator 
#
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.4345e-05
go_gc_duration_seconds{quantile="0.25"} 3.9513e-05
go_gc_duration_seconds{quantile="0.5"} 8.0622e-05
go_gc_duration_seconds{quantile="0.75"} 0.000233387
go_gc_duration_seconds{quantile="1"} 0.000458562
go_gc_duration_seconds_sum 0.001511716
go_gc_duration_seconds_count 12
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 15
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.19.12"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 5.49988e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.4238336e+07
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.456342e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 68483
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 9.730328e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 5.49988e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 4.702208e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 7.356416e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 22989
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 4.702208e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.2058624e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.6948193841355338e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 91472
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 1200
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 15600
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 82224
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 97632
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 9.012112e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 346314
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 524288
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 524288
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 2.4229128e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 6
# HELP http_client_in_flight_requests A gauge of in-flight requests for the wrapped client.
# TYPE http_client_in_flight_requests gauge
http_client_in_flight_requests{client="k8s"} 0
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.38
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 11
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 4.2110976e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.69481831333e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 7.7953024e+08
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 13
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
#
# POD linkerd-identity-7f9fc8845-lf4m5 (4 of 5)
# CONTAINER identity 
#
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.5914e-05
go_gc_duration_seconds{quantile="0.25"} 3.8882e-05
go_gc_duration_seconds{quantile="0.5"} 4.4543e-05
go_gc_duration_seconds{quantile="0.75"} 8.2365e-05
go_gc_duration_seconds{quantile="1"} 0.000219091
go_gc_duration_seconds_sum 0.000796529
go_gc_duration_seconds_count 12
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 15
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.19.12"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 5.20804e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.503912e+07
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.455846e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 78406
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 9.769408e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 5.20804e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 4.7104e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 7.24992e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 22089
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 4.325376e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.196032e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.6948194062899637e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 100495
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 1200
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 15600
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 86112
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 97632
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 9.242816e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 569874
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 622592
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 622592
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 2.4491272e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 7
# HELP grpc_server_handled_total Total number of RPCs completed on the server, regardless of success or failure.
# TYPE grpc_server_handled_total counter
grpc_server_handled_total{grpc_code="OK",grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary"} 9
# HELP grpc_server_handling_seconds Histogram of response latency (seconds) of gRPC that had been application-level handled by the server.
# TYPE grpc_server_handling_seconds histogram
grpc_server_handling_seconds_bucket{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary",le="0.005"} 0
grpc_server_handling_seconds_bucket{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary",le="0.01"} 4
grpc_server_handling_seconds_bucket{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary",le="0.025"} 7
grpc_server_handling_seconds_bucket{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary",le="0.05"} 9
grpc_server_handling_seconds_bucket{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary",le="0.1"} 9
grpc_server_handling_seconds_bucket{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary",le="0.25"} 9
grpc_server_handling_seconds_bucket{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary",le="0.5"} 9
grpc_server_handling_seconds_bucket{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary",le="1"} 9
grpc_server_handling_seconds_bucket{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary",le="2.5"} 9
grpc_server_handling_seconds_bucket{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary",le="5"} 9
grpc_server_handling_seconds_bucket{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary",le="10"} 9
grpc_server_handling_seconds_bucket{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary",le="+Inf"} 9
grpc_server_handling_seconds_sum{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary"} 0.142629468
grpc_server_handling_seconds_count{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary"} 9
# HELP grpc_server_msg_received_total Total number of RPC stream messages received on the server.
# TYPE grpc_server_msg_received_total counter
grpc_server_msg_received_total{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary"} 9
# HELP grpc_server_msg_sent_total Total number of gRPC stream messages sent by the server.
# TYPE grpc_server_msg_sent_total counter
grpc_server_msg_sent_total{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary"} 9
# HELP grpc_server_started_total Total number of RPCs started on the server.
# TYPE grpc_server_started_total counter
grpc_server_started_total{grpc_method="Certify",grpc_service="io.linkerd.proxy.identity.Identity",grpc_type="unary"} 9
# HELP http_client_in_flight_requests A gauge of in-flight requests for the wrapped client.
# TYPE http_client_in_flight_requests gauge
http_client_in_flight_requests{client="k8s"} 0
# HELP http_client_request_latency_seconds A histogram of request latencies.
# TYPE http_client_request_latency_seconds histogram
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.01"} 11
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.02"} 16
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.03"} 18
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.04"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.05"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.1"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.2"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.30000000000000004"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.4"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.5"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="1"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="2"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="3"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="4"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="5"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="10"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="20"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="30"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="40"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="50"} 19
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="+Inf"} 19
http_client_request_latency_seconds_sum{client="k8s",code="201",method="post"} 0.23184698400000006
http_client_request_latency_seconds_count{client="k8s",code="201",method="post"} 19
# HELP http_client_requests_total A counter for requests from the wrapped client.
# TYPE http_client_requests_total counter
http_client_requests_total{client="k8s",code="201",method="post"} 19
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.42
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 11
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 4.3810816e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.69481829893e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 7.79792384e+08
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 12
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
#
# POD linkerd-proxy-injector-56cbdcb47c-7g5fp (5 of 5)
# CONTAINER proxy-injector 
#
# HELP cronjob_cache_size Number of items in the client-go cronjob cache
# TYPE cronjob_cache_size gauge
cronjob_cache_size{cluster="local"} 1
# HELP daemonset_cache_size Number of items in the client-go daemonset cache
# TYPE daemonset_cache_size gauge
daemonset_cache_size{cluster="local"} 3
# HELP deployment_cache_size Number of items in the client-go deployment cache
# TYPE deployment_cache_size gauge
deployment_cache_size{cluster="local"} 33
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.6138e-05
go_gc_duration_seconds{quantile="0.25"} 4.5974e-05
go_gc_duration_seconds{quantile="0.5"} 5.1674e-05
go_gc_duration_seconds{quantile="0.75"} 6.6039e-05
go_gc_duration_seconds{quantile="1"} 8.6426e-05
go_gc_duration_seconds_sum 0.00099566
go_gc_duration_seconds_count 18
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 79
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.19.12"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 6.7688e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 5.7345464e+07
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.473894e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 444059
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 1.0248488e+07
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 6.7688e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 9.854976e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 1.0264576e+07
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 34111
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 6.479872e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 2.0119552e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.6948194326753256e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 478170
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 1200
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 15600
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 138096
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 211536
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.3171048e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 483130
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 851968
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 851968
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 3.3404168e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 6
# HELP http_client_in_flight_requests A gauge of in-flight requests for the wrapped client.
# TYPE http_client_in_flight_requests gauge
http_client_in_flight_requests{client="k8s"} 0
# HELP http_client_request_latency_seconds A histogram of request latencies.
# TYPE http_client_request_latency_seconds histogram
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.01"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.02"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.03"} 2
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.04"} 3
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.05"} 4
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.1"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.2"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.30000000000000004"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.4"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="0.5"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="1"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="2"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="3"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="4"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="5"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="10"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="20"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="30"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="40"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="50"} 7
http_client_request_latency_seconds_bucket{client="k8s",code="201",method="post",le="+Inf"} 7
http_client_request_latency_seconds_sum{client="k8s",code="201",method="post"} 0.294390324
http_client_request_latency_seconds_count{client="k8s",code="201",method="post"} 7
# HELP http_client_requests_total A counter for requests from the wrapped client.
# TYPE http_client_requests_total counter
http_client_requests_total{client="k8s",code="201",method="post"} 7
# HELP job_cache_size Number of items in the client-go job cache
# TYPE job_cache_size gauge
job_cache_size{cluster="local"} 0
# HELP namespace_cache_size Number of items in the client-go namespace cache
# TYPE namespace_cache_size gauge
namespace_cache_size{cluster="local"} 8
# HELP pod_cache_size Number of items in the client-go pod cache
# TYPE pod_cache_size gauge
pod_cache_size{cluster="local"} 52
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.74
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 12
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 5.1888128e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.69481831305e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 7.80316672e+08
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 12
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
# HELP proxy_inject_admission_requests_total A counter for number of admission requests to proxy injector.
# TYPE proxy_inject_admission_requests_total counter
proxy_inject_admission_requests_total{access_log="",admin_port="",annotation_at="",control_port="",default_inbound_policy="",enable_debug_sidecar="",enable_external_profiles="",image_pull_policy="",inbound_port="",init_image="",init_image_version="",namespace="linkerd-viz",opaque_ports="",outbound_port="",owner_kind="",pod_inbound_ports="",proxy_await="",proxy_cpu_limit="",proxy_cpu_request="",proxy_ephemeral_storage_limit="",proxy_ephemeral_storage_request="",proxy_image="",proxy_inbound_connect_timeout="",proxy_inbound_discovery_cache_unused_timeout="",proxy_log_format="",proxy_log_level="",proxy_memory_limit="",proxy_memory_request="",proxy_outbound_connect_timeout="",proxy_outbound_discovery_cache_unused_timeout="",proxy_require_identity_inbound_ports="",proxy_uid="",proxy_version="",shutdown_grace_period="",skip_inbound_ports="",skip_outbound_ports="",skip_subnets=""} 5
proxy_inject_admission_requests_total{access_log="",admin_port="",annotation_at="",control_port="",default_inbound_policy="",enable_debug_sidecar="",enable_external_profiles="",image_pull_policy="",inbound_port="",init_image="",init_image_version="",namespace="linkerd-viz",opaque_ports="",outbound_port="",owner_kind="job",pod_inbound_ports="",proxy_await="",proxy_cpu_limit="",proxy_cpu_request="",proxy_ephemeral_storage_limit="",proxy_ephemeral_storage_request="",proxy_image="",proxy_inbound_connect_timeout="",proxy_inbound_discovery_cache_unused_timeout="",proxy_log_format="",proxy_log_level="",proxy_memory_limit="",proxy_memory_request="",proxy_outbound_connect_timeout="",proxy_outbound_discovery_cache_unused_timeout="",proxy_require_identity_inbound_ports="",proxy_uid="",proxy_version="",shutdown_grace_period="",skip_inbound_ports="",skip_outbound_ports="",skip_subnets=""} 1
proxy_inject_admission_requests_total{access_log="",admin_port="",annotation_at="workload",control_port="",default_inbound_policy="",enable_debug_sidecar="",enable_external_profiles="",image_pull_policy="",inbound_port="",init_image="",init_image_version="",namespace="apps-dev",opaque_ports="",outbound_port="",owner_kind="deployment",pod_inbound_ports="",proxy_await="",proxy_cpu_limit="",proxy_cpu_request="",proxy_ephemeral_storage_limit="",proxy_ephemeral_storage_request="",proxy_image="",proxy_inbound_connect_timeout="",proxy_inbound_discovery_cache_unused_timeout="",proxy_log_format="",proxy_log_level="",proxy_memory_limit="",proxy_memory_request="",proxy_outbound_connect_timeout="",proxy_outbound_discovery_cache_unused_timeout="",proxy_require_identity_inbound_ports="",proxy_uid="",proxy_version="",shutdown_grace_period="",skip_inbound_ports="",skip_outbound_ports="",skip_subnets=""} 1
proxy_inject_admission_requests_total{access_log="",admin_port="",annotation_at="workload",control_port="",default_inbound_policy="",enable_debug_sidecar="",enable_external_profiles="",image_pull_policy="",inbound_port="",init_image="",init_image_version="",namespace="linkerd-viz",opaque_ports="",outbound_port="",owner_kind="deployment",pod_inbound_ports="",proxy_await="",proxy_cpu_limit="",proxy_cpu_request="",proxy_ephemeral_storage_limit="",proxy_ephemeral_storage_request="",proxy_image="",proxy_inbound_connect_timeout="",proxy_inbound_discovery_cache_unused_timeout="",proxy_log_format="",proxy_log_level="",proxy_memory_limit="",proxy_memory_request="",proxy_outbound_connect_timeout="",proxy_outbound_discovery_cache_unused_timeout="",proxy_require_identity_inbound_ports="",proxy_uid="",proxy_version="",shutdown_grace_period="",skip_inbound_ports="",skip_outbound_ports="",skip_subnets=""} 5
# HELP proxy_inject_admission_responses_total A counter for number of admission responses from proxy injector.
# TYPE proxy_inject_admission_responses_total counter
proxy_inject_admission_responses_total{access_log="",admin_port="",annotation_at="",control_port="",default_inbound_policy="",enable_debug_sidecar="",enable_external_profiles="",image_pull_policy="",inbound_port="",init_image="",init_image_version="",namespace="linkerd-viz",opaque_ports="",outbound_port="",owner_kind="job",pod_inbound_ports="",proxy_await="",proxy_cpu_limit="",proxy_cpu_request="",proxy_ephemeral_storage_limit="",proxy_ephemeral_storage_request="",proxy_image="",proxy_inbound_connect_timeout="",proxy_inbound_discovery_cache_unused_timeout="",proxy_log_format="",proxy_log_level="",proxy_memory_limit="",proxy_memory_request="",proxy_outbound_connect_timeout="",proxy_outbound_discovery_cache_unused_timeout="",proxy_require_identity_inbound_ports="",proxy_uid="",proxy_version="",shutdown_grace_period="",skip="true",skip_inbound_ports="",skip_outbound_ports="",skip_reason="injection_enable_annotation_absent",skip_subnets=""} 1
proxy_inject_admission_responses_total{access_log="",admin_port="",annotation_at="workload",control_port="",default_inbound_policy="",enable_debug_sidecar="",enable_external_profiles="",image_pull_policy="",inbound_port="",init_image="",init_image_version="",namespace="apps-dev",opaque_ports="",outbound_port="",owner_kind="deployment",pod_inbound_ports="",proxy_await="",proxy_cpu_limit="",proxy_cpu_request="",proxy_ephemeral_storage_limit="",proxy_ephemeral_storage_request="",proxy_image="",proxy_inbound_connect_timeout="",proxy_inbound_discovery_cache_unused_timeout="",proxy_log_format="",proxy_log_level="",proxy_memory_limit="",proxy_memory_request="",proxy_outbound_connect_timeout="",proxy_outbound_discovery_cache_unused_timeout="",proxy_require_identity_inbound_ports="",proxy_uid="",proxy_version="",shutdown_grace_period="",skip="false",skip_inbound_ports="",skip_outbound_ports="",skip_reason="",skip_subnets=""} 1
proxy_inject_admission_responses_total{access_log="",admin_port="",annotation_at="workload",control_port="",default_inbound_policy="",enable_debug_sidecar="",enable_external_profiles="",image_pull_policy="",inbound_port="",init_image="",init_image_version="",namespace="linkerd-viz",opaque_ports="",outbound_port="",owner_kind="deployment",pod_inbound_ports="",proxy_await="",proxy_cpu_limit="",proxy_cpu_request="",proxy_ephemeral_storage_limit="",proxy_ephemeral_storage_request="",proxy_image="",proxy_inbound_connect_timeout="",proxy_inbound_discovery_cache_unused_timeout="",proxy_log_format="",proxy_log_level="",proxy_memory_limit="",proxy_memory_request="",proxy_outbound_connect_timeout="",proxy_outbound_discovery_cache_unused_timeout="",proxy_require_identity_inbound_ports="",proxy_uid="",proxy_version="",shutdown_grace_period="",skip="false",skip_inbound_ports="",skip_outbound_ports="",skip_reason="",skip_subnets=""} 5
# HELP replicaset_cache_size Number of items in the client-go replicaset cache
# TYPE replicaset_cache_size gauge
replicaset_cache_size{cluster="local"} 87
# HELP replicationcontroller_cache_size Number of items in the client-go replicationcontroller cache
# TYPE replicationcontroller_cache_size gauge
replicationcontroller_cache_size{cluster="local"} 0
# HELP statefulset_cache_size Number of items in the client-go statefulset cache
# TYPE statefulset_cache_size gauge
statefulset_cache_size{cluster="local"} 0

Updated 1:

Here is code to reproduce the errors. The code is a Pulumi Python program that deploys Linkerd to AWS EKS.

https://github.com/omidraha/pulumi_example/blob/main/linkerd/linkerd.py
https://github.com/omidraha/pulumi_example/blob/main/__main__.py
https://github.com/omidraha/pulumi_example/tree/main/linkerd

Updated 2:

I successfully deployed Linkerd. The deployment code, written in Python with Pulumi, is available here: https://github.com/omidraha/pulumi_example/tree/main/linkerd
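
For context, here is a minimal sketch of that kind of Pulumi program, assuming the upstream `linkerd-crds` and `linkerd-control-plane` Helm charts from `https://helm.linkerd.io/stable`. The certificate file names are placeholders (the mTLS certs must be generated beforehand, e.g. with `step`), and this is not the exact code in the linked repo:

```python
import pulumi
import pulumi_kubernetes as k8s

# Placeholder file names: a trust anchor and issuer cert/key generated ahead of time.
trust_anchor_pem = open("ca.crt").read()
issuer_cert_pem = open("issuer.crt").read()
issuer_key_pem = open("issuer.key").read()

# Namespace for the control plane.
ns = k8s.core.v1.Namespace(
    "linkerd-ns",
    metadata=k8s.meta.v1.ObjectMetaArgs(name="linkerd"),
)

# CRDs must be installed before the control plane chart.
crds = k8s.helm.v3.Release(
    "linkerd-crds",
    chart="linkerd-crds",
    namespace="linkerd",
    repository_opts=k8s.helm.v3.RepositoryOptsArgs(repo="https://helm.linkerd.io/stable"),
    opts=pulumi.ResourceOptions(depends_on=[ns]),
)

# Control plane, with the identity trust anchor and issuer credentials passed as values.
control_plane = k8s.helm.v3.Release(
    "linkerd-control-plane",
    chart="linkerd-control-plane",
    namespace="linkerd",
    repository_opts=k8s.helm.v3.RepositoryOptsArgs(repo="https://helm.linkerd.io/stable"),
    values={
        "identityTrustAnchorsPEM": trust_anchor_pem,
        "identity": {
            "issuer": {
                "tls": {"crtPEM": issuer_cert_pem, "keyPEM": issuer_key_pem},
            },
        },
    },
    opts=pulumi.ResourceOptions(depends_on=[crds]),
)
```

If the issuer certificate or trust anchor is wrong or expired, the proxies cannot obtain an identity, which is one way to end up with the reconnect warnings shown above.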

kflynn commented 1 year ago

This seems like two distinct situations that look the same, but aren't:

@chris-ng-scmp and @Nilubkal, I think that you two are probably seeing the bug where the root problem is a certificate failure, as described in #11237. Could y'all take a look there and confirm?

Everyone else: is this just a startup problem, or are you finding that the mesh never gets running?

stale[bot] commented 9 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.