kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0

DNS resolution unreliable in v0.24.0 when using network policies #3713

Open jglick opened 4 weeks ago

jglick commented 4 weeks ago

I tried using the new built-in NetworkPolicy support in v0.24.0 (#842 / #3612) instead of Calico. It did not work well, apparently due to DNS problems.

What happened:

Requests to pods inside the cluster (by service name) or to external hosts were often delayed, or failed outright, due to DNS failures.

What you expected to happen:

Requests should succeed immediately, up to the responsiveness of the service.

How to reproduce it (as minimally and precisely as possible):

Will try to put together a minimal test case. In the meantime:

In my original setup a cluster was created with

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  podSubnet: 192.168.0.0/16

and I then installed Calico using

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/calico.yaml
kubectl -n kube-system set env daemonset/calico-node FELIX_IGNORELOOSERPF=true

and defined some network policies. This worked fine, including after creating a fresh cluster using v0.24.0; for example from a pod I could access another pod in the same namespace using a service name

curl -iv http://othersvc/

provided of course that othersvc permitted that incoming connection; the command prints content essentially instantly. Contacting external (Internet) services was also as reliable as the site itself.
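
(The policies themselves are not reproduced here; a permitting rule would look roughly like the following sketch, in which the labels, policy name, and port are illustrative rather than my actual manifests:)

# Illustrative only: allow ingress to the pods backing othersvc (app: othersvc)
# from a client pod (app: client) on its serving port.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-to-othersvc
spec:
  podSelector:
    matchLabels:
      app: othersvc
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: client
    ports:
    - port: 8080
      protocol: TCP
EOF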

After deleting the Calico setup and rerunning the exact same scenario, sometimes the curl command works instantly as before; sometimes it fails and claims the othersvc host could not be found; other times it works, but only after a delay of several seconds. Contacting hosts on the Internet is also unreliable, sometimes giving a hostname resolution error. For example

curl -Iv http://www.google.com/

sometimes works and sometimes does not

curl: (6) Could not resolve host: www.google.com

whereas

curl -Iv http://74.125.21.99/

works reliably.
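
(To quantify the flakiness, something like the following loop can be run against the affected pod; the deployment name is a placeholder for whatever is running curl:)

# Repeat the name-based and IP-based requests and compare how often each fails or stalls.
for i in $(seq 1 20); do
  kubectl exec deploy/<affected-deployment> -- curl -sS -o /dev/null -w 'name: %{http_code} %{time_total}s\n' http://www.google.com/
  kubectl exec deploy/<affected-deployment> -- curl -sS -o /dev/null -w 'ip:   %{http_code} %{time_total}s\n' http://74.125.21.99/
done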

Environment:

kind v0.24.0, Docker 27.1.2, Ubuntu Noble

```console
$ kind version
kind v0.24.0 go1.22.6 linux/amd64
$ docker info
Client: Docker Engine - Community
 Version:    27.1.2
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.16.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.29.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 2
  Running: 0
  Paused: 0
  Stopped: 2
 Images: 22
 Server Version: 27.1.2
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8fc6bcff51318944179630522a095cc9dbf9f353
 runc version: v1.1.13-0-g58aa920
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.8.0-40-generic
 Operating System: Ubuntu 24.04 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 28
 Total Memory: 62.44GiB
 Name: tanguy
 ID: 4de98847-7e1b-4640-8bf3-93112cb19188
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: jglick
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

$ cat /etc/os-release
PRETTY_NAME="Ubuntu 24.04 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo

$ kubectl version
Client Version: v1.30.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
BenTheElder commented 4 weeks ago

/assign @aojea

jglick commented 4 weeks ago

(Just filing this early since 0.24.0 just came out. Will try to create a self-contained test case. I have not yet checked whether the same problem affects earlier kind releases using the built-in kindnet, or whether it affects clusters not using NetworkPolicy at all.)

BenTheElder commented 4 weeks ago

I wonder if this only applies to external DNS, or in some specific environment (kernel / loaded modules maybe?)

We are making a lot of in-cluster DNS queries in CI and I haven't noticed an increase in flakiness, but we're also not actually running network policy tests yet and the majority of tests don't make egress requests (excluding pulling images at the node level).

aojea commented 4 weeks ago

hmm, it seems related to this https://github.com/kubernetes-sigs/kube-network-policies/issues/12

@jglick I'd appreciate it if you could give me a reproducer

jglick commented 4 weeks ago

The problem affects Java processes, not just curl. I have not yet tried installing tools like nslookup or dig.

It is not reproducible just by running kind create cluster and then running e.g. curlimages/curl.

In my full reproducer, the problem is fixed simply by declining to create any NetworkPolicies.

I will try to start creating a minimal reproducer I can share.

aojea commented 4 weeks ago

The problem affects Java processes, not just curl

hmm, then it may be related to the way the Java resolver performs DNS resolution; IIRC it uses a stub resolver and not libc: https://biriukov.dev/docs/resolver-dual-stack-application/8-stub-resolvers-in-languages/#84-java-and-netty

once we have the reproducer we can get to the bottom of it; appreciate the effort

BenTheElder commented 4 weeks ago

@aojea Java in addition to curl; presumably curl is using libc (though we don't know which build and which libc)

BenTheElder commented 4 weeks ago

Might be glibc vs musl.
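
One quick check from inside an affected pod would be to compare the stub-resolver path with the libc path (assuming the image ships both busybox nslookup and getent; the pod name is a placeholder):

# Stub-resolver path: nslookup sends its own queries to the nameserver in /etc/resolv.conf.
kubectl exec <affected-pod> -- nslookup www.google.com
# libc/NSS path: getent hosts resolves through the C library.
kubectl exec <affected-pod> -- getent hosts www.google.com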

jglick commented 3 weeks ago

OK, I have a reproducer. A simple kind create cluster, then apply the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: x
  labels:
    app: x
spec:
  replicas: 1
  selector:
    matchLabels:
      app: x
  template:
    metadata:
      labels:
        app: x
    spec:
      containers:
      - name: x
        image: curlimages/curl
        command:
        - sleep
        args:
        - infinity
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: x
spec:
  podSelector:
    matchLabels:
      app: x
  ingress:
  - ports:
    - port: 9999
      protocol: TCP
    from:
    - podSelector:
        matchLabels:
          app: 'y'

And then repeatedly run

kubectl exec deploy/x -- time nslookup www.google.com

Sometimes this will succeed instantly; most times it will wait precisely 2.5s before succeeding. If you

kubectl delete networkpolicies.networking.k8s.io x

then the lookup begins reliably completing immediately. Note that the policy is controlling ingress to the test pod while the affected action should involve only outbound connections.
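
To see the pattern more clearly, the lookup can be run in a loop (a rough sketch, relying on the busybox time and nslookup already present in the curlimages/curl image):

# Run the lookup 20 times and print how long each attempt took.
for i in $(seq 1 20); do
  kubectl exec deploy/x -- sh -c 'time nslookup www.google.com >/dev/null' 2>&1 | grep real
done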

aojea commented 3 weeks ago

@jglick can you check if at least one of the coredns pods is on the same node as the affected pod?
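
For example (a quick check; k8s-app=kube-dns is the label on the coredns pods):

# Show which nodes the coredns pods and the test pod landed on.
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
kubectl get pods -o wide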

jglick commented 3 weeks ago

the same node

WDYM? There is only one node here. From a freshly created cluster,

$ kubectl get po -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-6f6b679f8f-4kzpv                     1/1     Running   0          29s
kube-system          coredns-6f6b679f8f-ddrwm                     1/1     Running   0          29s
kube-system          etcd-kind-control-plane                      1/1     Running   0          36s
kube-system          kindnet-bw8b7                                1/1     Running   0          29s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          35s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          35s
kube-system          kube-proxy-kxdmq                             1/1     Running   0          29s
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          35s
local-path-storage   local-path-provisioner-57c5987fd4-5dm47      1/1     Running   0          29s

or

$ kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   33s   v1.31.0

after a few seconds as expected.

aojea commented 3 weeks ago

I see. I think the problem happens when the destination is on the same node... smells like a kernel bug

aojea commented 3 weeks ago

can you edit the coredns deployment to set only one replica and try again?

kubectl edit deployment coredns -n kube-system

jglick commented 3 weeks ago

Indeed after

kubectl -n kube-system scale deployment coredns --replicas 1

the problem seems to go away.

aojea commented 2 weeks ago

It seems to be a kernel bug. I will track it in the original project: https://github.com/kubernetes-sigs/kube-network-policies/issues/12#issuecomment-2308245173

In the meantime, as a workaround:

kubectl -n kube-system scale deployment coredns --replicas 1

sorry for not having a better answer yet, and thanks for the report and the feedback

jglick commented 2 weeks ago

sorry for not having a better answer yet

Not at all! The workaround seems quite straightforward. For that matter, why not set this deployment to one replica by default in Kind, or at least single-node Kind? It does not seem like it really needs to be highly available or scalable in a test cluster.
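
(In the meantime the workaround is easy enough to script right after cluster creation; a minimal sketch, assuming the default cluster and the stock coredns deployment:)

# Create the cluster, then pin coredns to a single replica and wait for the rollout.
kind create cluster
kubectl -n kube-system scale deployment coredns --replicas 1
kubectl -n kube-system rollout status deployment coredns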

csuich2 commented 2 weeks ago

Edit: Not related to this issue. See comment below.

I'm running into the same issue but scaling coredns down to a single replica does not fix the issue. In my case the issue comes from istio-proxy sidecars failing with:

2024-08-26T14:12:46.176927Z    warning    envoy config external/envoy/source/common/config/grpc_stream.h:191    StreamAggregatedResources gRPC config stream to xds-grpc closed since 103s ago: 14, connection error:
 desc = "transport: Error while dialing: dial tcp: lookup istiod.istio-system.svc on 10.96.0.10:53: read udp 10.244.0.201:47082->10.96.0.10:53: i/o timeout"    thread=21

I'm running a single node Kind cluster and some, but not all, istio-proxy containers are failing with this error.

aojea commented 2 weeks ago

I'm running into the same issue

there are a lot of subtle parameters that we need to validate to confirm it is the same issue.

Can you paste your applied NetworkPolicy manifests and confirm your kind version and kind image?

csuich2 commented 2 weeks ago

Sorry about that - this is our fault. We didn't realize that OOTB NetworkPolicy enforcement was added in v0.24.0 and some of our existing NetworkPolicies started blocking access to DNS once we upgraded to v0.24.0 from v0.23.0.

PEBKAC.
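
For anyone else who hits this after upgrading: restrictive policies now need an explicit allowance for DNS. A minimal sketch of the usual pattern (the policy name and pod selector are illustrative, not our actual manifests):

# Illustrative only: allow egress to kube-dns (coredns) on port 53 from the pods
# whose egress is otherwise restricted, so name resolution keeps working.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
EOF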

BenTheElder commented 1 week ago

So in summary we have some network policies that are now taking effect, and there's no bug?

If not, please re-open.

/close

k8s-ci-robot commented 1 week ago

@BenTheElder: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kind/issues/3713#issuecomment-2329801385):

> So in summary we have some network policies that are now taking effect, and there's no bug?
>
> If not, please re-open.
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

jglick commented 1 week ago

There is a bug, as stated in the original description. There is a workaround but users should not need to dig for it. Can Kind just switch to one replica by default, as in https://github.com/kubernetes-sigs/kind/issues/3713#issuecomment-2308412210?

aojea commented 1 week ago

There is a bug, as stated in the original description. There is a workaround but users should not need to dig for it. Can Kind just switch to one replica by default, as in #3713 (comment)?

I'm talking with the kernel maintainers to see if we can fix it at the root. If that takes time we'll try to figure out other workarounds, but I prefer not to use kind to compensate for other components' bugs, so the order of preference is :)

  1. kernel
  2. kube-network-policies
  3. kindnet
  4. kind

We have a reproducer, so I expect we can have more progress this week or next: https://bugzilla.netfilter.org/show_bug.cgi?id=1766

Sorry for the inconvenience