kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0

Nameserver not set correctly in resolv.conf #3791

Open jhoogstraat opened 1 day ago

jhoogstraat commented 1 day ago

What happened:

For new containers in the cluster, the DNS setup does not work correctly. Instead of the CoreDNS service IP, a more "local network"-looking address is used as the nameserver. The search directive is completely missing. This prevents communication with other pods.

What you expected to happen:

Containers can communicate via DNS with containers on other nodes and in other pods.
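
One concrete way to exercise this expectation (the pod name and image are illustrative, not from the report):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local

With working pod DNS this resolves to the kubernetes Service's ClusterIP; with the broken resolv.conf described above it fails.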

How to reproduce it (as minimally and precisely as possible):

Command used: kind create cluster --config=kind-cluster.yaml

with kind-cluster.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4

name: dev-cloud
networking:
  ipFamily: ipv4
nodes:
  - role: control-plane
    # ingress-controller uses nodeSelector "ingress-ready" to force its pod to this node.
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 30001
        hostPort: 80
      - containerPort: 30002
        hostPort: 8080
    extraMounts:
      - hostPath: data/
        containerPath: /data/
  - role: worker
    extraMounts:
      - hostPath: data/
        containerPath: /data/
  - role: worker
    extraMounts:
      - hostPath: data/
        containerPath: /data/

Anything else we need to know?: kind-logs.zip - I started a debug container and printed the content of resolv.conf.
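
A minimal sketch of such a check (pod name and image are illustrative):

kubectl run dns-debug --image=busybox:1.36 --restart=Never -- sleep 3600
kubectl exec dns-debug -- cat /etc/resolv.conf

On a healthy cluster the output should look roughly like the following; the nameserver must be the CoreDNS Service ClusterIP (10.96.0.10 under kind's default service CIDR), not a local-network address:

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5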

Environment:

Server:
 Containers: 4
  Running: 4
  Paused: 0
  Stopped: 0
 Images: 37
 Server Version: 27.3.1
 Storage Driver: overlayfs
  driver-type: io.containerd.snapshotter.v1
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 472731909fa34bd7bc9c087e4c27943f9835f111
 runc version: v1.1.13-0-g58aa920
 init version: de40ad0
 Security Options:
  seccomp
   Profile: unconfined
  cgroupns
 Kernel Version: 6.10.11-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: aarch64
 CPUs: 4
 Total Memory: 9.705GiB
 Name: docker-desktop
 ID: 2e072ff7-0f10-4c9c-94a6-2863dadc00a4
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Labels:
  com.docker.desktop.address=unix:///Users/XXXX/Library/Containers/com.docker.docker/Data/docker-cli.sock
 Experimental: false
 Insecure Registries:
  192.168.1.81:5000
  hubproxy.docker.internal:5555
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: daemon is not using the default seccomp profile

- OS (e.g. from `/etc/os-release`): MacBook Pro with macOS 15.1 (24B83) and Docker Desktop
- Kubernetes version: (use `kubectl version`): 

Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.31.2


- Any proxies or other special environment settings?:

BenTheElder commented 1 day ago

It seems unlikely that the config above is the minimal reproducer. A reproducer also needs some actual pods we can run, plus a more concrete description of how this deviates from any other Kubernetes cluster.

For new containers in the cluster, the DNS setup does not work correctly.

containers => pods?

what pods? how are they configured?

this was working for old containers? what old containers? what is different between the old and the new?

Instead of the CoreDNS service IP, a more "local network"-looking address is used as the nameserver.

are they by any chance host network pods?
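
For context: a pod with hostNetwork: true and the default dnsPolicy (ClusterFirst) falls back to the node's resolv.conf, which would produce exactly this symptom. A minimal sketch of the usual workaround (pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hostnet-example  # hypothetical name
spec:
  hostNetwork: true
  # Without this, a host-network pod inherits the node's resolv.conf:
  # a "local network"-looking nameserver and no cluster search domains.
  dnsPolicy: ClusterFirstWithHostNet
  containers:
    - name: main
      image: busybox:1.36
      command: ["sleep", "3600"]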

The search directive is completely missing.

The in-cluster search settings come from Kubernetes, not kind. The host's search parameters are another story.
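
One way to check what the cluster DNS should be, independent of any single pod (standard kind/kubeadm object names; the Service is called kube-dns even though it fronts CoreDNS):

kubectl -n kube-system get service kube-dns
kubectl -n kube-system get configmap coredns -o yaml

The ClusterIP of that Service is what should appear as the nameserver in every ClusterFirst pod's resolv.conf.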

BenTheElder commented 12 hours ago

/triage needs-information