k8snetworkplumbingwg / multus-cni

A CNI meta-plugin for multi-homed pods in Kubernetes
Apache License 2.0

Unable to resolve Kubernetes internal DNS when k8s.v1.cni.cncf.io/networks is added to a pod #1294

Closed (antoniomerlin closed this 4 weeks ago)

antoniomerlin commented 1 month ago

When I add the Multus-specific annotation to a pod, that pod can no longer resolve Kubernetes internal DNS, so it cannot directly reach Kubernetes-defined services and endpoints. The application is still reachable from the external network through the Multus-assigned IP, but the pod cannot reach CoreDNS.

Is there something I am missing in this Multus config? Basically, I want applications inside the cluster to use Kubernetes DNS resolution for inter-pod communication, while also being reachable from the external network via the Multus macvlan-provided IP.

I do not want applications inside the cluster to depend on the external network for inter-pod communication.

The pod uses the default dnsPolicy, ClusterFirst.

Installed the Multus thin plugin using this link:

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset.yml
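
A quick sanity check, not from the original report: the thin-plugin daemonset generates a Multus CNI config on each node that wraps the existing default network (here Calico) as its delegate.

# On any node: expect a generated 00-multus.conf (lexically first,
# so kubelet selects it) alongside the original Calico conflist.
ls /etc/cni/net.d/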

Calico config:

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "log_file_path": "/var/log/calico/cni/cni.log",
      "datastore_type": "kubernetes",
      "nodename": "__KUBERNETES_NODE_NAME__",
      "mtu": __CNI_MTU__,
      "ipam": {
          "type": "calico-ipam"
      },
      "container_settings": {
          "allow_ip_forwarding": true
      },
      "policy": {
          "type": "k8s"
      },
      "kubernetes": {
          "kubeconfig": "__KUBECONFIG_FILEPATH__"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    },
    {
      "type": "bandwidth",
      "capabilities": {"bandwidth": true}
    }
  ]
}

Pod template before adding annotations:

apiVersion: v1
kind: Pod
metadata:
  name: multitool
spec:
  containers:
    - name: multitool
      image: praqma/network-multitool
      command:
        - sleep
        - '3600'
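
Before the annotation, cluster DNS works from the pod. A minimal check, assuming the multitool image ships nslookup:

# Resolves via the CoreDNS address in the pod's /etc/resolv.conf,
# reached over the Calico-managed eth0.
kubectl exec multitool -- nslookup kubernetes.default.svc.cluster.local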


NetworkAttachmentDefinition:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan
spec:
  config: >-
    { "cniVersion": "0.3.1", "name": "macvlan", "type": "macvlan", "mode":
    "bridge", "master": "ens18", "ipam": { "type": "host-local", "subnet":
    "193.169.1.0/24", "rangeStart": "193.169.1.100", "rangeEnd":
    "193.169.1.150", "routes": [ { "dst": "0.0.0.0/0" } ], "gateway":
    "193.169.1.1" } }

Pod template after adding annotations:

apiVersion: v1
kind: Pod
metadata:
  name: multitool
  annotations:
    k8s.v1.cni.cncf.io/networks: >-
          [{"name": "macvlan"}]
spec:
  containers:
    - name: multitool
      image: praqma/network-multitool
      command:
        - sleep
        - '3600'
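
After the annotation, Multus attaches the macvlan network as a second interface (net1 by convention), and the same lookup is the check that now fails:

kubectl exec multitool -- ip addr show net1   # confirm the macvlan attachment exists
kubectl exec multitool -- nslookup kubernetes.default.svc.cluster.local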


dougbtv commented 1 month ago

Looks like you might be changing your default route with the macvlan config?

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan
spec:
  config: >-
    { "cniVersion": "0.3.1", "name": "macvlan", "type": "macvlan", "mode":
    "bridge", "master": "ens18", "ipam": { "type": "host-local", "subnet":
    "193.169.1.0/24", "rangeStart": "193.169.1.100", "rangeEnd":
    "193.169.1.150", "routes": [ { "dst": "0.0.0.0/0" } ], "gateway":
    "193.169.1.1" } }

Maybe try it without the gateway change? That's probably making it so you don't hit the DNS server.

Also try an ip route from inside the pod (e.g. kubectl exec ...), as sketched below.
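
For reference, a sketch of that check with the pod from above:

# With "gateway" set in the NAD, look for a default route via
# 193.169.1.1 on net1 shadowing the cluster default on eth0.
kubectl exec multitool -- ip route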

antoniomerlin commented 1 month ago

Updated the NetworkAttachmentDefinition, still the same error:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan
spec:
  config: >-
    { "cniVersion": "0.3.1", "name": "macvlan", "type": "macvlan", "mode":
    "bridge", "master": "ens18", "ipam": { "type": "host-local", "subnet":
    "193.169.1.0/24", "rangeStart": "193.169.1.100", "rangeEnd":
    "193.169.1.150", "routes": [ { "dst": "0.0.0.0/0" } ] } }


antoniomerlin commented 4 weeks ago

Able to resolve it using #847: manually adding a route in the pod for the k3s service CIDR.
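
For anyone hitting the same thing: even the updated NetworkAttachmentDefinition still requested a 0.0.0.0/0 route on net1, which plausibly kept shadowing the cluster default route. A sketch of that workaround, with hypothetical values: 10.43.0.0/16 is k3s's default service CIDR and 169.254.1.1 is the link-local gateway Calico usually installs on eth0; adjust both to your cluster, and note the pod needs NET_ADMIN for ip route to succeed.

# Send service traffic (including CoreDNS) back over eth0 while the
# macvlan default route keeps handling external traffic.
kubectl exec multitool -- ip route add 10.43.0.0/16 via 169.254.1.1 dev eth0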