kubeovn / kube-ovn

A Bridge between SDN and Cloud Native (Project under CNCF)
https://kubeovn.github.io/docs/stable/en/
Apache License 2.0

[BUG] Cilium and VPC-DNS #4201

Open · CiraciNicolo opened this issue 1 week ago

CiraciNicolo commented 1 week ago

Kube-OVN Version

1.12.17

Kubernetes Version

v1.29.5+k3s1

Operation-system/Kernel Version

"Ubuntu 22.04.4 LTS" Linux host 5.15.0-112-generic #122-Ubuntu SMP Thu May 23 07:48:21 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Description

Using Kube-OVN in CNI chaining with Cilium, and therefore with Kube-OVN's load balancer disabled, renders the vpc-dns feature unusable, since it is explicitly disabled here
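
For context, in a chaining setup it is the OVN load balancer that gets turned off; with the Helm chart this is a single value (a sketch, assuming the same func block used in the values later in this issue):

    func:
      ENABLE_LB: false    # hands Service load balancing to Cilium; as noted above, this also disables vpc-dns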

Steps To Reproduce

  1. Install Cilium and then KubeOVN as reported here
  2. Create a VPC
  3. Enable VPC-DNS (a minimal sketch follows the list)
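
A minimal sketch of step 3, based on the vpc-internal-dns doc linked below and the objects that appear later in this thread (the ConfigMap keys and NAD names are taken from the doc and may differ per setup):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: vpc-dns-config
      namespace: kube-system
    data:
      enable-vpc-dns: "true"
      coredns-vip: 10.96.0.10            # VIP exposed inside the VPC via a SwitchLBRule
      nad-name: ovn-nad                  # NetworkAttachmentDefinition for the second NIC
      nad-provider: ovn-nad.default.ovn
    ---
    apiVersion: kubeovn.io/v1
    kind: VpcDns
    metadata:
      name: alpha
    spec:
      replicas: 1
      vpc: alpha
      subnet: alpha-default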

Current Behavior

VPC-DNS does not work: with the OVN LB disabled, the VPC-DNS pods are not scheduled. Re-enabling the LB schedules the pods, but the service is still not reachable via its VIP.

Expected Behavior

VPC-DNS should work

CiraciNicolo commented 1 week ago

For the sake of completeness, these are the configurations of both Cilium and Kube-OVN; the commands to apply them are sketched after the list. The cluster is a single-node k3s instance, since this is a PoC.

  1. K3S systemd
    ExecStart=/usr/local/bin/k3s \
    server \
    --disable=servicelb \
    --disable=traefik \
    --disable=metrics-server \
    --flannel-backend=none \
    --disable-kube-proxy \
    --disable-network-policy \
    --disable-helm-controller \
    --disable-cloud-controller \
    --cluster-cidr=10.69.0.0/16 \
    --service-cidr=10.96.0.0/12 \
  2. Chaining configuration
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cni-configuration
    data:
      cni-config: |-
        {
          "name": "generic-veth",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "kube-ovn",
              "server_socket": "/run/openvswitch/kube-ovn-daemon.sock"
            },
            {
              "type": "portmap",
              "snat": true,
              "capabilities": {
                "portMappings": true
              }
            },
            {
              "type": "cilium-cni",
              "chaining-mode": "generic-veth"
            }
          ]
        }
  3. Cilium helm values
    cluster:
      name: root
      id: 0
    cni:
      chainingMode: generic-veth
      chainingTarget: kube-ovn
      customConf: true
      configMap: cni-configuration
    devices: "eth+ ovn0" ## https://github.com/kubeovn/kube-ovn/issues/4089#issue-2317593927
    enableIPv4Masquerade: false
    enableIdentityMark: false
    kubeProxyReplacement: true
    hubble:
      relay:
        enabled: true
      ui:
        enabled: true
    ipam:
      mode: cluster-pool
      operator:
        clusterPoolIPv4PodCIDRList: 10.69.0.0/16
    ipv4:
      enabled: true
    ipv6:
      enabled: false
    k8sServiceHost: 172.16.150.111
    k8sServicePort: 6443
    operator:
      replicas: 1
    routingMode: "native"
    sessionAffinity: true
    socketLB:
      hostNamespaceOnly: true ## https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/#socket-loadbalancer-bypass-in-pod-namespace
    version: 1.15.6
  4. KubeOVN helm values
    global:
      registry:
        address: docker.elmec.com/proxy-cache/kubeovn
      images:
        kubeovn:
          tag: v1.12.17
    cni_conf:
      CNI_CONFIG_PRIORITY: "10"
    func:
      ENABLE_NP: false
      ENABLE_TPROXY: true
    ipv4:
      POD_CIDR: "10.69.0.0/16"
      POD_GATEWAY: "10.69.0.1"
      SVC_CIDR: "10.96.0.0/12"
      JOIN_CIDR: "100.69.0.0/16"
      PINGER_EXTERNAL_ADDRESS: "1.1.1.1"
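
For reference, these values would be applied roughly as follows (a sketch: the chart repos are the standard ones for each project; release names and values file names are illustrative):

    helm repo add cilium https://helm.cilium.io
    helm repo add kubeovn https://kubeovn.github.io/kube-ovn/
    # install order as in step 1 above: Cilium first, then Kube-OVN in chaining mode
    helm install cilium cilium/cilium -n kube-system -f cilium-values.yaml
    helm install kube-ovn kubeovn/kube-ovn -f kube-ovn-values.yaml
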
CiraciNicolo commented 1 week ago

The LB is created, as shown in the output of ovn-nbctl lb-list:

61dd9fec-032a-4a32-a7b6-d3959c688652    vpc-alpha-tcp-lo    tcp        10.96.0.10:53           10.100.0.2:53
                                                            tcp        10.96.0.10:9153         10.100.0.2:9153
2feecfab-8c8d-431c-857a-37ee4ea94085    vpc-alpha-udp-lo    udp        10.96.0.10:53           10.100.0.2:53

Also, I did not add a VPC NAT Gateway. Is a gateway needed for VPC DNS?

bobz965 commented 1 week ago

VPC DNS does not need a VPC NAT Gateway. VPC DNS runs its own CoreDNS deployment, much like the VPC NAT Gateway runs its own workload.

Please refer to the doc: https://kubeovn.github.io/docs/v1.13.x/en/advance/vpc-internal-dns/?h=vpc

CiraciNicolo commented 1 week ago

Hi! OK, thanks for the clarification about the NAT GW. However, I still cannot resolve DNS inside the VPC:

Simple DNS resolution

root@c4i-bastion:/home/ubuntu# kubectl get pod -n alpha dnsutils -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
dnsutils   1/1     Running   0          3m54s   10.100.0.6   c4i-bastion   <none>           <none>
root@c4i-bastion:/home/ubuntu# kubectl get slr
NAME            VIP          PORT(S)                  SERVICE                         AGE
vpc-dns-alpha   10.96.0.10   53/UDP,53/TCP,9153/TCP   kube-system/slr-vpc-dns-alpha   15h
root@c4i-bastion:/home/ubuntu# kubectl exec -tn alpha dnsutils -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10
;; connection timed out; no servers could be reached

command terminated with exit code 1

tcpdump

root@c4i-bastion:~# tcpdump -i any host 10.100.0.6
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
09:19:34.441612 f57883dadcf1_h P   IP 10.100.0.6.60218 > 10.96.0.10.domain: 23493+ A? kubernetes.default.svc.cluster.local.alpha.svc.cluster.local. (78)
09:19:39.441801 f57883dadcf1_h P   IP 10.100.0.6.60218 > 10.96.0.10.domain: 23493+ A? kubernetes.default.svc.cluster.local.alpha.svc.cluster.local. (78)
09:19:39.645206 f57883dadcf1_h P   ARP, Request who-has 10.100.0.1 tell 10.100.0.6, length 28
09:19:39.645903 f57883dadcf1_h Out ARP, Reply 10.100.0.1 is-at 0a:90:08:c5:d5:d9 (oui Unknown), length 28
09:19:44.442086 f57883dadcf1_h P   IP 10.100.0.6.60218 > 10.96.0.10.domain: 23493+ A? kubernetes.default.svc.cluster.local.alpha.svc.cluster.local. (78)
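
Note that the queries above carry the pod's search domain appended (kubernetes.default.svc.cluster.local.alpha.svc.cluster.local); querying the VIP with a trailing dot rules out search-list expansion as a factor, e.g.:

    kubectl exec -tn alpha dnsutils -- nslookup kubernetes.default.svc.cluster.local. 10.96.0.10
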
CiraciNicolo commented 1 week ago

Inspecting the traffic with ovs-tcpdump, I see that the communication goes towards 10.69.0.3, which is the pod IP of the standalone CoreDNS. So it seems that the SLR is not applied.
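
One way to confirm this is to check which load balancers are actually bound to the VPC's logical switch and router (a sketch, using the switch/router names from this setup):

    kubectl ko nbctl ls-lb-list alpha-default    # LBs attached to the subnet's logical switch
    kubectl ko nbctl lr-lb-list alpha            # LBs attached to the VPC's logical router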

bobz965 commented 1 week ago

Please attach your vpc-dns ConfigMap and the CoreDNS deployment pod.

CiraciNicolo commented 1 week ago

I don't think the issue is the deployment, because if I use nslookup specifying the IP of the VPC-DNS pod, everything works fine:

root@c4i-bastion:/home/ubuntu# kubectl get pod -n kube-system vpc-dns-alpha-5b5c864c98-jnp2w -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
vpc-dns-alpha-5b5c864c98-jnp2w   1/1     Running   0          6h50m   10.100.0.7   c4i-bastion   <none>           <none>
root@c4i-bastion:/home/ubuntu# kubectl exec -tn alpha dnsutils -- nslookup kubernetes.default.svc.cluster.local 10.100.0.7
Server:     10.100.0.7
Address:    10.100.0.7#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1

root@c4i-bastion:/home/ubuntu# kubectl exec -tn alpha dnsutils -- nslookup google.it 10.100.0.7
Server:     10.100.0.7
Address:    10.100.0.7#53

Name:   google.it
Address: 142.250.180.131

Anyway, here are the VPC-DNS CR and the VPC-DNS pod:

root@c4i-bastion:/home/ubuntu# kubectl get vpc-dnses.kubeovn.io alpha -o yaml
apiVersion: kubeovn.io/v1
kind: VpcDns
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kubeovn.io/v1","kind":"VpcDns","metadata":{"annotations":{},"name":"alpha"},"spec":{"replicas":1,"subnet":"alpha-default","vpc":"alpha"}}
  creationTimestamp: "2024-06-21T08:04:24Z"
  generation: 1
  name: alpha
  resourceVersion: "62483"
  uid: 59819688-2c4a-4fe6-a5d0-c7a249fe0635
spec:
  replicas: 1
  subnet: alpha-default
  vpc: alpha
status:
  active: true
root@c4i-bastion:/home/ubuntu# kubectl get pod -n kube-system vpc-dns-alpha-5b5c864c98-jnp2w -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "generic-veth",
          "interface": "eth0",
          "ips": [
              "10.100.0.7"
          ],
          "mac": "8a:65:e3:36:1f:b4",
          "default": true,
          "dns": {},
          "gateway": [
              "10.100.0.1"
          ]
      },{
          "name": "default/ovn-nad",
          "interface": "net1",
          "ips": [
              "10.69.0.15"
          ],
          "mac": "6e:54:9e:49:7f:99",
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks: default/ovn-nad
    ovn-nad.default.ovn.kubernetes.io/allocated: "true"
    ovn-nad.default.ovn.kubernetes.io/cidr: 10.69.0.0/16
    ovn-nad.default.ovn.kubernetes.io/gateway: 10.69.0.1
    ovn-nad.default.ovn.kubernetes.io/ip_address: 10.69.0.15
    ovn-nad.default.ovn.kubernetes.io/logical_router: ovn-cluster
    ovn-nad.default.ovn.kubernetes.io/logical_switch: ovn-default
    ovn-nad.default.ovn.kubernetes.io/mac_address: 6e:54:9e:49:7f:99
    ovn-nad.default.ovn.kubernetes.io/pod_nic_type: veth-pair
    ovn-nad.default.ovn.kubernetes.io/routed: "true"
    ovn.kubernetes.io/allocated: "true"
    ovn.kubernetes.io/cidr: 10.100.0.0/24
    ovn.kubernetes.io/gateway: 10.100.0.1
    ovn.kubernetes.io/ip_address: 10.100.0.7
    ovn.kubernetes.io/logical_router: alpha
    ovn.kubernetes.io/logical_switch: alpha-default
    ovn.kubernetes.io/mac_address: 8a:65:e3:36:1f:b4
    ovn.kubernetes.io/pod_nic_type: veth-pair
    ovn.kubernetes.io/routed: "true"
  creationTimestamp: "2024-06-21T08:04:24Z"
  generateName: vpc-dns-alpha-5b5c864c98-
  labels:
    k8s-app: vpc-dns-alpha
    pod-template-hash: 5b5c864c98
  name: vpc-dns-alpha-5b5c864c98-jnp2w
  namespace: kube-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: vpc-dns-alpha-5b5c864c98
    uid: fa50730d-c305-4aee-b4fc-4cf992a82a28
  resourceVersion: "62526"
  uid: dd3eeeb2-80fd-4230-9d4e-eac5bed14d7a
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: k8s-app
              operator: In
              values:
              - vpc-dns-alpha
          topologyKey: kubernetes.io/hostname
        weight: 100
  containers:
  - args:
    - -conf
    - /etc/coredns/Corefile
    image: rancher/mirrored-coredns-coredns:1.10.1
    imagePullPolicy: IfNotPresent
    name: coredns
    ports:
    - containerPort: 53
      name: dns
      protocol: UDP
    - containerPort: 53
      name: dns-tcp
      protocol: TCP
    - containerPort: 9153
      name: metrics
      protocol: TCP
    resources:
      limits:
        memory: 170Mi
      requests:
        cpu: 100m
        memory: 70Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_BIND_SERVICE
        drop:
        - all
      readOnlyRootFilesystem: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/coredns
      name: config-volume
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-njmws
      readOnly: true
  dnsPolicy: Default
  enableServiceLinks: true
  initContainers:
  - command:
    - sh
    - -c
    - ip -4 route add 10.96.0.1 via 10.69.0.1 dev net1;ip -4 route add 172.16.150.10
      via 10.69.0.1 dev net1;
    image: docker.elmec.com/proxy-cache/kubeovn/vpc-nat-gateway:v1.12.17
    imagePullPolicy: IfNotPresent
    name: init-route
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-njmws
      readOnly: true
  nodeName: c4i-bastion
  nodeSelector:
    kubernetes.io/os: linux
  preemptionPolicy: PreemptLowerPriority
  priority: 2000000000
  priorityClassName: system-cluster-critical
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: vpc-dns
  serviceAccountName: vpc-dns
  terminationGracePeriodSeconds: 30
  tolerations:
  - key: CriticalAddonsOnly
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - configMap:
      defaultMode: 420
      items:
      - key: Corefile
        path: Corefile
      name: vpc-dns-corefile
    name: config-volume
  - name: kube-api-access-njmws
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-06-21T08:04:27Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2024-06-21T08:04:27Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-06-21T08:04:28Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-06-21T08:04:28Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2024-06-21T08:04:24Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://9c5fdf5a195cda078cca2e1708e11911a4a51b8ad1d9f3d0be2c3347b8ea7827
    image: docker.io/rancher/mirrored-coredns-coredns:1.10.1
    imageID: docker.io/rancher/mirrored-coredns-coredns@sha256:a11fafae1f8037cbbd66c5afa40ba2423936b72b4fd50a7034a7e8b955163594
    lastState: {}
    name: coredns
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2024-06-21T08:04:27Z"
  hostIP: 172.16.150.111
  hostIPs:
  - ip: 172.16.150.111
  initContainerStatuses:
  - containerID: containerd://0467fff8b3547d4e928dee54161232b512cfb8112dd2d377e8c12198443c5fb4
    image: docker.elmec.com/proxy-cache/kubeovn/vpc-nat-gateway:v1.12.17
    imageID: docker.elmec.com/proxy-cache/kubeovn/vpc-nat-gateway@sha256:3065824836ae3d7d9e16f2265a23dfd983b9052b51acfb65e0a1b02c4a1e20a0
    lastState: {}
    name: init-route
    ready: true
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: containerd://0467fff8b3547d4e928dee54161232b512cfb8112dd2d377e8c12198443c5fb4
        exitCode: 0
        finishedAt: "2024-06-21T08:04:26Z"
        reason: Completed
        startedAt: "2024-06-21T08:04:26Z"
  phase: Running
  podIP: 10.100.0.7
  podIPs:
  - ip: 10.100.0.7
  qosClass: Burstable
  startTime: "2024-06-21T08:04:24Z"

bobz965 commented 1 week ago

In your info:

61dd9fec-032a-4a32-a7b6-d3959c688652    vpc-alpha-tcp-lo    tcp        10.96.0.10:53           10.100.0.2:53
                                                            tcp        10.96.0.10:9153         10.100.0.2:9153
2feecfab-8c8d-431c-857a-37ee4ea94085    vpc-alpha-udp-lo    udp        10.96.0.10:53           10.100.0.2:53

root@c4i-bastion:/home/ubuntu# kubectl get pod -n kube-system vpc-dns-alpha-5b5c864c98-jnp2w -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
vpc-dns-alpha-5b5c864c98-jnp2w   1/1     Running   0          6h50m   10.100.0.7   c4i-bastion   <none>           <none>
root@c4i-bastion:/home/ubuntu# kubectl exec -tn alpha dnsutils -- nslookup kubernetes.default.svc.cluster.local 10.100.0.7
Server:     10.100.0.7
Address:    10.100.0.7#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1

root@c4i-bastion:/home/ubuntu# kubectl exec -tn alpha dnsutils -- nslookup google.it 10.100.0.7
Server:     10.100.0.7
Address:    10.100.0.7#53

Name:   google.it
Address: 142.250.180.131

Is the vpc dns deployment pod IP 10.100.0.7?


In your custom VPC, shouldn't 10.96.0.10:53 -> 10.100.0.2:53 be 10.96.0.10:53 -> 10.100.0.7:53?
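
A quick way to cross-check is to compare the SwitchLBRule's backing Service endpoints with the current pod IP (a sketch, using the names shown earlier in the thread):

    kubectl -n kube-system get endpoints slr-vpc-dns-alpha -o wide
    kubectl -n kube-system get pod -l k8s-app=vpc-dns-alpha -o wide
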
CiraciNicolo commented 1 week ago

Yes, the VPC DNS pod is at 10.100.0.7. I have no idea what happened, but the LB is correct now; still, there is no DNS resolution:

61dd9fec-032a-4a32-a7b6-d3959c688652    vpc-alpha-tcp-lo    tcp        10.96.0.10:53           10.100.0.7:53
                                                            tcp        10.96.0.10:9153         10.100.0.7:9153
2feecfab-8c8d-431c-857a-37ee4ea94085    vpc-alpha-udp-lo    udp        10.96.0.10:53           10.100.0.7:53

CiraciNicolo commented 1 week ago

Do you have any further advice? Load balancing for "normal" services works: as you can see, I can spin up an nginx deployment and reach it via its Service, and the nginx Service is present in lb-list.

root@c4i-bastion:/home/ubuntu# kubectl -n alpha get svc
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.104.73.178   <none>        80/TCP    179m
root@c4i-bastion:/home/ubuntu# kubectl -n alpha get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
dnsutils                 1/1     Running   0          3h10m   10.169.0.2   c4i-bastion   <none>           <none>
nginx-7854ff8877-mwztv   1/1     Running   0          179m    10.169.0.5   c4i-bastion   <none>           <none>
curl                     1/1     Running   0          121m    10.169.0.6   c4i-bastion   <none>           <none>
root@c4i-bastion:/home/ubuntu# kubectl -n kube-system get pod vpc-dns-alpha-5f8755bf9d-cqvzk -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
vpc-dns-alpha-5f8755bf9d-cqvzk   1/1     Running   0          11m   10.169.0.7   c4i-bastion   <none>           <none>
root@c4i-bastion:/home/ubuntu# kubectl exec -itn alpha curl -- nslookup nginx.alpha.svc.cluster.local 10.169.0.7
Server:     10.169.0.7
Address:    10.169.0.7:53

Name:   nginx.alpha.svc.cluster.local
Address: 10.104.73.178

root@c4i-bastion:/home/ubuntu# kubectl exec -itn alpha curl -- curl 10.104.73.178:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@c4i-bastion:/home/ubuntu# kubectl ko nbctl lb-list
UUID                                    LB                  PROTO      VIP                     IPs
f206f992-dc25-4fad-be32-c89ff36676e9    cluster-tcp-load    tcp        10.100.69.65:6641       172.16.150.111:6641
                                                            tcp        10.101.48.228:443       172.16.150.111:4244
                                                            tcp        10.102.239.103:10665    172.16.150.111:10665
                                                            tcp        10.106.219.145:8080     10.69.0.4:8080
                                                            tcp        10.96.0.1:443           172.16.150.111:6443
                                                            tcp        10.96.166.21:80         10.69.0.14:4245
                                                            tcp        10.98.131.242:80        10.69.0.13:8081
                                                            tcp        10.99.43.117:10660      172.16.150.111:10660
                                                            tcp        10.99.7.58:6643         172.16.150.111:6643
                                                            tcp        10.99.73.136:10661      172.16.150.111:10661
                                                            tcp        10.99.94.62:6642        172.16.150.111:6642
4c311f4f-21ba-4605-a053-794f632a4b29    vpc-alpha-tcp-lo    tcp        10.104.73.178:80        10.169.0.5:80
                                                            tcp        10.96.0.10:53           10.169.0.7:53
                                                            tcp        10.96.0.10:9153         10.169.0.7:9153
da2dc500-a1b9-4f32-8de1-87cf714438b0    vpc-alpha-udp-lo    udp        10.96.0.10:53           10.169.0.7:53
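
Since the pod IP answers but the VIP does not, and the nginx VIP does work over TCP on the same vpc-alpha-tcp-lo LB, a possible next step is to test the DNS VIP over TCP and UDP separately; dig in the dnsutils image can force either transport:

    kubectl exec -tn alpha dnsutils -- dig +tcp @10.96.0.10 kubernetes.default.svc.cluster.local.
    kubectl exec -tn alpha dnsutils -- dig +notcp @10.96.0.10 kubernetes.default.svc.cluster.local.
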
bobz965 commented 5 days ago

[image]

Do these LBs work from inside the pod?

And how about these Services, from inside the pod?

[image]

zhangzujian commented 3 days ago

ipam:
  operator:
    clusterPoolIPv4PodCIDRList: 10.69.0.0/16

clusterPoolIPv4PodCIDRList should be a different CIDR; it must not overlap the pod CIDR managed by Kube-OVN.

The default cluster/pod CIDR in Kube-OVN is 10.16.0.0/16, and the default join CIDR is 100.64.0.0/16, so the clusterPoolIPv4PodCIDRList value used in Makefile#L834 is 100.65.0.0/16.

Please change this value and try again.
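
Applied to the Cilium values above, that would look like the following (100.65.0.0/16 is just the non-overlapping example from the Makefile; any range outside the Kube-OVN pod, join, and service CIDRs should work):

    ipam:
      mode: cluster-pool
      operator:
        clusterPoolIPv4PodCIDRList: 100.65.0.0/16   # must not overlap POD_CIDR 10.69.0.0/16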

zhangzujian commented 3 days ago

FYI, I cannot reproduce this problem in master/v1.12.18. The DNS works well:

$ kubectl get subnet
NAME             PROVIDER              VPC           PROTOCOL   CIDR               PRIVATE   NAT     DEFAULT   GATEWAYTYPE   V4USED   V4AVAILABLE   V6USED   V6AVAILABLE   EXCLUDEIPS          U2OINTERCONNECTIONIP
join             ovn                   ovn-cluster   IPv4       100.64.0.0/16      false     false   false     distributed   1        65532         0        0             ["100.64.0.1"]
ovn-default      ovn                   ovn-cluster   IPv4       10.16.0.0/16       false     true    true      distributed   3        65530         0        0             ["10.16.0.1"]
s1               ovn                   vpc1          IPv4       99.99.99.0/24      false     false   false     distributed   2        251           0        0             ["99.99.99.1"]
vpc-dns-subnet   ovn-nad.default.ovn   ovn-cluster   IPv4       100.100.100.0/24   false     false   false     distributed   0        253           0        0             ["100.100.100.1"]
$ kubectl -n kube-system get pod vpc-dns-dns1-759b54bc4f-s9l6t -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE                     NOMINATED NODE   READINESS GATES
vpc-dns-dns1-759b54bc4f-s9l6t   1/1     Running   0          14m   99.99.99.4   kube-ovn-control-plane   <none>           <none>
$ kubectl get po -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP           NODE                     NOMINATED NODE   READINESS GATES
kubeovn-ksc5v   1/1     Running   0          15m   99.99.99.2   kube-ovn-control-plane   <none>           <none>
$ kubectl exec kubeovn-ksc5v -- nslookup kubernetes.default.svc.cluster.local. 99.99.99.4
Server:         99.99.99.4
Address:        99.99.99.4#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1
$ kubectl exec kubeovn-ksc5v -- nslookup kubernetes.default.svc.cluster.local. 10.96.0.3
Server:         10.96.0.3
Address:        10.96.0.3#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1