k8snetworkplumbingwg / multus-cni

A CNI meta-plugin for multi-homed pods in Kubernetes
Apache License 2.0

Communication between pods running on multiple nodes #1211

Closed: nileshkumar-001 closed this issue 3 months ago

nileshkumar-001 commented 6 months ago

Has anyone gotten communication working between pods on different nodes using the NIC added by Multus?

s1061123 commented 6 months ago

Could you please explain more? Your description is not enough to troubleshoot. It would be appreciated if you fill in the following information. Thanks.


What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

nileshkumar-001 commented 6 months ago

Environment Setup: Azure Kubernetes Service (AKS) with Azure CNI Overlay as default CNI

What happened: Deployed an AKS cluster with Azure CNI Overlay, then deployed the Multus thick plugin from https://github.com/k8snetworkplumbingwg/multus-cni/blob/a373a2286d3f74d2b0ba05f2592c2820f0087053/deployments/multus-daemonset-thick.yml, then deployed Whereabouts for IPAM:

https://github.com/k8snetworkplumbingwg/whereabouts/blob/061b1aca2c1a6789f4a8d7d6450496cb44a22acf/doc/crds/daemonset-install.yaml
https://github.com/k8snetworkplumbingwg/whereabouts/blob/3b01e1992f555cdb71f976dcabad349aedc21619/doc/crds/whereabouts.cni.cncf.io_ippools.yaml
https://github.com/k8snetworkplumbingwg/whereabouts/blob/3b01e1992f555cdb71f976dcabad349aedc21619/doc/crds/whereabouts.cni.cncf.io_overlappingrangeipreservations.yaml
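
For reference, the installation above boils down to applying those manifests in order; a minimal sketch, assuming the raw.githubusercontent.com equivalents of the pinned blob links:

# Multus thick plugin
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/a373a2286d3f74d2b0ba05f2592c2820f0087053/deployments/multus-daemonset-thick.yml
# Whereabouts daemonset and CRDs
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/whereabouts/061b1aca2c1a6789f4a8d7d6450496cb44a22acf/doc/crds/daemonset-install.yaml
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/whereabouts/3b01e1992f555cdb71f976dcabad349aedc21619/doc/crds/whereabouts.cni.cncf.io_ippools.yaml
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/whereabouts/3b01e1992f555cdb71f976dcabad349aedc21619/doc/crds/whereabouts.cni.cncf.io_overlappingrangeipreservations.yaml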

Deployed a pod with the annotation to add the second network interface. The second interface gets added and is assigned an IP, but while Pod 1 on Node 1 can communicate with Pod 2 on Node 1, it cannot communicate with Pod 3 on Node 2.
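
To illustrate the failure, a minimal check (pod names and addresses are hypothetical; Multus usually names the secondary interface net1, and the addresses are drawn from the 192.168.2.225/28 Whereabouts range used below):

kubectl exec busybox-pod-1 -- ip -4 addr show net1       # confirm the second interface and its IP
kubectl exec busybox-pod-1 -- ping -c 3 192.168.2.227    # pod on the same node: replies
kubectl exec busybox-pod-1 -- ping -c 3 192.168.2.230    # pod on the other node: times out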

What you expected to happen: I expect that pods on any node can communicate with each other over the second network interface. I suspect that routing information needs to be added, but how can this be automated?

How to reproduce it (as minimally and precisely as possible): Run the above deployment steps and apply the two YAML files below.

Anything else we need to know?:

Environment: Azure

Multus version: v4.0.2
Image path and image ID (from 'docker images'):
Kubernetes version (use kubectl version): 1.27.7
Primary CNI for Kubernetes cluster: Azure CNI with Overlay
OS (e.g. from /etc/os-release):
File of '/etc/cni/net.d/':
File of '/etc/cni/multus/net.d':
NetworkAttachment info (use kubectl get net-attach-def -o yaml):

cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: whereabouts-conf
spec:
  config: '{
      "cniVersion": "0.3.0",
      "name": "whereaboutsexample",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.2.225/28"
      }
    }'
EOF
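
To confirm the attachment was created and to inspect Whereabouts' allocations, something along these lines should work (the ippools resource comes from the CRD installed above):

kubectl get net-attach-def whereabouts-conf -o yaml
kubectl get ippools.whereabouts.cni.cncf.io -A    # per-range pools with each allocated IP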

Target pod yaml info (with annotation, use kubectl get pod -o yaml)

cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: busybox-replicaset-02
  annotations:
    k8s.v1.cni.cncf.io/networks: whereabouts-conf
spec:
  replicas: 6  # Set the desired number of replicas
  selector:
    matchLabels:
      app: busybox-02
  template:
    metadata:
      labels:
        app: busybox-02
      annotations:
        k8s.v1.cni.cncf.io/networks: whereabouts-conf
    spec:
      containers:
      - name: busybox-02
        image: busybox:latest
        command: ["/bin/sh", "-c", "while true; do echo 'Hello from BusyBox'; sleep 10; done"]
EOF
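
To verify the reproduction, check that the replicas are spread across nodes and that each got a second interface; the pod names below are placeholders:

kubectl get pods -l app=busybox-02 -o wide                   # confirm pods landed on at least two nodes
kubectl exec <pod-on-node-1> -- ip -4 addr show net1         # second interface with a Whereabouts IP
kubectl exec <pod-on-node-1> -- ping -c 3 <net1-IP-of-pod-on-node-2>   # the cross-node ping that fails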

Other log outputs (if you use multus logging)
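
If multus logging is enabled, the thick-plugin daemon logs can be pulled roughly like this (namespace and app=multus label assumed from the daemonset manifest linked above):

kubectl -n kube-system logs -l app=multus --tail=100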

github-actions[bot] commented 3 months ago

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 7 days.