csafg2 closed this issue 5 years ago.
My bad, it seems that kube-flannel is configured multiple times in that deployment file, so you need to pass the argument to all of them.
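Concretely, each of the per-arch DaemonSets (kube-flannel-ds-amd64, -arm64, -arm, -ppc64le, -s390x) runs its own flannel container, and the flag goes into that container's args. A minimal sketch, with enp0s8 standing in for your interface name and assuming the stock args from kube-flannel.yml (your image tag and the surrounding fields may differ):

      containers:
      - name: kube-flannel
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=enp0s8   # add this line to the args of every DaemonSet in the file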
I did configure it in all places, but it is still not working for me. Any advice? This is the manifest I applied:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  seLinux:
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
Does not work for me either. +1
I finally solved it! Just add - --iface={your_iface_name} under every "args" node.
Adding this arg to all DaemonSets works for me:
[root@k8s-master01 ~]# bridge fdb show dev flannel.1
a2:05:9f:ac:40:da dst 192.168.56.103 self permanent
0e:22:e2:f3:09:04 dst 192.168.56.103 self permanent
ce:cf:6f:d7:c9:4c dst 192.168.56.102 self permanent
8e:32:8d:27:ac:9a dst 192.168.56.102 self permanent
To be specific, I added multiple DaemonSets to kube-flannel.yml and used a node selector to pin each one to the specific node with the specific iface name.
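For anyone who wants to do the same, here is a rough sketch of that per-node variant, trimmed to the fields that matter here; node-1 and enp0s8 are placeholders for your hostname and interface name, and the rest of the pod spec stays as in the stock DaemonSet:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-node-1
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/hostname: node-1   # pin this copy of the DaemonSet to a single node
      containers:
      - name: kube-flannel
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=enp0s8                 # the interface name on node-1
# Duplicate the DaemonSet per node, changing the name, the hostname selector, and the --iface value.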
Manually setting --iface=enp0s8 in kube-flannel seems to be totally ignored. kubectl apply -f flannel.yaml reports everything as OK; however, looking at the logs, flannel is still using the default interface.

Expected Behavior
The enp0s8 interface should be used.

Current Behavior
Flannel always uses the default interface.

Context
I realized this issue when trying to set up a service with ExternalTrafficPolicy set to Cluster, and my pods were responding only on nodes where that specific pod is present.

Your Environment