Closed. BurlyLuo closed this issue 2 years ago.
Hi @BurlyLuo! Thanks for the report.
Could you share a few more things with us to help diagnose it?

1. The output of `/opt/cni/bin/multus --version`
2. The NetworkAttachmentDefinition for the macvlan interface
3. Whether the interface named in the `master` field still exists on the host (e.g. visible in `ip a` on the host)?

1.

```
# ./multus --version
multus-cni version:v3.6, commit:c85b79f5ff5bcacaa45e2135d29e9afb6b84ed9b, date:2020-07-22T12:22:41+0000
```

2. Like below:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-c
  namespace: xx2
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "master": "eth4.2360",
    "mode": "bridge",
    "ipam": {
      "datastore": "kubernetes",
      "range": "172.12.6.20-172.12.6.29/24",
      "type": "whereabouts",
      "log_file": "/tmp/whereabouts-macvlan-netconf-test-sm.log",
      "log_level": "debug",
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig"
      }
    }
  }'
```
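As an aside, the `range` field above uses the whereabouts `start-end/prefix` syntax, so this NAD hands out only ten addresses (.20 through .29) even though the prefix is a /24. A minimal sketch of that interpretation (the `expand_range` helper is hypothetical, purely for illustration; it is not part of whereabouts):

```python
import ipaddress

def expand_range(spec: str) -> list:
    """Expand a whereabouts-style 'start-end/prefix' range string into
    the individual addresses in the pool. Illustrative helper only."""
    bounds, _prefix = spec.split("/")   # the prefix applies to the address, not the pool size
    start_s, end_s = bounds.split("-")
    start = int(ipaddress.ip_address(start_s))
    end = int(ipaddress.ip_address(end_s))
    return [str(ipaddress.ip_address(i)) for i in range(start, end + 1)]

pool = expand_range("172.12.6.20-172.12.6.29/24")
print(len(pool))          # 10
print(pool[0], pool[-1])  # 172.12.6.20 172.12.6.29
```

With three StatefulSet replicas attaching two networks each, a ten-address pool like this can matter if stale allocations ever accumulate.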
3. Yes:

```
eth4.2360: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether f8:f2:1e:b9:dd:50  txqueuelen 1000  (Ethernet)
        RX packets 110854  bytes 5551684 (5.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 343  bytes 26894 (26.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
From the worker node we can still see the parent interface for the macvlan. But if I re-deploy the pod via the Helm chart, it goes back to normal.
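One way to script the "does the `master` interface still exist on the host" check is to parse `ip -o link` output. A hedged sketch, where the sample string is fabricated to resemble that output (it is not a real capture from this cluster):

```python
def iface_present(ip_link_output: str, name: str) -> bool:
    """Return True if `name` appears as an interface in `ip -o link` output.
    VLAN and macvlan links print as 'name@parent', so split on '@'."""
    for line in ip_link_output.splitlines():
        parts = line.split(": ")
        if len(parts) >= 2 and parts[1].split("@")[0] == name:
            return True
    return False

# Fabricated sample resembling `ip -o link` output:
sample = (
    "2: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...\n"
    "5: eth4.2360@eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ..."
)
print(iface_present(sample, "eth4.2360"))  # True
```

If the VLAN parent ever disappears, the kernel removes the macvlan children with it, so confirming the parent survives is a useful first check.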
Hi @BurlyLuo. Would you mind providing the YAML you used to create the pod as well?
```yaml
# Source: hlb/templates/pods.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  #name: sm-instance
  name: t-sm
  namespace: ihlb-db
spec:
  podManagementPolicy: OrderedReady
  replicas: !!int 3
  # revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sm-pod
  serviceName: sm-service
  template:
    metadata:
      # end-range network
      annotations:
        k8s.v1.cni.cncf.io/networks: '[{"interface":"eth1","name":"macvlan-netconf-hlb"},{"interface":"eth2","name":"macvlan-corenet-hlb"}]' # end-if network
      labels:
        app: sm-pod
        mavrole: mav-sm
        cnfc_uuid: cnfc-sm
    spec:
      volumes:
        - name: sm-configmap
          configMap:
            name: sm-configmap
        - name: "storage"
        - name: "etcd"
        - downwardAPI: {"items":[{"fieldRef":{"fieldPath":"metadata.labels"},"path":"labels"},{"fieldRef":{"fieldPath":"metadata.uid"},"path":"poduid"}]}
          name: "podinfo" # end-range volumes # end-range containerKeys
      containers:
        - name: sm-container
          command: ["/bin/sh"]
          args: ["-c", '/usr/IMS/current/tools/cfginit']
          env:
            - name: ETCD_DEBUG_LEVEL
              value: info # end-range env
            - name: LOG_STDOUT
              value: enable # end-range env
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: "metadata.name" # end-range env
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: "metadata.namespace" # end-range env
            - name: MY_POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: "metadata.uid" # end-range env
            - name: MY_SERVICE_NAME
              value: sm-service # end-range env # end-range env
          image: hlb-r_2_0_11_1-v1:r_2_0_11_1-v1
          imagePullPolicy: IfNotPresent
          securityContext: {"privileged":true} # end-if securityContext
          stdin: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          tty: true
          volumeMounts:
            - name: sm-configmap
              mountPath: /data/configmap
              readOnly: false
            - {"mountPath":"/data/storage","name":"storage","readOnly":false}
            - {"mountPath":"/var/lib/etcd","name":"etcd","readOnly":false}
            - {"mountPath":"/etc/podinfo","name":"podinfo"} # end-range volumeMounts
          resources: {"limits":{"memory":"16Gi"},"requests":{"memory":"2Gi"}}
  volumeClaimTemplates:
    - metadata:
        name: storage
      spec: {"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"2Gi"}},"storageClassName":"mavenir-sc"}
    - metadata:
        name: etcd
      spec: {"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"2Gi"}},"storageClassName":"mavenir-sc"} # end-if storage.class # end-if storage.class
---
```
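For reference, the `k8s.v1.cni.cncf.io/networks` annotation in the manifest above is a JSON list, and the interface names Multus is asked to create can be read straight out of it. A small sketch (the annotation string is copied verbatim from the manifest):

```python
import json

annotation = ('[{"interface":"eth1","name":"macvlan-netconf-hlb"},'
              '{"interface":"eth2","name":"macvlan-corenet-hlb"}]')

# Each list entry names a NetworkAttachmentDefinition and the pod-side
# interface it should appear as.
expected = [net["interface"] for net in json.loads(annotation)]
print(expected)  # ['eth1', 'eth2']
```

So after a successful attach, the pod should show `eth1` and `eth2` in addition to its default `eth0`.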
@nicklesimba
Still haven't been able to reproduce yet.
Would you mind providing:

- Kubernetes version: v1.19.6
- Runtime: `docker info` shows `Runtimes: runc`
- env: bare metal
Would you please share the troubleshooting logic at the code level, plus the logs? Then I can focus on troubleshooting the issue. Maybe the pause container's logs can help with this issue? I haven't worked out the logic for analyzing the issue so far.
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 7 days.
Hello Team: when we use Multus CNI to add a NAD (macvlan) to a pod, once the pod comes up we can see the interfaces such as eth1, eth2, etc. But after a few days, we can no longer find those interfaces inside the pod when using `ip a` to list them. If we describe the pod, we still see the annotation with eth1 and eth2 in the pod's template, but the interfaces no longer appear in the pod.
View from the pod:

View from the template:

The eth1 interface is missing.
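To narrow down reports like this, it helps to diff the interfaces the annotation promises against the ones actually present in the pod's network namespace. A sketch with made-up sample data standing in for the pod's annotation and its `ip a` interface list (the `missing_interfaces` helper is hypothetical):

```python
import json

def missing_interfaces(networks_annotation: str, pod_ifaces: set) -> set:
    """Interfaces requested via the Multus networks annotation
    but absent from the pod's network namespace."""
    expected = {net["interface"] for net in json.loads(networks_annotation)}
    return expected - pod_ifaces

annotation = ('[{"interface":"eth1","name":"macvlan-netconf-hlb"},'
              '{"interface":"eth2","name":"macvlan-corenet-hlb"}]')
# Sample interface set as `ip a` might show it after the failure:
present = {"lo", "eth0", "eth2"}
print(missing_interfaces(annotation, present))  # {'eth1'}
```

A non-empty result matches the symptom described here: the annotation still promises eth1, but only eth2 survived in the pod.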