contiv/install (Contiv Installer)
https://contiv.github.io

contiv-netmaster always Pending #363

Open · amwork2010 opened 5 years ago

amwork2010 commented 5 years ago

CentOS Linux release 7.6.1810, Docker 18.09, kubeadm 1.13

```
kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.10.0.0/16 --apiserver-advertise-address=192.168.55.31
wget https://github.com/contiv/install/releases/download/1.2.0/contiv-1.2.0.tgz
tar zxvf contiv-1.2.0.tgz
cd contiv-1.2.0/
./install/k8s/install.sh -n 192.168.55.31
......
contiv netmaster is not ready !!
```
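(For what it's worth, when install.sh gives up with that message, the netmaster pod it was waiting for can still be watched directly. This is only a sketch; the `k8s-app=contiv-netmaster` label is taken from the pod description further down, not from installer documentation.)

```sh
# Watch the contiv-netmaster pod by the label shown in the describe output below.
kubectl get pods -n kube-system -l k8s-app=contiv-netmaster -w
```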

kubectl get po --all-namespaces

```
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   contiv-netmaster-tqjnh                0/3     Pending   0          43s
kube-system   coredns-86c58d9df4-hzt6d              0/1     Pending   0          71m
kube-system   coredns-86c58d9df4-zwn9d              0/1     Pending   0          71m
kube-system   etcd-kubecontiv1                      1/1     Running   0          70m
kube-system   kube-apiserver-kubecontiv1            1/1     Running   0          70m
kube-system   kube-controller-manager-kubecontiv1   1/1     Running   0          70m
kube-system   kube-proxy-f79dv                      1/1     Running   0          71m
kube-system   kube-scheduler-kubecontiv1            1/1     Running   0          70m
```

kubectl describe pod contiv-netmaster-tqjnh -n kube-system

```
Name:               contiv-netmaster-tqjnh
Namespace:          kube-system
Priority:           0
PriorityClassName:
Node:
Labels:             k8s-app=contiv-netmaster
Annotations:        prometheus.io/port: 9005
                    prometheus.io/scrape: true
                    scheduler.alpha.kubernetes.io/critical-pod:
Status:             Pending
IP:
Controlled By:      ReplicaSet/contiv-netmaster
Containers:
  netmaster-exporter:
    Image:      contiv/stats
    Port:
    Host Port:
    Environment:
      CONTIV_ETCD:    <set to the key 'contiv_etcd' of config map 'contiv-config'>  Optional: false
      EXPORTER_MODE:  netmaster
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from contiv-netmaster-token-l5dxj (ro)
  contiv-netmaster:
    Image:      contiv/netplugin:1.2.0
    Port:
    Host Port:
    Environment:
      CONTIV_ROLE:                      netmaster
      CONTIV_NETMASTER_MODE:            <set to the key 'contiv_mode' of config map 'contiv-config'>        Optional: false
      CONTIV_NETMASTER_ETCD_ENDPOINTS:  <set to the key 'contiv_etcd' of config map 'contiv-config'>        Optional: false
      CONTIV_K8S_CONFIG:                <set to the key 'contiv_k8s_config' of config map 'contiv-config'>  Optional: false
      CONTIV_NETMASTER_FORWARD_MODE:    <set to the key 'contiv_fwdmode' of config map 'contiv-config'>     Optional: false
      CONTIV_NETMASTER_NET_MODE:        <set to the key 'contiv_netmode' of config map 'contiv-config'>     Optional: false
    Mounts:
      /var/contiv from var-contiv (rw)
      /var/log/contiv from var-log-contiv (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from contiv-netmaster-token-l5dxj (ro)
  contiv-api-proxy:
    Image:      contiv/auth_proxy:1.2.0
    Port:
    Host Port:
    Args:
      --tls-key-file=/var/contiv/auth_proxy_key.pem
      --tls-certificate=/var/contiv/auth_proxy_cert.pem
      --data-store-address=$(STORE_URL)
      --data-store-driver=$(STORE_DRIVER)
      --netmaster-address=localhost:9999
    Environment:
      NO_NETMASTER_STARTUP_CHECK:  0
      STORE_URL:                   <set to the key 'contiv_etcd' of config map 'contiv-config'>  Optional: false
      STORE_DRIVER:                etcd
    Mounts:
      /var/contiv from var-contiv (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from contiv-netmaster-token-l5dxj (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  var-contiv:
    Type:          HostPath (bare host directory volume)
    Path:          /var/contiv
    HostPathType:
  var-log-contiv:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/contiv
    HostPathType:
  contiv-netmaster-token-l5dxj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  contiv-netmaster-token-l5dxj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  node-role.kubernetes.io/master=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  Warning  FailedScheduling  7s (x11 over 94s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
```
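For reference, the scheduler message above means the single node carries a taint that none of the pod's tolerations match. The pod already tolerates node-role.kubernetes.io/master:NoSchedule, so one plausible candidate (an assumption until verified on the node) is the node.kubernetes.io/not-ready:NoSchedule taint that Kubernetes keeps on a node while it has no working CNI. A minimal way to check which taints are actually present, using the node name kubecontiv1 visible in the pod list above:

```sh
# List every node together with the taints currently set on it.
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

# Or inspect the single control-plane node directly
# (kubecontiv1 is the node name from the pod list above).
kubectl describe node kubecontiv1 | grep -A 3 Taints
```

If a not-ready NoSchedule taint does show up there, adding a matching toleration to the contiv-netmaster manifest shipped in the contiv-1.2.0 tarball would be the direction to investigate; the exact manifest path is not shown above, so that part is only a guess.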

How can I solve this problem? Thanks!