liwenhe1993 / charts

ClickHouse is a free analytic DBMS for big data https://clickhouse.yandex
Apache License 2.0

Warning FailedScheduling 56s (x7 over 9m45s) default-scheduler 0/6 nodes are available: 6 node(s) didn't match node selector. #1

Open · szbjb opened this issue 5 years ago

szbjb commented 5 years ago
[root@123-40 clickhouse]# kubectl  get pods| grep click
clickhouse-0                                                   0/1     Pending   0          9m24s
clickhouse-1                                                   0/1     Pending   0          9m24s
clickhouse-2                                                   0/1     Pending   0          9m24s
clickhouse-replica-0                                           0/1     Pending   0          9m24s
clickhouse-replica-1                                           0/1     Pending   0          9m24s
clickhouse-replica-2                                           0/1     Pending   0          9m24s
clickhouse-tabix-74c69f9c5f-8j2g5                              0/1     Pending   0          9m24s
[root@123-40 clickhouse]# kubectl  get pvc| grep click
clickhouse-data-clickhouse-0                   Bound    pvc-94731be4-e98b-11e9-a081-000c2964f032   500Gi      RWO            gluster-heketi   40m
clickhouse-data-clickhouse-1                   Bound    pvc-94743519-e98b-11e9-a081-000c2964f032   500Gi      RWO            gluster-heketi   40m
clickhouse-data-clickhouse-2                   Bound    pvc-94854cae-e98b-11e9-a081-000c2964f032   500Gi      RWO            gluster-heketi   40m
clickhouse-logs-clickhouse-0                   Bound    pvc-94736404-e98b-11e9-a081-000c2964f032   50Gi       RWO            gluster-heketi   40m
clickhouse-logs-clickhouse-1                   Bound    pvc-947496dd-e98b-11e9-a081-000c2964f032   50Gi       RWO            gluster-heketi   40m
clickhouse-logs-clickhouse-2                   Bound    pvc-949490f8-e98b-11e9-a081-000c2964f032   50Gi       RWO            gluster-heketi   40m
clickhouse-replica-data-clickhouse-replica-0   Bound    pvc-946f1d09-e98b-11e9-a081-000c2964f032   500Gi      RWO            gluster-heketi   40m
clickhouse-replica-data-clickhouse-replica-1   Bound    pvc-9470116d-e98b-11e9-a081-000c2964f032   500Gi      RWO            gluster-heketi   40m
clickhouse-replica-data-clickhouse-replica-2   Bound    pvc-94710fb8-e98b-11e9-a081-000c2964f032   500Gi      RWO            gluster-heketi   40m
clickhouse-replica-logs-clickhouse-replica-0   Bound    pvc-946f6d4b-e98b-11e9-a081-000c2964f032   50Gi       RWO            gluster-heketi   40m
clickhouse-replica-logs-clickhouse-replica-1   Bound    pvc-947066df-e98b-11e9-a081-000c2964f032   50Gi       RWO            gluster-heketi   40m
clickhouse-replica-logs-clickhouse-replica-2   Bound    pvc-947161ef-e98b-11e9-a081-000c2964f032   50Gi       RWO            gluster-heketi   40m
[root@123-40 clickhouse]# kubectl  describe  pods clickhouse-0
Name:               clickhouse-0
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app.kubernetes.io/instance=clickhouse
                    app.kubernetes.io/name=clickhouse
                    controller-revision-hash=clickhouse-85cc8dd68
                    statefulset.kubernetes.io/pod-name=clickhouse-0
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      StatefulSet/clickhouse
Init Containers:
  init:
    Image:      busybox:1.31.0
    Port:       <none>
    Host Port:  <none>
    Args:
      /bin/sh
      -c
      mkdir -p /etc/clickhouse-server/metrica.d

    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cc9t9 (ro)
Containers:
  clickhouse:
    Image:        yandex/clickhouse-server:19.14
    Ports:        8123/TCP, 9000/TCP, 9009/TCP
    Host Ports:   0/TCP, 0/TCP, 0/TCP
    Liveness:     tcp-socket :9000 delay=30s timeout=5s period=30s #success=1 #failure=3
    Readiness:    tcp-socket :9000 delay=30s timeout=5s period=30s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/clickhouse-server/config.d from clickhouse-config (rw)
      /etc/clickhouse-server/metrica.d from clickhouse-metrica (rw)
      /etc/clickhouse-server/users.d from clickhouse-users (rw)
      /var/lib/clickhouse from clickhouse-data (rw)
      /var/log/clickhouse-server from clickhouse-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cc9t9 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  clickhouse-logs:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  clickhouse-logs-clickhouse-0
    ReadOnly:   false
  clickhouse-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  clickhouse-data-clickhouse-0
    ReadOnly:   false
  clickhouse-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      clickhouse-config
    Optional:  false
  clickhouse-metrica:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      clickhouse-metrica
    Optional:  false
  clickhouse-users:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      clickhouse-users
    Optional:  false
  default-token-cc9t9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cc9t9
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  56s (x7 over 9m45s)  default-scheduler  0/6 nodes are available: 6 node(s) didn't match node selector.
[root@123-40 clickhouse]#
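
Note: Node-Selectors shows <none> above, yet scheduling fails on a selector. The constraint comes from the pod spec's affinity stanza, which kubectl describe does not print; one way to inspect it directly (assuming the default namespace):

    # dump the pod's affinity rules as JSON
    kubectl get pod clickhouse-0 -o jsonpath='{.spec.affinity}'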
szbjb commented 5 years ago

They have been stuck in Pending the whole time.

liwenhe1993 commented 5 years ago

Based on the pod description you posted, the error is:

Warning FailedScheduling 56s (x7 over 9m45s) default-scheduler 0/6 nodes are available: 6 node(s) didn't match node selector.

That is, the scheduler found no Kubernetes node carrying the expected label, so the pods cannot be scheduled.
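
You can confirm this by checking whether any node carries the label the chart expects. A quick check, assuming kubectl points at this cluster (-L adds a column showing each node's value for that label):

    # show all nodes plus their application/clickhouse label, if any
    kubectl get nodes -L application/clickhouse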

There are currently two ways to solve your problem:

  1. Comment out the entire affinity block in values.yaml (I forgot to comment it out before committing):

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: "application/clickhouse"
              operator: In
              values:
              - "true"

  2. Add the label to every node in your Kubernetes cluster (verification commands follow below):

    kubectl label node <node_name> application/clickhouse=true
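
Either way, you can verify the result. A minimal check if you pick option 2 (standard kubectl invocations, assuming the pods stay in the default namespace):

    # list only the nodes that now satisfy the affinity rule
    kubectl get nodes -l application/clickhouse=true
    # watch the Pending pods get scheduled once a node matches
    kubectl get pods -w | grep click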
szbjb commented 5 years ago

OK, thanks. I'll give it another try.
