kubesphere / ks-installer

Install KubeSphere on existing Kubernetes cluster
https://kubesphere.io
Apache License 2.0

ks-account deployment stuck in Init:0/2, cause unknown #17

Open ericswitch opened 5 years ago

ericswitch commented 5 years ago

kubectl get pod -n kubesphere-system
NAME                                     READY   STATUS     RESTARTS   AGE
ks-account-697996989f-dbn77              0/1     Init:0/2   0          3h13m
ks-apigateway-7bb9bccc6d-wmzx2           1/1     Running    0          3h29m
ks-console-f5bf76dd4-2qlj8               1/1     Running    0          3h28m
ks-console-f5bf76dd4-zc5gj               1/1     Running    0          3h28m
ks-controller-manager-69666fc668-v5pdk   1/1     Running    0          3h29m
ks-docs-77c4796dc9-6wsjj                 1/1     Running    0          4h49m
openldap-84857748b4-672xl                1/1     Running    0          4h52m
redis-78ff75bddc-rh6rb                   1/1     Running    0          4h52m

The pod ks-account-697996989f-dbn77 stays stuck in the initializing state. Looking at the events:

kubectl describe po ks-account-697996989f-dbn77 -n kubesphere-system
Name:               ks-account-697996989f-dbn77
Namespace:          kubesphere-system
Priority:           0
PriorityClassName:  <none>
Node:               10.221.8.63/10.221.8.63
Start Time:         Sat, 17 Aug 2019 16:52:13 +0800
Labels:             app=ks-account
                    pod-template-hash=697996989f
                    tier=backend
                    version=advanced-2.0.0
Annotations:        <none>
Status:             Pending
IP:                 195.168.53.232
Controlled By:      ReplicaSet/ks-account-697996989f
Init Containers:
  wait-redis:
    Container ID:  docker://b5b31e791c23906c90d57ebb5ba151f7298137898131d9bdf25563f5bd7c4db8
    Image:         busybox:1.28.4
    Image ID:      docker-pullable://10.237.79.203/test/library/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until nc -z redis.kubesphere-system.svc 6379; do echo "waiting for redis"; sleep 2; done;
    State:          Running
      Started:      Sat, 17 Aug 2019 16:52:14 +0800
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-pswpl (ro)
  wait-ldap:
    Container ID:
    Image:         busybox:1.28.4
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until nc -z openldap.kubesphere-system.svc 389; do echo "waiting for ldap"; sleep 2; done;
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-pswpl (ro)
Containers:
  ks-account:
    Container ID:
    Image:         kubesphere/ks-account:advanced-2.0.2
    Image ID:
    Port:          9090/TCP
    Host Port:     0/TCP
    Command:
      ks-iam
      --v=4
      --logtostderr=true
      --devops-database-connection=root:password@tcp(openpitrix-db.openpitrix-system.svc:3306)/devops
      --ldap-server=openldap.kubesphere-system.svc:389
      --redis-server=redis.kubesphere-system.svc:6379
      --ldap-manager-dn=cn=admin,dc=kubesphere,dc=io
      --ldap-manager-password=$(LDAP_PASSWORD)
      --ldap-user-search-base=ou=Users,dc=kubesphere,dc=io
      --ldap-group-search-base=ou=Groups,dc=kubesphere,dc=io
      --jwt-secret=$(JWT_SECRET)
      --admin-password=P@88w0rd
      --token-expire-time=0h
      --jenkins-address=http://ks-jenkins.kubesphere-devops-system.svc/
      --jenkins-password=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFkbWluQGt1YmVzcGhlcmUuaW8iLCJleHAiOjE4MTYyMzkwMjIsInVzZXJuYW1lIjoiYWRtaW4ifQ.86ofN704ZPc1o-yyXnF-up5nK1w3nHeRlGWcwNLCa-k
      --master-url=10.221.8.155:8443
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  500Mi
    Requests:
      cpu:     30m
      memory:  200Mi
    Environment:
      KUBECTL_IMAGE:  kubesphere/kubectl:advanced-1.0.0
      JWT_SECRET:     <set to the key 'jwt-secret' in secret 'ks-account-secret'>           Optional: false
      LDAP_PASSWORD:  <set to the key 'ldap-admin-password' in secret 'ks-account-secret'>  Optional: false
    Mounts:
      /etc/ks-iam from user-init (rw)
      /etc/kubernetes/pki from ca-dir (rw)
      /etc/kubesphere/rules from policy-rules (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-pswpl (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  policy-rules:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      policy-rules
    Optional:  false
  ca-dir:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubesphere-ca
    Optional:    false
  user-init:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      user-init
    Optional:  false
  kubesphere-token-pswpl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubesphere-token-pswpl
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                  Message
  ----    ------     ----   ----                  -------
  Normal  Scheduled  3h16m  default-scheduler     Successfully assigned kubesphere-system/ks-account-697996989f-dbn77 to 10.221.8.63
  Normal  Pulled     3h16m  kubelet, 10.221.8.63  Container image "busybox:1.28.4" already present on machine
  Normal  Created    3h16m  kubelet, 10.221.8.63  Created container
  Normal  Started    3h16m  kubelet, 10.221.8.63  Started container

Now the logs:

kubectl logs ks-account-697996989f-dbn77 -n kubesphere-system
Error from server (BadRequest): container "ks-account" in pod "ks-account-697996989f-dbn77" is waiting to start: PodInitializing
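
Note: the main container has no logs yet, but the init container that is currently running does. Its log can be read directly with -c; given the describe output above, wait-redis is the one looping:

kubectl logs ks-account-697996989f-dbn77 -n kubesphere-system -c wait-redis
# while blocked, this prints repeated "waiting for redis" lines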

There are also other pods in the same state:

kubectl get po -n openpitrix-system
NAME                                                      READY   STATUS     RESTARTS   AGE
openpitrix-api-gateway-deployment-587cc46874-7xrdd        0/1     Init:0/2   0          3h33m
openpitrix-app-db-ctrl-job-9g7br                          0/1     Init:0/1   0          3h33m
openpitrix-app-manager-deployment-595dcd76f-b6b2q         0/1     Init:0/2   0          4h58m
openpitrix-category-manager-deployment-7968d789d6-s9dd8   0/1     Init:0/2   0          4h58m
openpitrix-cluster-db-ctrl-job-sfk54                      0/1     Init:0/1   0          3h33m
openpitrix-cluster-manager-deployment-6dcd96b68b-jqcb4    0/1     Init:0/2   0          4h58m
openpitrix-db-deployment-6864df4f99-fg9d9                 1/1     Running    0          4h58m
openpitrix-db-init-job-dgvmp                              0/1     Init:0/1   0          3h33m
openpitrix-etcd-deployment-58845c4648-fxvh5               1/1     Running    0          4h58m
openpitrix-iam-db-ctrl-job-kbtw7                          0/1     Init:0/1   0          3h33m
openpitrix-iam-service-deployment-864df9fb6f-w75qd        0/1     Init:0/2   0          4h58m
openpitrix-job-db-ctrl-job-cf9z7                          0/1     Init:0/1   0          3h33m
openpitrix-job-manager-deployment-588858bcb9-ggcbt        0/1     Init:0/2   0          4h58m
openpitrix-minio-deployment-84d5f9c94b-7x49m              1/1     Running    0          4h58m
openpitrix-repo-db-ctrl-job-bp94x                         0/1     Init:0/1   0          3h33m
openpitrix-repo-indexer-deployment-5f4c895b54-5kk6k       0/1     Init:0/2   0          4h58m
openpitrix-repo-manager-deployment-84fd5b5fdf-w6vgp       0/1     Init:0/2   0          4h58m
openpitrix-runtime-db-ctrl-job-88gzw                      0/1     Init:0/1   0          3h33m
openpitrix-runtime-manager-deployment-5fcbb6f447-b47xq    0/1     Init:0/2   0          4h58m
openpitrix-task-db-ctrl-job-nxz4z                         0/1     Init:0/1   0          3h33m
openpitrix-task-manager-deployment-59578dc9d6-d6qg5       0/1     Init:0/2   0          4h58m

So far I haven't been able to find the cause!

pixiake commented 5 years ago
  1. Check whether redis and openldap in kubesphere-system are running.
  2. Check DNS resolution in the environment: look at the coredns logs, and check whether /etc/resolv.conf on the machines lists any unreachable DNS servers. (A command sketch follows this list.)
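
For example (the k8s-app=kube-dns label is taken from the coredns manifest quoted later in this thread):

kubectl get pod -n kubesphere-system | grep -E 'redis|openldap'   # both should be Running
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50         # recent coredns errors, if any
cat /etc/resolv.conf                                              # run on each node; look for unreachable nameservers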
ericswitch commented 5 years ago

All the redis and openldap containers in kubesphere-system are running, but the coredns log shows errors:

kubectl logs -f coredns-6444b5c495-vnl9n -n kube-system
.:53
2019-08-18T02:16:16.176Z [INFO] CoreDNS-1.3.1
2019-08-18T02:16:16.176Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-08-18T02:16:16.176Z [INFO] plugin/reload: Running configuration MD5 = 18863a4483c30117a60ae2332bab9448
2019-08-18T02:16:36.178Z [ERROR] plugin/errors: 2 8946861779878406290.7985387816779030139. HINFO: unreachable backend: read udp 195.168.169.97:40481->8.8.4.4:53: i/o timeout
2019-08-18T02:16:37.357Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. A: unreachable backend: read udp 195.168.169.97:41315->8.8.8.8:53: i/o timeout
2019-08-18T02:16:37.424Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:56029->8.8.4.4:53: i/o timeout
2019-08-18T02:16:37.442Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:46093->8.8.4.4:53: i/o timeout
2019-08-18T02:16:37.586Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:41711->8.8.4.4:53: i/o timeout
2019-08-18T02:16:37.771Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:57365->8.8.8.8:53: i/o timeout
2019-08-18T02:16:37.790Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:44742->8.8.8.8:53: i/o timeout
2019-08-18T02:16:37.801Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:59569->8.8.4.4:53: i/o timeout
2019-08-18T02:16:37.864Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:32916->8.8.8.8:53: i/o timeout
2019-08-18T02:16:38.039Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:58019->8.8.8.8:53: i/o timeout
2019-08-18T02:16:38.079Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. A: unreachable backend: read udp 195.168.169.97:44609->8.8.8.8:53: i/o timeout
2019-08-18T02:16:38.227Z [ERROR] plugin/errors: 2 redis.kubesphere-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:43157->8.8.8.8:53: i/o timeout
2019-08-18T02:16:38.241Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:54764->8.8.4.4:53: i/o timeout
2019-08-18T02:16:38.252Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. A: unreachable backend: read udp 195.168.169.97:52769->8.8.8.8:53: i/o timeout
2019-08-18T02:16:38.306Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:35486->8.8.8.8:53: i/o timeout
2019-08-18T02:16:38.346Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:34806->8.8.8.8:53: i/o timeout
2019-08-18T02:16:38.348Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:40756->8.8.4.4:53: i/o timeout
2019-08-18T02:16:38.449Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:39806->8.8.8.8:53: i/o timeout
2019-08-18T02:16:38.540Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:57560->8.8.8.8:53: i/o timeout
2019-08-18T02:16:38.655Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:40721->8.8.4.4:53: i/o timeout
2019-08-18T02:16:38.716Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:48640->8.8.8.8:53: i/o timeout
2019-08-18T02:16:39.178Z [ERROR] plugin/errors: 2 8946861779878406290.7985387816779030139. HINFO: unreachable backend: read udp 195.168.169.97:55308->8.8.8.8:53: i/o timeout
2019-08-18T02:16:39.237Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:48076->8.8.4.4:53: i/o timeout
2019-08-18T02:16:39.241Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. AAAA: unreachable backend: read udp 195.168.169.97:55854->8.8.4.4:53: i/o timeout
2019-08-18T02:16:40.657Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. A: unreachable backend: read udp 195.168.169.97:38751->8.8.4.4:53: i/o timeout
2019-08-18T02:16:40.812Z [ERROR] plugin/errors: 2 openpitrix-db.openpitrix-system.svc. A: unreachable backend: read udp 195.168.169.97:40073->8.8.4.4:53: i/o timeout

Looking for a solution!

pixiake commented 5 years ago

Can the environment reach the internet? Is there a firewall on the machines, or a security group if they are cloud hosts? Please check those settings.

ericswitch commented 5 years ago

The environment cannot reach the internet; the firewall and SELinux are both disabled.

pixiake commented 5 years ago

With no internet access, the DNS servers in /etc/resolv.conf certainly can't be reached either, which is why coredns reports timeouts. Modify the coredns configuration, or comment out the unreachable DNS servers.
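
To confirm this, compare the nameservers on the node with the upstreams coredns is timing out on (8.8.8.8 and 8.8.4.4 in the log above), and find the Corefile line that forwards to them:

cat /etc/resolv.conf                                  # on the node; presumably lists 8.8.8.8 / 8.8.4.4
kubectl -n kube-system get configmap coredns -o yaml  # the 'proxy . /etc/resolv.conf' line is the forwarder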

ericswitch commented 5 years ago

cat coredns.yaml
# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  replicas: 2
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: 10.237.79.203/kube-system/coredns
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 167.167.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

I'm not sure what to modify here. Please advise!

pixiake commented 5 years ago

You can comment out the line in the ConfigMap that contains /etc/resolv.conf, then restart coredns and try again.
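
Concretely, that is the "proxy . /etc/resolv.conf" line in the Corefile above. A sketch of the edited ConfigMap data (only that one line changes):

  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        # proxy . /etc/resolv.conf   # disabled: the upstream DNS servers are unreachable offline
        cache 30
        loop
        reload
        loadbalance
    }

Then restart the coredns pods (the reload plugin may also pick the change up on its own):

kubectl -n kube-system edit configmap coredns          # or re-apply the edited coredns.yaml
kubectl -n kube-system delete pod -l k8s-app=kube-dns  # the Deployment recreates them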

ericswitch commented 5 years ago

Thank you very much. I commented out the line containing /etc/resolv.conf in the ConfigMap, restarted coredns, and the installation succeeded. I checked all the containers and only one is in CrashLoopBackOff: openpitrix-iam-service-deployment-864df9fb6f-wfsmc. Its logs:

 kubectl logs -f openpitrix-iam-service-deployment-864df9fb6f-wfsmc -n openpitrix-system
2019-08-19 02:05:10.83535 -INFO- Release OpVersion: [v0.3.5] (grpc_server.go:79)
2019-08-19 02:05:10.83541 -INFO- Git Commit Hash: [9333eae7354fc6634785df7a2fef35e74ed8dce5] (grpc_server.go:79)
2019-08-19 02:05:10.83545 -INFO- Build Time: [2018-11-22 06:37:13] (grpc_server.go:79)
2019-08-19 02:05:10.83547 -INFO- Service [iam-service] start listen at port [9115] (grpc_server.go:81)
2019-08-19 02:05:10.84285 -INFO- Init IAM client [4NtUXTKdtK3xPoeO5TJfaImAl] done (init.go:39)
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x94ceb7]

goroutine 47 [running]:
openpitrix.io/openpitrix/vendor/github.com/sony/sonyflake.(*Sonyflake).NextID(0x0, 0x0, 0x0, 0x0)
        /go/src/openpitrix.io/openpitrix/vendor/github.com/sony/sonyflake/sonyflake.go:89 +0x37
openpitrix.io/openpitrix/pkg/util/idutil.GetIntId(0xc00041a000)
        /go/src/openpitrix.io/openpitrix/pkg/util/idutil/id.go:45 +0x2d
openpitrix.io/openpitrix/pkg/util/idutil.GetUuid(0xc5502b, 0x4, 0x0, 0x3c)
        /go/src/openpitrix.io/openpitrix/pkg/util/idutil/id.go:54 +0x29
openpitrix.io/openpitrix/pkg/models.NewUserId(0xc0004fe000, 0x0)
        /go/src/openpitrix.io/openpitrix/pkg/models/user.go:18 +0x36
openpitrix.io/openpitrix/pkg/models.NewUser(0xc0000380a7, 0x5, 0xc000036dc0, 0x3c, 0xc0000380a7, 0xc, 0xc5bab9, 0xc, 0x0, 0x0, ...)
        /go/src/openpitrix.io/openpitrix/pkg/models/user.go:39 +0x26
openpitrix.io/openpitrix/pkg/service/iam.initIAMAccount()
        /go/src/openpitrix.io/openpitrix/pkg/service/iam/init.go:61 +0x3dc
created by openpitrix.io/openpitrix/pkg/service/iam.Serve
        /go/src/openpitrix.io/openpitrix/pkg/service/iam/server.go:25 +0x5f

Any ideas?

pixiake commented 5 years ago

kubectl edit deploy -n openpitrix-system openpitrix-iam-service-deployment

Add this under env:
- name: OPENPITRIX_ID_RANDOM_SEED
  value: "yes"   # must be quoted: a bare yes parses as a YAML boolean, and env values must be strings

Give that a try.
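
For context: the panic above is a nil sonyflake generator. By default sonyflake derives its machine ID from a private IPv4 address, and the pod IPs in this cluster (195.168.x.x) are outside the private ranges, so initialization likely fails and NextID is called on nil. Setting OPENPITRIX_ID_RANDOM_SEED presumably makes OpenPitrix seed the ID generator randomly instead. In the deployment spec the addition sits roughly here (container name assumed; check your deployment):

spec:
  template:
    spec:
      containers:
      - name: openpitrix-iam-service   # assumed name; use the container already in the deployment
        env:
        - name: OPENPITRIX_ID_RANDOM_SEED
          value: "yes"                 # quoted string, not a YAML boolean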

ericswitch commented 5 years ago

That works now, thank you very much! I added it as shown above.

chenJz1012 commented 4 years ago

1. Check whether redis and openldap in kubesphere-system are running. 2. Check DNS resolution in the environment: look at the coredns logs, and check whether /etc/resolv.conf on the machines lists any unreachable DNS servers.

How should I fix redis being stuck in Pending? @pixiake

NiHe001 commented 4 years ago

I've hit a similar problem: ks-account keeps failing to initialize.

kubectl get pod -n kubesphere-system 
NAME                                     READY   STATUS     RESTARTS   AGE
ks-account-d4c5cdf9d-bsnbr               0/1     Init:0/2   0          3m49s
ks-account-d4c5cdf9d-jzwsw               0/1     Init:0/2   0          137m
ks-account-d4c5cdf9d-ws65l               0/1     Init:0/2   0          132m
ks-apigateway-65dd54f989-52crj           1/1     Running    5          160m
ks-apigateway-65dd54f989-5z94b           1/1     Running    5          160m
ks-apigateway-65dd54f989-d7t7h           1/1     Running    0          154m
ks-apiserver-6d7ddd7d-f6tjg              1/1     Running    0          160m
ks-apiserver-6d7ddd7d-gvxrx              1/1     Running    0          154m
ks-apiserver-6d7ddd7d-q989s              1/1     Running    0          160m
ks-console-6f7f75bb48-72rnb              1/1     Running    0          154m
ks-console-6f7f75bb48-csg79              1/1     Running    0          159m
ks-console-6f7f75bb48-gw7bb              1/1     Running    0          159m
ks-controller-manager-6dd9b76d75-hd8rk   1/1     Running    0          154m
ks-controller-manager-6dd9b76d75-sjhr5   1/1     Running    0          160m
ks-controller-manager-6dd9b76d75-ww5hm   1/1     Running    0          160m
ks-installer-556774c9fb-l55dt            1/1     Running    0          166m
openldap-0                               1/1     Running    0          160m
openldap-1                               1/1     Running    0          154m
redis-ha-haproxy-75776f44c4-6z5h2        1/1     Running    1          160m
redis-ha-haproxy-75776f44c4-fs2dx        1/1     Running    1          160m
redis-ha-haproxy-75776f44c4-mx5cd        1/1     Running    0          154m
redis-ha-server-0                        2/2     Running    0          154m
redis-ha-server-1                        2/2     Running    0          131m
redis-ha-server-2                        2/2     Running    0          158m

Checking the details:

kubectl describe pod ks-account-d4c5cdf9d-bsnbr -n kubesphere-system
Name:           ks-account-d4c5cdf9d-bsnbr
Namespace:      kubesphere-system
Priority:       0
Node:           t-jn-kbs-02/192.169.1.56
Start Time:     Mon, 13 Jan 2020 13:35:14 +0800
Labels:         app=ks-account
                pod-template-hash=d4c5cdf9d
                tier=backend
                version=v2.1.0
Annotations:    <none>
Status:         Pending
IP:             10.244.1.11
Controlled By:  ReplicaSet/ks-account-d4c5cdf9d
Init Containers:
  wait-redis:
    Container ID:  docker://40ce8c629a0ec70461991aba53e032bc685dd88236cc88f5e04425c9b92fd1c7
    Image:         busybox:1.28.4
    Image ID:      docker-pullable://busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until nc -z redis.kubesphere-system.svc 6379; do echo "waiting for redis"; sleep 2; done;
    State:          Running
      Started:      Mon, 13 Jan 2020 13:35:15 +0800
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-5rgrz (ro)
  wait-ldap:
    Container ID:  
    Image:         busybox:1.28.4
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until nc -z openldap.kubesphere-system.svc 389; do echo "waiting for ldap"; sleep 2; done;
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-5rgrz (ro)
Containers:
  ks-account:
    Container ID:  
    Image:         kubesphere/ks-account:v2.1.0
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Command:
      ks-iam
      --logtostderr=true
      --jwt-secret=$(JWT_SECRET)
      --admin-password=$(ADMIN_PASSWORD)
      --enable-multi-login=False
      --token-idle-timeout=40m
      --redis-url=redis://redis.kubesphere-system.svc:6379
      --generate-kubeconfig=False
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  500Mi
    Requests:
      cpu:     20m
      memory:  100Mi
    Environment:
      KUBECTL_IMAGE:   kubesphere/kubectl:v1.0.0
      JWT_SECRET:      <set to the key 'jwt-secret' in secret 'ks-account-secret'>      Optional: false
      ADMIN_PASSWORD:  <set to the key 'admin-password' in secret 'ks-account-secret'>  Optional: false
    Mounts:
      /etc/ks-iam from user-init (rw)
      /etc/kubesphere from kubesphere-config (rw)
      /etc/kubesphere/rules from policy-rules (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-5rgrz (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  policy-rules:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      policy-rules
    Optional:  false
  user-init:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      user-init
    Optional:  false
  kubesphere-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kubesphere-config
    Optional:  false
  kubesphere-token-5rgrz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubesphere-token-5rgrz
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 60s
                 node.kubernetes.io/unreachable:NoExecute for 60s
Events:
  Type    Reason     Age    From                  Message
  ----    ------     ----   ----                  -------
  Normal  Scheduled  4m46s  default-scheduler     Successfully assigned kubesphere-system/ks-account-d4c5cdf9d-bsnbr to t-jn-kbs-02
  Normal  Pulled     4m45s  kubelet, t-jn-kbs-02  Container image "busybox:1.28.4" already present on machine
  Normal  Created    4m45s  kubelet, t-jn-kbs-02  Created container wait-redis
  Normal  Started    4m45s  kubelet, t-jn-kbs-02  Started container wait-redis

@pixiake Hi, could you help me take a look? My ks-account is stuck right after "Started container wait-redis", yet the redis pods are all healthy. What could be the cause?

wansir commented 4 years ago

@1753939775 Take a look at this: https://github.com/kubesphere/kubesphere/issues/1555
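
A quick way to see exactly what the wait-redis init container sees is to run its probe from a one-off pod (same busybox image and service name as above). If the nslookup fails, the problem is cluster DNS rather than redis itself:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28.4 -- nslookup redis.kubesphere-system.svc
kubectl run nc-test --rm -it --restart=Never --image=busybox:1.28.4 -- nc -z redis.kubesphere-system.svc 6379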