kubesphere / ks-installer

Install KubeSphere on existing Kubernetes cluster
https://kubesphere.io
Apache License 2.0

Minimal install: openldap pod always restarts #501

Open SemiLavender opened 4 years ago

SemiLavender commented 4 years ago

Installer:

```
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/kubesphere-minimal.yaml
```

Output of `kubectl describe pod -n kubesphere-system openldap-0`:

```
Name:               openldap-0
Namespace:          kubesphere-system
Priority:           0
PriorityClassName:  <none>
Node:               k8s-master/192.168.1.3
Start Time:         Thu, 21 Nov 2019 11:29:25 +0000
Labels:             app.kubernetes.io/instance=ks-openldap
                    app.kubernetes.io/name=openldap-ha
                    controller-revision-hash=openldap-5b89576789
                    statefulset.kubernetes.io/pod-name=openldap-0
Annotations:        <none>
Status:             Running
IP:                 10.233.111.170
Controlled By:      StatefulSet/openldap
Containers:
  openldap-ha:
    Container ID:  docker://ba116b457d3416df08f41984392ba6e36598f119551d0f297e3f0003de855537
    Image:         osixia/openldap:1.3.0
    Image ID:      docker-pullable://osixia/openldap@sha256:cb3f5fea3c3203acddc3e6b8a70642a0f994d89be3ec5f0e50621b2a9ea17a83
    Port:          389/TCP
    Host Port:     0/TCP
    Args:
      --copy-service
      --loglevel=warning
    State:          Running
      Started:      Fri, 22 Nov 2019 02:30:17 +0000
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 22 Nov 2019 02:28:39 +0000
      Finished:     Fri, 22 Nov 2019 02:30:16 +0000
    Ready:          False
    Restart Count:  216
    Liveness:       tcp-socket :389 delay=30s timeout=1s period=15s #success=1 #failure=3
    Readiness:      tcp-socket :389 delay=30s timeout=1s period=15s #success=1 #failure=3
    Environment:
      LDAP_ORGANISATION:               kubesphere
      LDAP_DOMAIN:                     kubesphere.io
      LDAP_CONFIG_PASSWORD:            admin
      LDAP_ADMIN_PASSWORD:             admin
      LDAP_REPLICATION:                false
      LDAP_TLS:                        false
      LDAP_REMOVE_CONFIG_AFTER_SETUP:  true
      MY_POD_NAME:                     openldap-0 (v1:metadata.name)
      HOSTNAME:                        $(MY_POD_NAME).openldap
    Mounts:
      /etc/ldap/slapd.d from openldap-pvc (rw,path="ldap-config")
      /var/lib/ldap from openldap-pvc (rw,path="ldap-data")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wxl4z (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  openldap-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  openldap-pvc-openldap-0
    ReadOnly:   false
  default-token-wxl4z:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wxl4z
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From                 Message
  ----     ------     ----                    ----                 -------
  Normal   Pulled     35m (x208 over 15h)     kubelet, k8s-master  Container image "osixia/openldap:1.3.0" already present on machine
  Warning  BackOff    5m44s (x2600 over 14h)  kubelet, k8s-master  Back-off restarting failed container
  Warning  Unhealthy  40s (x1068 over 15h)    kubelet, k8s-master  Readiness probe failed: dial tcp 10.233.111.170:389: connect: connection refused
```
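For what it's worth, the timestamps above suggest the liveness probe is what kills the container: slapd apparently never opens port 389, so each attempt is restarted after roughly a minute and a half. A quick sanity check of that arithmetic in plain Python, with the values copied from the `describe` output (the grace-period figure is the Kubernetes default of 30s, an assumption, not something visible in the output):

```python
from datetime import datetime

# Timestamps copied from the `kubectl describe` output above.
fmt = "%a, %d %b %Y %H:%M:%S %z"
started = datetime.strptime("Fri, 22 Nov 2019 02:28:39 +0000", fmt)
finished = datetime.strptime("Fri, 22 Nov 2019 02:30:16 +0000", fmt)
uptime = (finished - started).total_seconds()
print(uptime)  # 97.0 -- seconds each attempt survives

# Liveness probe: first check at delay=30s, then every period=15s.
# If every check fails, the third consecutive failure (#failure=3)
# lands at 30 + 2*15 = 60s, which triggers the restart.
kill_after = 30 + (3 - 1) * 15
print(kill_after)  # 60

# Add the (assumed) default 30s termination grace period and you get
# ~90s, close to the observed 97s per cycle.
print(kill_after + 30)  # 90
```

So the restart loop itself is just the probe doing its job; the real question is why slapd never starts listening (the `--previous` container logs would show that).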

SemiLavender commented 4 years ago

The KubeSphere version is 2.1.

sbhnet commented 4 years ago

KubeSphere version is 2.1; openldap & redis always fail.

```
/usr/local/bin/helm upgrade --install ks-openldap /etc/kubesphere/openldap-ha -f /etc/kubesphere/custom-values-openldap.yaml --set fullnameOverride=openldap --namespace kubesphere-system
/usr/local/bin/kubectl -n kubesphere-system apply -f /etc/kubesphere/common/redis.yaml
```

SemiLavender commented 4 years ago

@sbhnet I cannot find the /etc/kubesphere directory. I found the openldap install command in the ansible playbook:

```
{{ bin_dir }}/helm upgrade --install ks-openldap {{ kubesphere_dir }}/openldap-ha -f {{ kubesphere_dir }}/custom-values-openldap.yaml --set fullnameOverride=openldap --namespace kubesphere-system
```

but unfortunately I do not know the value of the `kubesphere_dir` fact.
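If I understand ks-installer's setup correctly, the ansible playbooks run inside the ks-installer pod, so `kubesphere_dir` would resolve to a path inside that container rather than on the host, which would explain why /etc/kubesphere is not visible on the node. A sketch of how to check, assuming the installer pod carries an `app=ks-install` label and that `/etc/kubesphere` is the path in question (both are assumptions on my part, not verified):

```shell
# Find the installer pod (label is an assumption; adjust if your
# deployment uses a different one).
POD=$(kubectl -n kubesphere-system get pod -l app=ks-install \
      -o jsonpath='{.items[0].metadata.name}')

# List the templated directory inside that container, not on the host.
kubectl -n kubesphere-system exec "$POD" -- ls /etc/kubesphere
```

If the directory shows up there, the custom-values-openldap.yaml that sbhnet mentioned can be read or edited via `kubectl exec`/`kubectl cp` against that pod.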