kubesphere / kubesphere

The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management ⎈ 🖥 ☁️
https://kubesphere.io

Multi-node installation reports: FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (1 retries left). #306

Closed drings-liu closed 5 years ago

drings-liu commented 5 years ago

When installing Advanced Edition v1.0.1 in multi-node mode, the installer reports:

FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (1 retries left).

With 1 master and 2 nodes, this happened on several installation attempts. Has anyone else run into this? What could be the cause?

drings-liu commented 5 years ago

To add: installation works fine in all-in-one (single-node) mode; only the multi-node installation fails.

FeynmanZhou commented 5 years ago

@drings-liu For a multi-node installation, storage must be configured in advance for the installation to succeed. Which storage type did you configure in vars.yml before the multi-node installation? Did you configure one of the storage options recommended in the docs: https://docs.kubesphere.io/advanced-v1.0/zh-CN/installation/storage-configuration/
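
As a quick sanity check (plain kubectl, nothing KubeSphere-specific), you can verify after configuration that a StorageClass exists and that one of them is marked as the default:

# The default StorageClass is shown with "(default)" next to its name
kubectl get sc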

ss178994161 commented 5 years ago

I configured storage according to the docs and still hit the same problem: the installed pods sit in Pending and never recover. I have reinstalled several times.

root@master1:~/kubesphere-all-advanced-2.0.0-dev/conf# /usr/local/bin/kubectl -n openpitrix-system get pod
NAME                                                     READY   STATUS                  RESTARTS   AGE
openpitrix-api-gateway-deployment-744c986587-4ngf5       0/1     Init:ImagePullBackOff   0          13m
openpitrix-app-manager-deployment-7695b55c9b-tfgvk       0/1     Init:ImagePullBackOff   0          13m
openpitrix-category-manager-deployment-8d6cc4ff5-5wzwr   0/1     Init:ImagePullBackOff   0          13m
openpitrix-cluster-manager-deployment-58ff94d547-ngvq4   0/1     Init:ImagePullBackOff   0          13m
openpitrix-db-deployment-5f56b5b5cb-529gm                0/1     Pending                 0          13m
openpitrix-etcd-deployment-59d56647b6-8vxkt              0/1     Pending                 0          13m
openpitrix-iam-service-deployment-d8968f77b-fwzr8        0/1     Init:ImagePullBackOff   0          13m
openpitrix-job-manager-deployment-94577f68-wcn85         0/1     Init:ImagePullBackOff   0          13m
openpitrix-minio-deployment-6b58ccc587-hjlfq             0/1     Pending                 0          13m
openpitrix-repo-indexer-deployment-76d5564d45-84q6m      0/1     Init:ImagePullBackOff   0          13m
openpitrix-repo-manager-deployment-df4f759f8-fkvtr       0/1     Init:ImagePullBackOff   0          13m
openpitrix-runtime-manager-deployment-dd5fbbc4b-f5kwd    0/1     Init:ImagePullBackOff   0          13m
openpitrix-task-manager-deployment-544f6db9fc-5r96d      0/1     Init:ImagePullBackOff   0          13m
root@master1:~/kubesphere-all-advanced-2.0.0-dev/conf#
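
Two distinct failures are visible in this listing: the db/etcd/minio pods are Pending (typically meaning no PersistentVolume could be provisioned and bound), while the others are stuck in Init:ImagePullBackOff (an init-container image cannot be pulled). The Events section of kubectl describe usually gives the underlying reason for each; for example, using pod names from the listing above:

# Scheduler events at the bottom explain why the pod is Pending
/usr/local/bin/kubectl -n openpitrix-system describe pod openpitrix-db-deployment-5f56b5b5cb-529gm
# Image-pull events show which image and registry failed
/usr/local/bin/kubectl -n openpitrix-system describe pod openpitrix-api-gateway-deployment-744c986587-4ngf5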

The installation configuration files are as follows:

root@master1:~/kubesphere-all-advanced-2.0.0-dev/conf# cat  hosts.ini
; Parameters:
;  ansible_connection: Connection type to the host.
;  ansible_host: The name of the host to connect to.
;  ip: The ip of the host to connect to.
;  ansible_user: The default ssh user name to use.
;  ansible_ssh_pass: The ssh password to use.
;  ansible_become_pass: Allows you to set the privilege escalation password.

; If the installer is run from a non-root account that already has sudo privileges, you can reference the following configuration.
; e.g
;  master ansible_connection=local  ip=192.168.0.5  ansible_user=ubuntu  ansible_become_pass=Qcloud@123
;  node1  ansible_host=192.168.0.6  ip=192.168.0.6  ansible_user=ubuntu  ansible_become_pass=Qcloud@123
;  node2  ansible_host=192.168.0.8  ip=192.168.0.8  ansible_user=ubuntu  ansible_become_pass=Qcloud@123

; It is recommended to use the root account to install; the following configuration uses root by default

[all]
master1 ansible_connection=local  ip=192.168.0.6 ansible_user=ubuntu  ansible_become_pass=1qaz@WSX
master2 ansible_host=192.168.0.7  ip=192.168.0.7 ansible_user=ubuntu  ansible_become_pass=1qaz@WSX
master3 ansible_host=192.168.0.8  ip=192.168.0.8 ansible_user=ubuntu  ansible_become_pass=1qaz@WSX
node1  ansible_host=192.168.0.9  ip=192.168.0.9 ansible_user=ubuntu  ansible_become_pass=1qaz@WSX

[kube-master]
master1
master2
master3

[local-registry]
master1

[kube-node]
node1

[etcd]
master1
master2
master3

[k8s-cluster:children]
kube-node
kube-master
root@master1:~/kubesphere-all-advanced-2.0.0-dev/conf#
root@master1:~/kubesphere-all-advanced-2.0.0-dev/conf# cat vars.yml
#config
######################################################################
# Storage configuration
######################################################################
# Local volume provisioner deployment (all-in-one installations only)
local_volume_provisioner_enabled: false
local_volume_provisioner_storage_class: local
local_volume_is_default_class: false

# NFS-in-K8S provisioner deployment
nfs_in_k8s_enable: false
nfs_in_k8s_is_default_class: false

# QingCloud CSI
qingcloud_csi_enabled: true
qingcloud_csi_is_default_class: true
# Access key pair can be created in QingCloud console
qingcloud_access_key_id: ZAOSPUSRTKNMHPBJFVXL
qingcloud_secret_access_key: 9Lua0rujVMB5wuDEithleiQHNUGDg2z9rGgJRedt
# Zone should be the same as the Kubernetes cluster's zone
qingcloud_zone: pek3b
# QingCloud IaaS platform service url.
qingcloud_host: api.qingcloud.com
qingcloud_port: 443
qingcloud_protocol: https
qingcloud_uri: /iaas
qingcloud_connection_retries: 3
qingcloud_connection_timeout: 30
# The type of volume in the QingCloud IaaS platform:
# 0 represents high performance volume
# 3 represents super high performance volume
# 1 or 2 represents high capacity volume, depending on the cluster's zone
# 5 represents enterprise distributed SAN (NeonSAN) volume
# 100 represents basic volume
# 200 represents SSD enterprise volume
qingcloud_type: 0
qingcloud_maxSize: 500
qingcloud_minSize: 10
qingcloud_stepSize: 10
qingcloud_fsType: ext4
# 1 means single replica, 2 means multiple replicas. Default 2.
disk_replica: 2

# Ceph_rbd  deployment
ceph_rbd_enabled: false
ceph_rbd_is_default_class: false
ceph_rbd_storage_class: rbd
# e.g. ceph_rbd_monitors:
#   - 172.24.0.1:6789
#   - 172.24.0.2:6789
#   - 172.24.0.3:6789
ceph_rbd_monitors:
  - SHOULD_BE_REPLACED
ceph_rbd_admin_id: admin
# e.g. ceph_rbd_admin_secret: AQAnwihbXo+uDxAAD0HmWziVgTaAdai90IzZ6Q==
ceph_rbd_admin_secret: SHOULD_BE_REPLACED
ceph_rbd_pool: rbd
ceph_rbd_user_id: admin
# e.g. ceph_rbd_user_secret: AQAnwihbXo+uDxAAD0HmWziVgTaAdai90IzZ6Q==
ceph_rbd_user_secret: SHOULD_BE_REPLACED
ceph_rbd_fsType: ext4
ceph_rbd_imageFormat: 1
#ceph_rbd_imageFeatures: layering

# NFS-Client provisioner deployment
nfs_client_enable: false
nfs_client_is_default_class: false
# Address of the NFS server (IP or hostname)
nfs_server: SHOULD_BE_REPLACED
# Basepath of the mount point to be used
nfs_path: SHOULD_BE_REPLACED

# NeonSAN CSI
neonsan_csi_enabled: false
neonsan_csi_is_default_class: false
# csi-neonsan container option protocol: TCP or RDMA
neonsan_csi_protocol: TCP
# address of the NeonSAN server
neonsan_server_address: IP:PORT
# cluster_name of the NeonSAN server
neonsan_cluster_name: CLUSTER_NAME
# the name of the volume storage pool
neonsan_server_pool: kube
# NeonSAN image replica count
neonsan_server_replicas: 1
# set the increment of volume size in GiB
neonsan_server_stepSize: 10
# the file system to use for the volume
neonsan_server_fsType: ext4
client_tcp_no_delay: 1
client_io_depth: 64
client_io_timeout: 30
conn_timeout: 8
open_volume_timeout: 180

# GlusterFS  provisioner deployment
glusterfs_provisioner_enabled: false
glusterfs_is_default_class: false
glusterfs_provisioner_storage_class: glusterfs
glusterfs_provisioner_restauthenabled: true
# e.g. glusterfs_provisioner_resturl: http://192.168.0.4:8080
glusterfs_provisioner_resturl: SHOULD_BE_REPLACED
# e.g. glusterfs_provisioner_clusterid: 6a6792ed25405eaa6302da99f2f5e24b
glusterfs_provisioner_clusterid: SHOULD_BE_REPLACED
glusterfs_provisioner_restuser: admin
glusterfs_provisioner_secretName: heketi-secret
glusterfs_provisioner_gidMin: 40000
glusterfs_provisioner_gidMax: 50000
glusterfs_provisioner_volumetype: replicate:2
# e.g. jwt_admin_key: 123456
jwt_admin_key: SHOULD_BE_REPLACED

######################################################################
# Cluster configuration
######################################################################
pkg_download_port: 5080
## Change this to use another Kubernetes version
ks_version: 2.0.0-dev
kube_version: v1.13.5
etcd_version: v3.2.18
openpitrix_version: v0.3.5
# Choose network plugin (calico or flannel)
kube_network_plugin: calico

# Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.33.0.0/18

# internal network. When used, it will assign IP
# addresses from this range to individual pods.
# This network must be unused in your network infrastructure!
kube_pods_subnet: 10.33.64.0/18

# Kube-proxy proxyMode configuration.
# Can be ipvs, iptables
kube_proxy_mode: ipvs

# Configure the number of pods that can run on a single node
# default is equal to the application default
kubelet_max_pods: 110

# DNS configuration.
# Can be kubedns, coredns
dns_mode: coredns

# Access Port of KubeSphere
# 30000-32767 (30180/30280/30380 are not allowed)
console_port: 30880

## External LB example config
## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
loadbalancer_apiserver:
  address: 192.168.0.253
  port: 6443

# Monitoring
prometheus_memory_size: 400Mi
prometheus_volume_size: 20Gi

# Logging
kibana_enable: false
elasticsearch_volume_size: 20Gi

# Notification (Including Jenkins Notify)
EMAIL_SMTP_HOST: mail.app-center.com.cn
EMAIL_FROM_ADDR: admin@app-center.com.cn
EMAIL_FROM_NAME: KubeSphere Notify
EMAIL_USE_SSL: false
EMAIL_SMTP_PORT: 25
EMAIL_FROM_PASS: password

# Jenkins deployment

jenkins_memory_lim: 8Gi
jenkins_memory_req: 4Gi
Java_Opts: -Xms3g -Xmx6g -XX:MaxPermSize=512m -XX:MaxRAM=8g

JenkinsLocationUrl: jenkins.devops.kubesphere.local

# harbor deployment
harbor_enable: false
harbor_domain: harbor.devops.kubesphere.local

#GitLab deployment
gitlab_enable: false
gitlab_hosts_domain: gitlab.devops.kubesphere.local

## Container Engine Acceleration
## Use nvidia gpu acceleration in containers
# nvidia_accelerator_enabled: true
# nvidia_gpu_nodes:
#   - kube-gpu-001

## sonarqube
sonarqube_enable: true
## If you already have a sonar server,  please fill in the following parameters.
#sonar_server_url: SHOULD_BE_REPLACED
#sonar_server_token: SHOULD_BE_REPLACED
root@master1:~/kubesphere-all-advanced-2.0.0-dev/conf#

wnxn commented 5 years ago

@ss178994161 Please run the following command and share the result: kubectl get pvc --all-namespaces
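
If any PVC there is stuck in Pending, describing it usually reveals why the provisioner could not bind a volume (the PVC name below is a placeholder, not taken from this cluster):

/usr/local/bin/kubectl -n openpitrix-system describe pvc <pvc-name>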

rayzhou2017 commented 5 years ago

It is usually a storage problem.
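
If storage is indeed the culprit, the dynamic provisioner's own logs are the next place to look. Assuming the QingCloud CSI provisioner enabled in vars.yml above was actually deployed (the pod name below is illustrative and varies per cluster):

# Locate the CSI pods, then read the provisioner's logs
kubectl get pod --all-namespaces | grep -i csi
kubectl -n kube-system logs <csi-provisioner-pod>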