kubesphere / ks-installer

Install KubeSphere on existing Kubernetes cluster
https://kubesphere.io
Apache License 2.0

During installation I hit this problem: could not find a ready tiller pod #93

Open Forest-L opened 5 years ago

Forest-L commented 5 years ago

During installation I hit this problem: Wednesday 18 September 2019 01:24:28 -0400 (0:00:00.639) 0:02:32.879 *** fatal: [ks-allinone]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-sonarqube /etc/kubesphere/sonarqube/sonarqube-0.13.5.tgz -f /etc/kubesphere/sonarqube/custom-values-sonarqube.yaml --namespace kubesphere-devops-system", "delta": "0:00:00.203654", "end": "2019-09-18 01:24:29.367462", "msg": "non-zero return code", "rc": 1, "start": "2019-09-18 01:24:29.163808", "stderr": "Error: could not find a ready tiller pod", "stderr_lines": ["Error: could not find a ready tiller pod"], "stdout": "", "stdout_lines": []}

PLAY RECAP ** ks-allinone : ok=172 changed=7 unreachable=0 failed=1

Originally posted by @yeyouqun in https://github.com/kubesphere/ks-installer/issues/23#issuecomment-532538305

Forest-L commented 5 years ago

1. What is your machine configuration? 2. Judging from the error message, Helm was not installed successfully; run kubectl get pod -n kube-system|grep tiller to check whether the Tiller pod is healthy. 3. You can install Helm manually, or otherwise rerun the script.
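For reference, a minimal check-and-reinstall sequence for Tiller might look like the sketch below. It assumes Helm v2 with a dedicated tiller service account (the account name is an assumption, not something stated in this thread) and reuses the Tiller image from this thread's registry:

# Check whether the Tiller pod exists and is Running
kubectl get pod -n kube-system | grep tiller
# If Tiller is missing or broken, a manual (re)install could look like this:
kubectl -n kube-system create serviceaccount tiller        # assumed service account name
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --upgrade --service-account tiller \
  --tiller-image dockerhub.qingcloud.com/kubernetes_helm/tiller:v2.12.3   # image taken from this thread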

Forest-L commented 5 years ago

https://github.com/kubesphere/kubesphere/issues/726#issue

yeyouqun commented 5 years ago

[root@ks-allinone ~]# helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Error: could not find a ready tiller pod

[root@ks-allinone ~]# kubectl get pod -n kube-system|grep tiller
tiller-deploy-dbb85cb99-9qv7r   0/1   Pending   0   18h

Forest-L commented 5 years ago

Check the logs with: kubectl describe pod -n kube-system tiller-deploy-dbb85cb99-9qv7r

yeyouqun commented 5 years ago

[root@ks-allinone scripts]# kubectl describe pod -n kube-system tiller-deploy-dbb85cb99-9qv7r

Error from server (NotFound): pods "tiller-deploy-dbb85cb99-9qv7r" not found


Forest-L commented 5 years ago

kubectl describe pod -n kube-system $(kubectl get pod -n kube-system|grep tiller|awk '{print $1}')

yeyouqun commented 5 years ago

[root@ks-allinone scripts]# kubectl describe pod -n kube-system Name: calico-kube-controllers-fb99bb79d-284kv Namespace: kube-system Priority: 2000000000 PriorityClassName: system-cluster-critical Node: Labels: k8s-app=calico-kube-controllers kubernetes.io/cluster-service=true pod-template-hash=fb99bb79d Annotations: Status: Pending IP:
Controlled By: ReplicaSet/calico-kube-controllers-fb99bb79d Containers: calico-kube-controllers: Image: dockerhub.qingcloud.com/calico/kube-controllers:v3.1.3 Port: Host Port: Limits: cpu: 100m memory: 256M Requests: cpu: 30m memory: 64M Environment: ETCD_ENDPOINTS: https://172.17.111.190:2379 ETCD_CA_CERT_FILE: /etc/calico/certs/ca_cert.crt ETCD_CERT_FILE: /etc/calico/certs/cert.crt ETCD_KEY_FILE: /etc/calico/certs/key.pem Mounts: /etc/calico/certs from etcd-certs (ro) /var/run/secrets/kubernetes.io/serviceaccount from calico-kube-controllers-token-vgcvw (ro) Conditions: Type Status PodScheduled False Volumes: etcd-certs: Type: HostPath (bare host directory volume) Path: /etc/calico/certs HostPathType:
calico-kube-controllers-token-vgcvw: Type: Secret (a volume populated by a Secret) SecretName: calico-kube-controllers-token-vgcvw Optional: false QoS Class: Burstable Node-Selectors: Tolerations: :NoSchedule node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events:

Name: calico-node-6djdg Namespace: kube-system Priority: 2000001000 PriorityClassName: system-node-critical Node: ks-allinone/172.17.111.190 Start Time: Tue, 17 Sep 2019 23:06:39 -0400 Labels: controller-revision-hash=5bfcdd6f5d k8s-app=calico-node pod-template-generation=1 Annotations: kubespray.etcd-cert/serial: B34C7BD3DAD39578 prometheus.io/port: 9091 prometheus.io/scrape: true Status: Running IP: 172.17.111.190 Controlled By: DaemonSet/calico-node Containers: calico-node: Container ID: docker://946003e835a8fb1f5801e6255a4b7ec1dd70ece87d0cb0a62b24756f9d286cec Image: dockerhub.qingcloud.com/calico/node:v3.1.3 Image ID: docker-pullable://dockerhub.qingcloud.com/calico/node@sha256:9871f4dde9eab9fd804b12f3114da36505ff5c220e2323b7434eec24e3b23ac5 Port: Host Port: State: Running Started: Wed, 18 Sep 2019 20:46:01 -0400 Last State: Terminated Reason: Completed Exit Code: 0 Started: Tue, 17 Sep 2019 23:06:43 -0400 Finished: Wed, 18 Sep 2019 20:45:45 -0400 Ready: True Restart Count: 1 Limits: cpu: 300m memory: 500M Requests: cpu: 150m memory: 64M Liveness: http-get http://127.0.0.1:9099/liveness delay=10s timeout=1s period=10s #success=1 #failure=6 Readiness: http-get http://127.0.0.1:9099/readiness delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: ETCD_ENDPOINTS: <set to the key 'etcd_endpoints' of config map 'calico-config'> Optional: false CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false CLUSTER_TYPE: <set to the key 'cluster_type' of config map 'calico-config'> Optional: false CALICO_K8S_NODE_REF: (v1:spec.nodeName) CALICO_DISABLE_FILE_LOGGING: true FELIX_DEFAULTENDPOINTTOHOSTACTION: RETURN FELIX_HEALTHHOST: localhost CALICO_IPV4POOL_IPIP: Off FELIX_IPV6SUPPORT: false FELIX_LOGSEVERITYSCREEN: info FELIX_PROMETHEUSMETRICSENABLED: false FELIX_PROMETHEUSMETRICSPORT: 9091 FELIX_PROMETHEUSGOMETRICSENABLED: true FELIX_PROMETHEUSPROCESSMETRICSENABLED: true ETCD_CA_CERT_FILE: <set to the key 'etcd_ca' of config map 'calico-config'> Optional: false ETCD_KEY_FILE: <set to the key 'etcd_key' of config map 'calico-config'> Optional: false ETCD_CERT_FILE: <set to the key 'etcd_cert' of config map 'calico-config'> Optional: false IP: (v1:status.hostIP) NODENAME: (v1:spec.nodeName) FELIX_HEALTHENABLED: true FELIX_IGNORELOOSERPF: False Mounts: /calico-secrets from etcd-certs (rw) /lib/modules from lib-modules (ro) /var/lib/calico from var-lib-calico (rw) /var/run/calico from var-run-calico (rw) /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-4fpdj (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: lib-modules: Type: HostPath (bare host directory volume) Path: /lib/modules HostPathType:
var-run-calico: Type: HostPath (bare host directory volume) Path: /var/run/calico HostPathType:
var-lib-calico: Type: HostPath (bare host directory volume) Path: /var/lib/calico HostPathType:
cni-bin-dir: Type: HostPath (bare host directory volume) Path: /opt/cni/bin HostPathType:
cni-net-dir: Type: HostPath (bare host directory volume) Path: /etc/cni/net.d HostPathType:
etcd-certs: Type: HostPath (bare host directory volume) Path: /etc/calico/certs HostPathType:
calico-node-token-4fpdj: Type: Secret (a volume populated by a Secret) SecretName: calico-node-token-4fpdj Optional: false QoS Class: Burstable Node-Selectors: Tolerations:
CriticalAddonsOnly node.kubernetes.io/disk-pressure:NoSchedule node.kubernetes.io/memory-pressure:NoSchedule node.kubernetes.io/network-unavailable:NoSchedule node.kubernetes.io/not-ready:NoExecute node.kubernetes.io/unreachable:NoExecute node.kubernetes.io/unschedulable:NoSchedule Events:

Name: coredns-bbbb94784-l69p2 Namespace: kube-system Priority: 2000000000 PriorityClassName: system-cluster-critical Node: Labels: k8s-app=coredns pod-template-hash=bbbb94784 Annotations: seccomp.security.alpha.kubernetes.io/pod: docker/default Status: Pending IP:
Controlled By: ReplicaSet/coredns-bbbb94784 Containers: coredns: Image: dockerhub.qingcloud.com/google_containers/coredns:1.4.0 Ports: 53/UDP, 53/TCP, 9153/TCP Host Ports: 0/UDP, 0/TCP, 0/TCP Args: -conf /etc/coredns/Corefile Limits: memory: 170Mi Requests: cpu: 100m memory: 70Mi Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5 Environment: Mounts: /etc/coredns from config-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-z9sz7 (ro) Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: coredns Optional: false coredns-token-z9sz7: Type: Secret (a volume populated by a Secret) SecretName: coredns-token-z9sz7 Optional: false QoS Class: Burstable Node-Selectors: beta.kubernetes.io/os=linux Tolerations: CriticalAddonsOnly node-role.kubernetes.io/master:NoSchedule node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events:

Name: dns-autoscaler-89c7bbd57-wzs86 Namespace: kube-system Priority: 2000000000 PriorityClassName: system-cluster-critical Node: Labels: k8s-app=dns-autoscaler pod-template-hash=89c7bbd57 Annotations: seccomp.security.alpha.kubernetes.io/pod: docker/default Status: Pending IP:
Controlled By: ReplicaSet/dns-autoscaler-89c7bbd57 Containers: autoscaler: Image: dockerhub.qingcloud.com/google_containers/cluster-proportional-autoscaler-amd64:1.3.0 Port: Host Port: Command: /cluster-proportional-autoscaler --namespace=kube-system --default-params={"linear":{"preventSinglePointFailure":false,"coresPerReplica":20,"nodesPerReplica":10,"min":1}} --logtostderr=true --v=2 --configmap=dns-autoscaler --target=Deployment/coredns Requests: cpu: 20m memory: 10Mi Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from dns-autoscaler-token-fgddj (ro) Volumes: dns-autoscaler-token-fgddj: Type: Secret (a volume populated by a Secret) SecretName: dns-autoscaler-token-fgddj Optional: false QoS Class: Burstable Node-Selectors: beta.kubernetes.io/os=linux Tolerations: CriticalAddonsOnly node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events:

Name: kube-apiserver-ks-allinone Namespace: kube-system Priority: 2000000000 PriorityClassName: system-cluster-critical Node: ks-allinone/172.17.111.190 Start Time: Tue, 17 Sep 2019 23:05:09 -0400 Labels: component=kube-apiserver tier=control-plane Annotations: kubernetes.io/config.hash: 1a42c79c46e60744c61bb5dc9d286631 kubernetes.io/config.mirror: 1a42c79c46e60744c61bb5dc9d286631 kubernetes.io/config.seen: 2019-09-17T23:05:07.679711169-04:00 kubernetes.io/config.source: file scheduler.alpha.kubernetes.io/critical-pod: Status: Running IP: 172.17.111.190 Containers: kube-apiserver: Container ID: docker://e594aa4368b72eaa36fc61909b5e5d03af341bcfadf4d343a9260bc14345e9d6 Image: dockerhub.qingcloud.com/google_containers/hyperkube:v1.13.5 Image ID: docker-pullable://dockerhub.qingcloud.com/google_containers/hyperkube@sha256:548b8fa78eed795509184885ad6ddc6efc1e8cae3b21729982447ca124dfe364 Port: Host Port: Command: kube-apiserver --allow-privileged=true --apiserver-count=1 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --endpoint-reconciler-type=lease --feature-gates=KubeletPluginsWatcher=false,CSINodeInfo=false,CSIDriverRegistry=false,RotateKubeletClientCertificate=true --insecure-port=0 --kubelet-preferred-address-types=InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP --runtime-config=admissionregistration.k8s.io/v1alpha1 --service-node-port-range=30000-32767 --storage-backend=etcd3 --advertise-address=172.17.111.190 --client-ca-file=/etc/kubernetes/ssl/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem --etcd-certfile=/etc/ssl/etcd/ssl/node-ks-allinone.pem --etcd-keyfile=/etc/ssl/etcd/ssl/node-ks-allinone-key.pem --etcd-servers=https://172.17.111.190:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/ssl/apiserver-kubelet-client.key --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/ssl/sa.pub --service-cluster-ip-range=10.233.0.0/18 --tls-cert-file=/etc/kubernetes/ssl/apiserver.crt --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key State: Terminated Reason: Error Exit Code: 1 Started: Wed, 18 Sep 2019 22:29:48 -0400 Finished: Wed, 18 Sep 2019 22:29:49 -0400 Last State: Terminated Reason: Error Exit Code: 1 Started: Wed, 18 Sep 2019 22:24:40 -0400 Finished: Wed, 18 Sep 2019 22:24:40 -0400 Ready: False Restart Count: 25 Requests: cpu: 250m Liveness: http-get https://172.17.111.190:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8 Environment: Mounts: /etc/kubernetes/ssl from k8s-certs (ro) /etc/pki from etc-pki (ro) /etc/ssl/certs from ca-certs (ro) /etc/ssl/etcd/ssl from etcd-certs-0 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: ca-certs: Type: HostPath (bare host directory volume) Path: /etc/ssl/certs HostPathType: DirectoryOrCreate etc-pki: Type: HostPath (bare host directory volume) Path: /etc/pki HostPathType: DirectoryOrCreate etcd-certs-0: Type: HostPath (bare host directory volume) Path: /etc/ssl/etcd/ssl HostPathType: DirectoryOrCreate k8s-certs: 
Type: HostPath (bare host directory volume) Path: /etc/kubernetes/ssl HostPathType: DirectoryOrCreate QoS Class: Burstable Node-Selectors: Tolerations: :NoExecute Events:

Name: kube-controller-manager-ks-allinone Namespace: kube-system Priority: 2000000000 PriorityClassName: system-cluster-critical Node: ks-allinone/172.17.111.190 Start Time: Tue, 17 Sep 2019 23:05:09 -0400 Labels: component=kube-controller-manager tier=control-plane Annotations: kubernetes.io/config.hash: 0b2469e80be2053eb020184d4a0acf4f kubernetes.io/config.mirror: 0b2469e80be2053eb020184d4a0acf4f kubernetes.io/config.seen: 2019-09-17T23:05:07.679724609-04:00 kubernetes.io/config.source: file scheduler.alpha.kubernetes.io/critical-pod: Status: Running IP: 172.17.111.190 Containers: kube-controller-manager: Container ID: docker://1f5acdd54eb2279fe16b2f08380a61517a87a338702068fa9bd12ce8bb955beb Image: dockerhub.qingcloud.com/google_containers/hyperkube:v1.13.5 Image ID: docker-pullable://dockerhub.qingcloud.com/google_containers/hyperkube@sha256:548b8fa78eed795509184885ad6ddc6efc1e8cae3b21729982447ca124dfe364 Port: Host Port: Command: kube-controller-manager --address=0.0.0.0 --feature-gates=KubeletPluginsWatcher=false,CSINodeInfo=false,CSIDriverRegistry=false,RotateKubeletClientCertificate=true --node-cidr-mask-size=24 --node-monitor-grace-period=40s --node-monitor-period=5s --pod-eviction-timeout=5m0s --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --client-ca-file=/etc/kubernetes/ssl/ca.crt --cluster-cidr=10.233.64.0/18 --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.crt --cluster-signing-key-file=/etc/kubernetes/ssl/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/ssl/ca.crt --service-account-private-key-file=/etc/kubernetes/ssl/sa.key --use-service-account-credentials=true State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Wed, 18 Sep 2019 22:25:29 -0400 Finished: Wed, 18 Sep 2019 22:25:32 -0400 Ready: False Restart Count: 24 Requests: cpu: 200m Liveness: http-get http://0.0.0.0:10252/healthz delay=15s timeout=15s period=10s #success=1 #failure=8 Environment: Mounts: /etc/kubernetes/controller-manager.conf from kubeconfig (ro) /etc/kubernetes/ssl from k8s-certs (ro) /etc/pki from etc-pki (ro) /etc/ssl/certs from ca-certs (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: ca-certs: Type: HostPath (bare host directory volume) Path: /etc/ssl/certs HostPathType: DirectoryOrCreate etc-pki: Type: HostPath (bare host directory volume) Path: /etc/pki HostPathType: DirectoryOrCreate k8s-certs: Type: HostPath (bare host directory volume) Path: /etc/kubernetes/ssl HostPathType: DirectoryOrCreate kubeconfig: Type: HostPath (bare host directory volume) Path: /etc/kubernetes/controller-manager.conf HostPathType: FileOrCreate QoS Class: Burstable Node-Selectors: Tolerations: :NoExecute Events:

Name: kube-proxy-htzw7 Namespace: kube-system Priority: 2000001000 PriorityClassName: system-node-critical Node: Labels: controller-revision-hash=5f46bffb5c k8s-app=kube-proxy pod-template-generation=6 Annotations: scheduler.alpha.kubernetes.io/critical-pod: Status: Pending IP:
Controlled By: DaemonSet/kube-proxy Containers: kube-proxy: Image: dockerhub.qingcloud.com/google_containers/hyperkube:v1.13.5 Port: Host Port: Command: /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME) --hostname-override=${NODE_NAME} --hostname-override=${NODE_NAME} --hostname-override=${NODE_NAME} --hostname-override=${NODE_NAME} Environment: NODE_NAME: (v1:spec.nodeName) Mounts: /lib/modules from lib-modules (ro) /run/xtables.lock from xtables-lock (rw) /var/lib/kube-proxy from kube-proxy (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-vzswn (ro) Volumes: kube-proxy: Type: ConfigMap (a volume populated by a ConfigMap) Name: kube-proxy Optional: false xtables-lock: Type: HostPath (bare host directory volume) Path: /run/xtables.lock HostPathType: FileOrCreate lib-modules: Type: HostPath (bare host directory volume) Path: /lib/modules HostPathType:
kube-proxy-token-vzswn: Type: Secret (a volume populated by a Secret) SecretName: kube-proxy-token-vzswn Optional: false QoS Class: BestEffort Node-Selectors: beta.kubernetes.io/os=linux Tolerations:
CriticalAddonsOnly node.kubernetes.io/disk-pressure:NoSchedule node.kubernetes.io/memory-pressure:NoSchedule node.kubernetes.io/network-unavailable:NoSchedule node.kubernetes.io/not-ready:NoExecute node.kubernetes.io/unreachable:NoExecute node.kubernetes.io/unschedulable:NoSchedule Events:

Name: kube-scheduler-ks-allinone Namespace: kube-system Priority: 2000000000 PriorityClassName: system-cluster-critical Node: ks-allinone/ Start Time: Tue, 17 Sep 2019 23:05:09 -0400 Labels: component=kube-scheduler tier=control-plane Annotations: kubernetes.io/config.hash: 38106f5dc0568e0476cde8910ac70ecd kubernetes.io/config.mirror: 38106f5dc0568e0476cde8910ac70ecd kubernetes.io/config.seen: 2019-09-17T23:05:07.67972864-04:00 kubernetes.io/config.source: file scheduler.alpha.kubernetes.io/critical-pod: Status: Failed Reason: Preempting Message: Preempted in order to admit critical pod IP:
Containers: kube-scheduler: Image: dockerhub.qingcloud.com/google_containers/hyperkube:v1.13.5 Port: Host Port: Command: kube-scheduler --address=0.0.0.0 --feature-gates=KubeletPluginsWatcher=false,CSINodeInfo=false,CSIDriverRegistry=false,RotateKubeletClientCertificate=true --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true Requests: cpu: 100m Liveness: http-get http://0.0.0.0:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8 Environment: Mounts: /etc/kubernetes/scheduler.conf from kubeconfig (ro) Volumes: kubeconfig: Type: HostPath (bare host directory volume) Path: /etc/kubernetes/scheduler.conf HostPathType: FileOrCreate QoS Class: Burstable Node-Selectors: Tolerations: :NoExecute Events:

Name: tiller-deploy-6f9697dfd9-gblhf Namespace: kube-system Priority: 2000000000 PriorityClassName: system-cluster-critical Node: Labels: app=helm name=tiller pod-template-hash=6f9697dfd9 Annotations: Status: Pending IP:
Controlled By: ReplicaSet/tiller-deploy-6f9697dfd9 Containers: tiller: Image: dockerhub.qingcloud.com/kubernetes_helm/tiller:v2.12.3 Ports: 44134/TCP, 44135/TCP Host Ports: 0/TCP, 0/TCP Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3 Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3 Environment: TILLER_NAMESPACE: kube-system TILLER_HISTORY_MAX: 0 Mounts: /var/run/secrets/kubernetes.io/serviceaccount from tiller-token-55hq4 (ro) Volumes: tiller-token-55hq4: Type: Secret (a volume populated by a Secret) SecretName: tiller-token-55hq4 Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: [root@ks-allinone scripts]# kubectl get pod -n kube-system|grep tiller|awk '{print $1}' tiller-deploy-6f9697dfd9-gblhf
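As an aside, when a pod such as tiller-deploy stays Pending with an empty Events section, checking the scheduler's view of the node usually reveals the bottleneck (here, resource pressure on the single all-in-one node). A few generic diagnostic commands, assuming the node name ks-allinone from this thread:

kubectl describe pod -n kube-system $(kubectl get pod -n kube-system|grep tiller|awk '{print $1}') | grep -A 10 Events
kubectl describe node ks-allinone | grep -A 15 "Allocated resources"
kubectl get events -n kube-system --sort-by=.lastTimestamp | tail -20
free -m        # confirm how much memory is actually left on the host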

Forest-L commented 5 years ago

What is your machine configuration? 1. You can send a TeamViewer (tv) session or the machine's access info to the KubeSphere mailbox and we will take a look remotely. @yeyouqun

yeyouqun commented 5 years ago

[root@ks-allinone scripts]# cat /proc/meminfo

MemTotal: 1882620 kB

processor : 0

vendor_id : GenuineIntel

cpu family : 6

model : 45

model name : Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz

stepping : 7

……

It is a virtual machine with 2 GB of RAM and a single CPU. The installation mode is All-in-One.

[root@ks-allinone scripts]# cat /etc/os-release

NAME="CentOS Linux"

VERSION="7 (Core)"

ID="centos"

ID_LIKE="rhel fedora"

VERSION_ID="7"

PRETTY_NAME="CentOS Linux 7 (Core)"

ANSI_COLOR="0;31"

CPE_NAME="cpe:/o:centos:centos:7"

HOME_URL="https://www.centos.org/"

BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"

CENTOS_MANTISBT_PROJECT_VERSION="7"

REDHAT_SUPPORT_PRODUCT="centos"

REDHAT_SUPPORT_PRODUCT_VERSION="7"

I already ran yum update today.

Kernel version:

Linux ks-allinone 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

TV:

1 402 707 002/ 6z7t9x

I am connected remotely via XShell.


Forest-L commented 5 years ago

The CPU core count is enough, but the memory is too small. Increase the memory; roughly 10 GB is needed.

yeyouqun commented 5 years ago

OK, I'll try again.

Forest-L commented 5 years ago

Don't post the TeamViewer info here; just send it to the mailbox (your TeamViewer version is also a bit too new). If you don't have that much memory, you can follow the configuration guide on the official site and leave some modules uninstalled so you can try out KubeSphere first.
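A sketch of how one might trim the footprint before rerunning the installer: the component switches live in the installer's config file (typically conf/common.yaml in the 2.x installer). The key names below are assumptions and should be verified against your copy of the file:

grep -n "_enable" conf/common.yaml        # list which pluggable components the config actually exposes
# Assumed key names -- adjust to whatever the grep above shows:
sed -i 's/^sonarqube_enable: true/sonarqube_enable: false/' conf/common.yaml
sed -i 's/^devops_enable: true/devops_enable: false/' conf/common.yaml
sed -i 's/^openpitrix_enable: true/openpitrix_enable: false/' conf/common.yaml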

yeyouqun commented 5 years ago

Now it's a different problem: fatal: [ks-allinone]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-sonarqube /etc/kubesphere/sonarqube/sonarqube-0.13.5.tgz -f /etc/kubesphere/sonarqube/custom-values-sonarqube.yaml --namespace kubesphere-devops-system", "delta": "0:00:02.446402", "end": "2019-09-19 01:02:51.264600", "msg": "non-zero return code", "rc": 1, "start": "2019-09-19 01:02:48.818198", "stderr": "Error: UPGRADE FAILED: Get https://10.233.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%!D(MISSING)ks-sonarqube%!C(MISSING)OWNER%!D(MISSING)TILLER%!C(MISSING)STATUS%!D(MISSING)DEPLOYED: dial tcp 10.233.0.1:443: connect: no route to host", "stderr_lines": ["Error: UPGRADE FAILED: Get https://10.233.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%!D(MISSING)ks-sonarqube%!C(MISSING)OWNER%!D(MISSING)TILLER%!C(MISSING)STATUS%!D(MISSING)DEPLOYED: dial tcp 10.233.0.1:443: connect: no route to host"], "stdout": "", "stdout_lines": []}

yeyouqun commented 5 years ago

[root@ks-allinone scripts]# ping 10.233.0.1
PING 10.233.0.1 (10.233.0.1) 56(84) bytes of data.
64 bytes from 10.233.0.1: icmp_seq=1 ttl=64 time=0.154 ms
64 bytes from 10.233.0.1: icmp_seq=2 ttl=64 time=0.120 ms

Forest-L commented 5 years ago

1. Check whether the firewall has been turned off. 2. Check whether the DNS configuration contains any IP that cannot be pinged. @yeyouqun
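Concretely, on CentOS 7 those two checks could look like this; "no route to host" against a cluster service IP is very often a firewalld/iptables REJECT rule:

systemctl status firewalld
systemctl stop firewalld && systemctl disable firewalld   # if it was still running
iptables -L -n | grep -i reject                           # leftover REJECT rules can also cause "no route to host"
cat /etc/resolv.conf                                      # every nameserver listed here should be reachable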

yeyouqun commented 5 years ago

FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (2 retries left). FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (1 retries left). fatal: [ks-allinone]: FAILED! => {"attempts": 15, "changed": true, "cmd": "/usr/local/bin/kubectl -n openpitrix-system get pod | grep openpitrix-db-deployment | awk '{print $3}'", "delta": "0:00:00.279886", "end": "2019-09-19 03:22:20.411033", "rc": 0, "start": "2019-09-19 03:22:20.131147", "stderr": "", "stderr_lines": [], "stdout": "Pending", "stdout_lines": ["Pending"]}

PLAY RECAP ** ks-allinone : ok=256 changed=83 unreachable=0 failed=1

[root@ks-allinone scripts]# kubectl -n openpitrix-system get pod
NAME                                                       READY   STATUS    RESTARTS   AGE
openpitrix-api-gateway-deployment-587cc46874-4w8nt         0/1     Pending   0          9m25s
openpitrix-app-manager-deployment-595dcd76f-w5bgg          0/1     Pending   0          9m24s
openpitrix-category-manager-deployment-7968d789d6-88b7k    0/1     Pending   0          9m24s
openpitrix-cluster-manager-deployment-6dcd96b68b-g6m2f     0/1     Pending   0          9m24s
openpitrix-db-deployment-66dbfbd7bd-7xt5f                  0/1     Pending   0          9m27s
openpitrix-etcd-deployment-54bc9bb948-2m445                0/1     Pending   0          9m27s
openpitrix-iam-service-deployment-864df9fb6f-bc49r         0/1     Pending   0          9m23s
openpitrix-job-manager-deployment-588858bcb9-svkn7         0/1     Pending   0          9m23s
openpitrix-minio-deployment-84d5f9c94b-btnj7               0/1     Pending   0          9m26s
openpitrix-repo-indexer-deployment-5f4c895b54-nsv6v        0/1     Pending   0          9m22s
openpitrix-repo-manager-deployment-84fd5b5fdf-56k7q        0/1     Pending   0          9m23s
openpitrix-runtime-manager-deployment-5fcbb6f447-p2grt     0/1     Pending   0          9m22s
openpitrix-task-manager-deployment-59578dc9d6-bx674        0/1     Pending   0          9m22s

Forest-L commented 5 years ago

1. Check the logs of the pod openpitrix-db-deployment-66dbfbd7bd-7xt5f. 2. Check the memory, and drop the cached memory: echo 3 >/proc/sys/vm/drop_caches
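For example (note that a pod still in Pending has no container logs yet, so the describe/events output is the useful part):

kubectl -n openpitrix-system describe pod openpitrix-db-deployment-66dbfbd7bd-7xt5f | grep -A 10 Events
kubectl -n openpitrix-system logs openpitrix-db-deployment-66dbfbd7bd-7xt5f   # only once the pod has started
free -m
sync && echo 3 > /proc/sys/vm/drop_caches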

yeyouqun commented 5 years ago

FAILED - RETRYING: docker login (1 retries left). fatal: [ks-allinone]: FAILED! => {"attempts": 5, "changed": true, "cmd": "docker login -u guest -p guest dockerhub.qingcloud.com", "delta": "0:00:00.156557", "end": "2019-09-19 03:42:28.109058", "msg": "non-zero return code", "rc": 1, "start": "2019-09-19 03:42:27.952501", "stderr": "WARNING! Using --password via the CLI is insecure. Use --password-stdin.\nError response from daemon: Get https://dockerhub.qingcloud.com/v2/: dial tcp: lookup dockerhub.qingcloud.com on [::1]:53: read udp [::1]:48726->[::1]:53: read: connection refused", "stderr_lines": ["WARNING! Using --password via the CLI is insecure. Use --password-stdin.", "Error response from daemon: Get https://dockerhub.qingcloud.com/v2/: dial tcp: lookup dockerhub.qingcloud.com on [::1]:53: read udp [::1]:48726->[::1]:53: read: connection refused"], "stdout": "", "stdout_lines": []}
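The "lookup dockerhub.qingcloud.com on [::1]:53" part of that error suggests the host has no usable nameserver and is falling back to localhost. A possible fix, with an example public resolver standing in for whatever DNS server is appropriate in your environment:

cat /etc/resolv.conf                                    # check what the host currently resolves with
echo "nameserver 114.114.114.114" >> /etc/resolv.conf   # example resolver; substitute your own
systemctl restart docker
docker login -u guest -p guest dockerhub.qingcloud.com  # retry the login from the failed task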

yeyouqun commented 5 years ago

TASK [bootstrap-os : Check python-pip package] ** Thursday 19 September 2019 03:46:29 -0400 (0:00:02.104) 0:00:11.131 **** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: need more than 1 value to unpack fatal: [ks-allinone]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1568879189.63-155626075350872/AnsiballZ_yum.py\", line 113, in \n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1568879189.63-155626075350872/AnsiballZ_yum.py\", line 105, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1568879189.63-155626075350872/AnsiballZ_yum.py\", line 48, in invoke_module\n imp.load_module('main', mod, module, MOD_DESC)\n File \"/tmp/ansible_yum_payload_rLkRhJ/main.py\", line 1572, in \n File \"/tmp/ansible_yum_payload_rLkRhJ/main.py\", line 1568, in main\n File \"/tmp/ansible_yum_payload_rLkRhJ/main.py\", line 1517, in run\n File \"/tmp/ansible_yum_payload_rLkRhJ/main.py\", line 813, in list_stuff\n File \"/tmp/ansible_yum_payload_rLkRhJ/main.py\", line 754, in pkg_to_dict\nValueError: need more than 1 value to unpack\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
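This Ansible failure (a ValueError inside the yum module's pkg_to_dict) is usually triggered by unexpected extra lines in yum's own output rather than by ks-installer itself, so it is worth checking yum by hand. A few hedged checks:

yum clean all && yum makecache
yum list python-pip         # if this prints warnings or oddly formatted lines, the Ansible yum module can choke on them
yum repolist -v | head      # look for broken or half-configured repos left over from the earlier yum update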

Forest-L commented 5 years ago

Please send a TeamViewer 12 session to the official KubeSphere mailbox so we can connect and take a look.

yeyouqun commented 5 years ago

Is the official mailbox kubesphere@kubesphere.io?


Forest-L commented 5 years ago

kubesphere@yunify.com @yeyouqun

newyue588cc commented 5 years ago

I don't see this covered in the configuration: if Helm has TLS enabled, how do I add Helm's certs?

Forest-L commented 5 years ago

https://docs.azure.cn/zh-cn/aks/ingress-own-tls — see the section on creating an ingress route; not sure whether it helps you. @newyue588cc

Forest-L commented 5 years ago

https://github.com/helm/charts/blob/master/stable/magic-namespace/templates/tiller-deployment.yaml @newyue588cc — the official way to add it

newyue588cc commented 5 years ago

https://github.com/helm/charts/blob/master/stable/magic-namespace/templates/tiller-deployment.yaml @newyue588cc — the official way to add it

I can create a Secret. As far as I can tell, the ks-installer installation runs its various Ansible tasks inside a Job, so I can create a Secret with a chosen name and reference it in deploy/kubesphere-yaml. But when helm is run it needs the --tls flag added, otherwise it will hang. That probably means the ks-installer image has to be rebuilt.
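A rough sketch of that idea, assuming Helm v2 client certs (ca.cert.pem / helm.cert.pem / helm.key.pem) already exist; the secret name helm-client-certs and the mount path are made-up placeholders, while the flags are the standard Helm v2 TLS options:

# Package the client certs so the installer job can mount them (names are placeholders):
kubectl -n kubesphere-system create secret generic helm-client-certs \
  --from-file=ca.pem=ca.cert.pem \
  --from-file=cert.pem=helm.cert.pem \
  --from-file=key.pem=helm.key.pem
# Every helm call inside the job would then need the TLS flags, e.g.:
helm upgrade --install ks-sonarqube /etc/kubesphere/sonarqube/sonarqube-0.13.5.tgz \
  -f /etc/kubesphere/sonarqube/custom-values-sonarqube.yaml \
  --namespace kubesphere-devops-system \
  --tls --tls-ca-cert /root/.helm/ca.pem --tls-cert /root/.helm/cert.pem --tls-key /root/.helm/key.pem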