Forest-L opened this issue 5 years ago
1. What is the machine configuration? 2. Judging from the error message, Helm was not installed successfully; run kubectl get pod -n kube-system|grep tiller to check whether the pod is healthy. 3. You can install Helm manually, or else re-run the installer script.
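For reference, a minimal sketch of that check plus a manual Tiller (re)install for Helm v2; the service-account name and helm init flags below are the usual defaults, assumptions rather than values taken from the installer:

# check whether the Tiller pod exists and is Ready
kubectl get pod -n kube-system | grep tiller

# manually (re)initialize Tiller (Helm v2 only)
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade

# wait for the Tiller deployment to become available
kubectl -n kube-system rollout status deployment/tiller-deploy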
[root@ks-allinone ~]# helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Error: could not find a ready tiller pod
[root@ks-allinone ~]# kubectl get pod -n kube-system|grep tiller
tiller-deploy-dbb85cb99-9qv7r   0/1   Pending   0   18h
Run kubectl describe pod -n kube-system tiller-deploy-dbb85cb99-9qv7r and take a look at the logs.
[root@ks-allinone scripts]# kubectl describe pod -n kube-system tiller-deploy-dbb85cb99-9qv7r
Error from server (NotFound): pods "tiller-deploy-dbb85cb99-9qv7r" not found
kubectl describe pod -n kube-system $(kubectl get pod -n kube-system|grep tiller|awk '{print $1}')
[root@ks-allinone scripts]# kubectl describe pod -n kube-system
Name: calico-kube-controllers-fb99bb79d-284kv
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node:
Controlled By: ReplicaSet/calico-kube-controllers-fb99bb79d
Containers:
calico-kube-controllers:
Image: dockerhub.qingcloud.com/calico/kube-controllers:v3.1.3
Port:
calico-kube-controllers-token-vgcvw:
Type: Secret (a volume populated by a Secret)
SecretName: calico-kube-controllers-token-vgcvw
Optional: false
QoS Class: Burstable
Node-Selectors:
Name: calico-node-6djdg
Namespace: kube-system
Priority: 2000001000
PriorityClassName: system-node-critical
Node: ks-allinone/172.17.111.190
Start Time: Tue, 17 Sep 2019 23:06:39 -0400
Labels: controller-revision-hash=5bfcdd6f5d
k8s-app=calico-node
pod-template-generation=1
Annotations: kubespray.etcd-cert/serial: B34C7BD3DAD39578
prometheus.io/port: 9091
prometheus.io/scrape: true
Status: Running
IP: 172.17.111.190
Controlled By: DaemonSet/calico-node
Containers:
calico-node:
Container ID: docker://946003e835a8fb1f5801e6255a4b7ec1dd70ece87d0cb0a62b24756f9d286cec
Image: dockerhub.qingcloud.com/calico/node:v3.1.3
Image ID: docker-pullable://dockerhub.qingcloud.com/calico/node@sha256:9871f4dde9eab9fd804b12f3114da36505ff5c220e2323b7434eec24e3b23ac5
Port:
var-run-calico:
Type: HostPath (bare host directory volume)
Path: /var/run/calico
HostPathType:
var-lib-calico:
Type: HostPath (bare host directory volume)
Path: /var/lib/calico
HostPathType:
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
cni-net-dir:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
etcd-certs:
Type: HostPath (bare host directory volume)
Path: /etc/calico/certs
HostPathType:
calico-node-token-4fpdj:
Type: Secret (a volume populated by a Secret)
SecretName: calico-node-token-4fpdj
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations:
CriticalAddonsOnly
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/network-unavailable:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Name: coredns-bbbb94784-l69p2
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node:
Controlled By: ReplicaSet/coredns-bbbb94784
Containers:
coredns:
Image: dockerhub.qingcloud.com/google_containers/coredns:1.4.0
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Environment:
Name: dns-autoscaler-89c7bbd57-wzs86
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node:
Controlled By: ReplicaSet/dns-autoscaler-89c7bbd57
Containers:
autoscaler:
Image: dockerhub.qingcloud.com/google_containers/cluster-proportional-autoscaler-amd64:1.3.0
Port:
Name: kube-apiserver-ks-allinone
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: ks-allinone/172.17.111.190
Start Time: Tue, 17 Sep 2019 23:05:09 -0400
Labels: component=kube-apiserver
tier=control-plane
Annotations: kubernetes.io/config.hash: 1a42c79c46e60744c61bb5dc9d286631
kubernetes.io/config.mirror: 1a42c79c46e60744c61bb5dc9d286631
kubernetes.io/config.seen: 2019-09-17T23:05:07.679711169-04:00
kubernetes.io/config.source: file
scheduler.alpha.kubernetes.io/critical-pod:
Status: Running
IP: 172.17.111.190
Containers:
kube-apiserver:
Container ID: docker://e594aa4368b72eaa36fc61909b5e5d03af341bcfadf4d343a9260bc14345e9d6
Image: dockerhub.qingcloud.com/google_containers/hyperkube:v1.13.5
Image ID: docker-pullable://dockerhub.qingcloud.com/google_containers/hyperkube@sha256:548b8fa78eed795509184885ad6ddc6efc1e8cae3b21729982447ca124dfe364
Port:
Name: kube-controller-manager-ks-allinone
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: ks-allinone/172.17.111.190
Start Time: Tue, 17 Sep 2019 23:05:09 -0400
Labels: component=kube-controller-manager
tier=control-plane
Annotations: kubernetes.io/config.hash: 0b2469e80be2053eb020184d4a0acf4f
kubernetes.io/config.mirror: 0b2469e80be2053eb020184d4a0acf4f
kubernetes.io/config.seen: 2019-09-17T23:05:07.679724609-04:00
kubernetes.io/config.source: file
scheduler.alpha.kubernetes.io/critical-pod:
Status: Running
IP: 172.17.111.190
Containers:
kube-controller-manager:
Container ID: docker://1f5acdd54eb2279fe16b2f08380a61517a87a338702068fa9bd12ce8bb955beb
Image: dockerhub.qingcloud.com/google_containers/hyperkube:v1.13.5
Image ID: docker-pullable://dockerhub.qingcloud.com/google_containers/hyperkube@sha256:548b8fa78eed795509184885ad6ddc6efc1e8cae3b21729982447ca124dfe364
Port:
Name: kube-proxy-htzw7
Namespace: kube-system
Priority: 2000001000
PriorityClassName: system-node-critical
Node:
Controlled By: DaemonSet/kube-proxy
Containers:
kube-proxy:
Image: dockerhub.qingcloud.com/google_containers/hyperkube:v1.13.5
Port:
kube-proxy-token-vzswn:
Type: Secret (a volume populated by a Secret)
SecretName: kube-proxy-token-vzswn
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations:
CriticalAddonsOnly
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/network-unavailable:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Name: kube-scheduler-ks-allinone
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: ks-allinone/
Start Time: Tue, 17 Sep 2019 23:05:09 -0400
Labels: component=kube-scheduler
tier=control-plane
Annotations: kubernetes.io/config.hash: 38106f5dc0568e0476cde8910ac70ecd
kubernetes.io/config.mirror: 38106f5dc0568e0476cde8910ac70ecd
kubernetes.io/config.seen: 2019-09-17T23:05:07.67972864-04:00
kubernetes.io/config.source: file
scheduler.alpha.kubernetes.io/critical-pod:
Status: Failed
Reason: Preempting
Message: Preempted in order to admit critical pod
IP:
Containers:
kube-scheduler:
Image: dockerhub.qingcloud.com/google_containers/hyperkube:v1.13.5
Port:
Name: tiller-deploy-6f9697dfd9-gblhf
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node:
Controlled By: ReplicaSet/tiller-deploy-6f9697dfd9
Containers:
tiller:
Image: dockerhub.qingcloud.com/kubernetes_helm/tiller:v2.12.3
Ports: 44134/TCP, 44135/TCP
Host Ports: 0/TCP, 0/TCP
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: kube-system
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from tiller-token-55hq4 (ro)
Volumes:
tiller-token-55hq4:
Type: Secret (a volume populated by a Secret)
SecretName: tiller-token-55hq4
Optional: false
QoS Class: BestEffort
Node-Selectors:
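The dump above is cut off before the Events section of the tiller pod, which is usually what explains a Pending state. A minimal sketch of how to pull just that part, assuming the default app=helm,name=tiller labels that Helm v2 puts on the Tiller deployment:

# scheduling events usually explain a Pending Tiller pod (e.g. "Insufficient memory")
kubectl -n kube-system describe pod -l app=helm,name=tiller | grep -A 10 Events

# or list recent events in the namespace, oldest first
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp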
What is the machine configuration? 1. You can provide a TeamViewer (TV) ID, or send the machine info to the KubeSphere email address, and we can take a look remotely. @yeyouqun
[root@ks-allinone scripts]# cat /proc/meminfo
MemTotal: 1882620 kB
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 45
model name : Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
stepping : 7
……
It is a virtual machine with 2 GB of RAM and a single CPU. The installation method is All-in-One.
[root@ks-allinone scripts]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
I already ran yum update today.
Kernel version:
Linux ks-allinone 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
TV:
1 402 707 002/ 6z7t9x
I am connected remotely via XShell.
The CPU core count is enough, but the memory is too small. Increase the memory; roughly 10 GB is needed.
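A quick way to confirm that memory is the bottleneck (a sketch; the node name ks-allinone is taken from the output above):

# host-side view of memory
free -h

# what the scheduler sees on the node: allocatable vs. already-requested resources
kubectl describe node ks-allinone | grep -A 8 "Allocated resources"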
OK, I'll try again.
Don't post the TeamViewer info here, email is enough; also, your TeamViewer version is a bit too new. If you don't have that much memory, you can follow the configuration on the official site to skip installing some modules and try out KubeSphere first.
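Roughly, that means toggling the per-component switches in the installer config before re-running it. Illustrative only: the file path and field names below are assumptions for ks-installer 2.x and may differ in your version, so check the installer's own config and docs:

# list the per-component switches (field names are assumptions, verify in your checkout)
grep -nE "enable" conf/common.yaml
# e.g. set devops_enable / sonarqube_enable / openpitrix_enable to False there,
# then re-run the install script so those components are skipped.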
Now I am hitting a different problem: fatal: [ks-allinone]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-sonarqube /etc/kubesphere/sonarqube/sonarqube-0.13.5.tgz -f /etc/kubesphere/sonarqube/custom-values-sonarqube.yaml --namespace kubesphere-devops-system", "delta": "0:00:02.446402", "end": "2019-09-19 01:02:51.264600", "msg": "non-zero return code", "rc": 1, "start": "2019-09-19 01:02:48.818198", "stderr": "Error: UPGRADE FAILED: Get https://10.233.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%!D(MISSING)ks-sonarqube%!C(MISSING)OWNER%!D(MISSING)TILLER%!C(MISSING)STATUS%!D(MISSING)DEPLOYED: dial tcp 10.233.0.1:443: connect: no route to host", "stderr_lines": ["Error: UPGRADE FAILED: Get https://10.233.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%!D(MISSING)ks-sonarqube%!C(MISSING)OWNER%!D(MISSING)TILLER%!C(MISSING)STATUS%!D(MISSING)DEPLOYED: dial tcp 10.233.0.1:443: connect: no route to host"], "stdout": "", "stdout_lines": []}
[root@ks-allinone scripts]# ping 10.233.0.1
PING 10.233.0.1 (10.233.0.1) 56(84) bytes of data.
64 bytes from 10.233.0.1: icmp_seq=1 ttl=64 time=0.154 ms
64 bytes from 10.233.0.1: icmp_seq=2 ttl=64 time=0.120 ms
1. Check whether the firewall has been turned off. 2. Check whether there are any unreachable IPs in the DNS configuration. @yeyouqun
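A sketch of both checks on CentOS 7 (firewalld is assumed to be the firewall in use):

# 1. firewall: "no route to host" towards 10.233.0.1:443 is often a firewalld/iptables reject
systemctl status firewalld
systemctl stop firewalld && systemctl disable firewalld

# 2. DNS: make sure the resolvers on the host actually answer, and CoreDNS is running
cat /etc/resolv.conf
kubectl -n kube-system get pod | grep coredns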
FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (2 retries left).
FAILED - RETRYING: OpenPitrix | Waiting for openpitrix-db (1 retries left).
fatal: [ks-allinone]: FAILED! => {"attempts": 15, "changed": true, "cmd": "/usr/local/bin/kubectl -n openpitrix-system get pod | grep openpitrix-db-deployment | awk '{print $3}'", "delta": "0:00:00.279886", "end": "2019-09-19 03:22:20.411033", "rc": 0, "start": "2019-09-19 03:22:20.131147", "stderr": "", "stderr_lines": [], "stdout": "Pending", "stdout_lines": ["Pending"]}
PLAY RECAP ** ks-allinone : ok=256 changed=83 unreachable=0 failed=1
[root@ks-allinone scripts]# kubectl -n openpitrix-system get pod
NAME                                                       READY   STATUS    RESTARTS   AGE
openpitrix-api-gateway-deployment-587cc46874-4w8nt         0/1     Pending   0          9m25s
openpitrix-app-manager-deployment-595dcd76f-w5bgg          0/1     Pending   0          9m24s
openpitrix-category-manager-deployment-7968d789d6-88b7k    0/1     Pending   0          9m24s
openpitrix-cluster-manager-deployment-6dcd96b68b-g6m2f     0/1     Pending   0          9m24s
openpitrix-db-deployment-66dbfbd7bd-7xt5f                  0/1     Pending   0          9m27s
openpitrix-etcd-deployment-54bc9bb948-2m445                0/1     Pending   0          9m27s
openpitrix-iam-service-deployment-864df9fb6f-bc49r         0/1     Pending   0          9m23s
openpitrix-job-manager-deployment-588858bcb9-svkn7         0/1     Pending   0          9m23s
openpitrix-minio-deployment-84d5f9c94b-btnj7               0/1     Pending   0          9m26s
openpitrix-repo-indexer-deployment-5f4c895b54-nsv6v        0/1     Pending   0          9m22s
openpitrix-repo-manager-deployment-84fd5b5fdf-56k7q        0/1     Pending   0          9m23s
openpitrix-runtime-manager-deployment-5fcbb6f447-p2grt     0/1     Pending   0          9m22s
openpitrix-task-manager-deployment-59578dc9d6-bx674        0/1     Pending   0          9m22s
1. Please share the logs of the pod openpitrix-db-deployment-66dbfbd7bd-7xt5f. 2. Check the memory, and free the cached memory: echo 3 >/proc/sys/vm/drop_caches
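Roughly, those two checks look like this (pod name copied from the output above):

# a Pending pod has no container logs yet, so describe/events are usually more telling
kubectl -n openpitrix-system describe pod openpitrix-db-deployment-66dbfbd7bd-7xt5f
kubectl -n openpitrix-system logs openpitrix-db-deployment-66dbfbd7bd-7xt5f

# check memory and drop the page cache (only discards clean cache pages)
free -h
sync && echo 3 > /proc/sys/vm/drop_caches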
FAILED - RETRYING: docker login (1 retries left). fatal: [ks-allinone]: FAILED! => {"attempts": 5, "changed": true, "cmd": "docker login -u guest -p guest dockerhub.qingcloud.com", "delta": "0:00:00.156557", "end": "2019-09-19 03:42:28.109058", "msg": "non-zero return code", "rc": 1, "start": "2019-09-19 03:42:27.952501", "stderr": "WARNING! Using --password via the CLI is insecure. Use --password-stdin.\nError response from daemon: Get https://dockerhub.qingcloud.com/v2/: dial tcp: lookup dockerhub.qingcloud.com on [::1]:53: read udp [::1]:48726->[::1]:53: read: connection refused", "stderr_lines": ["WARNING! Using --password via the CLI is insecure. Use --password-stdin.", "Error response from daemon: Get https://dockerhub.qingcloud.com/v2/: dial tcp: lookup dockerhub.qingcloud.com on [::1]:53: read udp [::1]:48726->[::1]:53: read: connection refused"], "stdout": "", "stdout_lines": []}
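An aside on this docker login failure: the "lookup dockerhub.qingcloud.com on [::1]:53: connection refused" part means the host has no usable nameserver, so Docker falls back to localhost and cannot resolve the registry. A sketch of the check and fix (8.8.8.8 is just an example resolver):

cat /etc/resolv.conf                              # likely empty or pointing at a dead resolver
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
nslookup dockerhub.qingcloud.com                  # should resolve now
docker login -u guest -p guest dockerhub.qingcloud.com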
TASK [bootstrap-os : Check python-pip package] **
Thursday 19 September 2019 03:46:29 -0400 (0:00:02.104) 0:00:11.131 ****
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: need more than 1 value to unpack
fatal: [ks-allinone]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1568879189.63-155626075350872/AnsiballZ_yum.py\", line 113, in
Please send the TeamViewer 12 info to the official KubeSphere email address, and we'll connect and take a look.
Is the official email address kubesphere@kubesphere.io?
kubesphere@yunify.com @yeyouqun
I don't see it in the configuration: if TLS is enabled for Helm, how do I add the Helm certs?
https://docs.azure.cn/zh-cn/aks/ingress-own-tls (see the section on creating an ingress route); not sure whether it helps you. @newyue588cc
https://github.com/helm/charts/blob/master/stable/magic-namespace/templates/tiller-deployment.yaml @newyue588cc This is the official way to add it.
I can create the Secret. As I understand it, the ks-installer installation runs its various Ansible tasks inside a Job, so I can create a Secret, give it a name, and reference it in deploy/kubesphere-yaml. But when we run helm we also need to add the --tls flag, otherwise it hangs. That probably means the ks-installer image has to be rebuilt.
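For reference, what the cert Secret and a TLS-enabled helm call could look like. This is a sketch: the secret name helm-client-certs, the namespace, the mount path /certs, and the file names are assumptions; only the --tls* flags themselves are standard Helm v2 options:

# put the Helm client certs into a Secret that the installer Job could mount
kubectl -n kubesphere-system create secret generic helm-client-certs \
  --from-file=ca.pem --from-file=cert.pem --from-file=key.pem

# every helm call inside the Job then needs the Helm v2 TLS flags, e.g.:
helm upgrade --install ks-sonarqube /etc/kubesphere/sonarqube/sonarqube-0.13.5.tgz \
  -f /etc/kubesphere/sonarqube/custom-values-sonarqube.yaml \
  --namespace kubesphere-devops-system \
  --tls --tls-ca-cert /certs/ca.pem --tls-cert /certs/cert.pem --tls-key /certs/key.pem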
I ran into this problem during installation: Wednesday 18 September 2019 01:24:28 -0400 (0:00:00.639) 0:02:32.879 *** fatal: [ks-allinone]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-sonarqube /etc/kubesphere/sonarqube/sonarqube-0.13.5.tgz -f /etc/kubesphere/sonarqube/custom-values-sonarqube.yaml --namespace kubesphere-devops-system", "delta": "0:00:00.203654", "end": "2019-09-18 01:24:29.367462", "msg": "non-zero return code", "rc": 1, "start": "2019-09-18 01:24:29.163808", "stderr": "Error: could not find a ready tiller pod", "stderr_lines": ["Error: could not find a ready tiller pod"], "stdout": "", "stdout_lines": []}
PLAY RECAP ** ks-allinone : ok=172 changed=7 unreachable=0 failed=1
Originally posted by @yeyouqun in https://github.com/kubesphere/ks-installer/issues/23#issuecomment-532538305