Open · fanux opened this issue 5 years ago
@jones-gao After installation, only one node came up (sealos version 3.3.2):
[root@lt06c01m011 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
lt06c01m011   Ready    master   7m50s   v1.18.0
The install command was:
sealos init --passwd Cisco@123 \
  --master 172.32.101.11 --master 172.32.101.12 --master 172.32.101.13 \
  --node 172.32.101.15 \
  --pkg-url http://10.138.1.238/kube1.18.0.tar.gz \
  --version v1.18.0 >> '/tmp/sealos-install.log3'
/invite jones-gao
@jones-gao Please join QQ group 98488045 to discuss in detail, or add WeChat sealnux and I'll pull you into the WeChat group.
Important notice: the 1.18.1 and 1.18.0 packages have a community bug and are not recommended for production.
1.15.11, 1.16.8, 1.17.4, and 1.18.1 have all had the lvscare bug fixed and have been repackaged with sealos v3.3.3.
Anyone here? Does the 69 yuan tier let me download any package?
I've added sealnux on WeChat; please pull me into the group.
I bought a membership but can't get into any of the groups.
My install fails with this error: command result is: bash: kubeadm: command not found
@kingflyok My install fails with this error: command result is: bash: kubeadm: command not found
That looks like the offline package failed to extract. /invite kingflyok
@kingflyok Please join QQ group 98488045 to discuss in detail, or add WeChat sealnux and I'll pull you into the WeChat group.
What does it mean when sealos clean does nothing?
@timyl What does it mean when sealos clean does nothing?
The IPs in .sealos/config.yaml have probably already been cleared; you can clean the nodes manually by passing their IPs with --node.
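If the saved node list is gone, the clean target can be given explicitly. A hedged sketch, assuming the sealos v3 clean command accepts a --node flag (the IP is a placeholder):

```shell
# Clean a specific worker by IP when .sealos/config.yaml no longer
# lists it (172.32.101.15 is a placeholder).
sealos clean --node 172.32.101.15
```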
The offline package costs money? Is this still an open-source project?
@F-liuhui The offline package costs money? Is this still an open-source project?
Open source and charging are not in conflict: 100% open source, 100% paid.
I just wanted to test it and found I had to pay.
@llussy I just wanted to test it and found I had to pay.
Use 1.18.0, it's free.
sealos version v3.3.6, build e99f236 go1.13.5; Kubernetes 1.17.5. Running sealos join --node fails to add the node, with:
[DEBG] [sealos.go:84] [globals]decodeOutput: W0517 18:25:19.747923 14102 validation.go:28] Cannot validate kube-proxy config - no validator is available
What could be the cause?
[init.go:62] your pkg-url is empty,please check your command is ok
@wl198972 [init.go:62] your pkg-url is empty,please check your command is ok
The --pkg-url argument is missing.
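For reference, a hedged sketch of a minimal sealos init with the missing flag supplied (the password, IPs, and package path are placeholders):

```shell
sealos init --passwd YourPassword \
  --master 192.168.0.2 \
  --node 192.168.0.3 \
  --pkg-url /root/kube1.18.0.tar.gz \
  --version v1.18.0
```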
I bought the 1.18.4 offline package and used sealos to install a cluster with 3 masters and 3 worker nodes. After the install succeeded, kubectl on the 2nd and 3rd masters cannot list the nodes:
$ kubectl get nodes
The connection to the server apiserver.cluster.local:6443 was refused - did you specify the right host or port?
On the 1st master, kubectl only shows that node itself.
@wanggancheng I bought the 1.18.4 offline package and used sealos to install a cluster with 3 masters and 3 worker nodes. After the install succeeded, kubectl on the 2nd and 3rd masters cannot list the nodes:
$ kubectl get nodes
The connection to the server apiserver.cluster.local:6443 was refused - did you specify the right host or port?
On the 1st master, kubectl only shows that node itself.
cp /etc/kubernetes/admin.conf ~/.kube/config
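A slightly fuller sketch of that fix on masters 2 and 3, following the standard kubeadm convention (paths are the kubeadm defaults):

```shell
mkdir -p "$HOME/.kube"
cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
# For a non-root user, also take ownership of the file:
# chown "$(id -u):$(id -g)" "$HOME/.kube/config"
kubectl get nodes   # should now reach the apiserver
```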
I bought the 1.18.4 offline package; the three masters installed fine, but the worker nodes fail to join with: IsVirtualServerAvailable warn: virtual server is empty.
@ethanzhang911 I bought the 1.18.4 offline package; the three masters installed fine, but the worker nodes fail to join with: IsVirtualServerAvailable warn: virtual server is empty.
/invite ethanzhang911
- What do I do when different servers have different passwords?
- Is root the only login user supported?
Use SSH keys; non-root users are not supported.
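For hosts with differing passwords, key-based SSH is the usual workaround. A hedged sketch, assuming the sealos v3 --pk flag takes a private-key path (IPs and paths are placeholders):

```shell
# First push one public key to every host, e.g.:
#   ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.2
sealos init --pk /root/.ssh/id_rsa \
  --master 192.168.0.2 --node 192.168.0.3 \
  --pkg-url /root/kube1.18.0.tar.gz --version v1.18.0
```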
Can the master and a worker be the same VM, i.e. a single-node deployment? That would suit small companies.
Just don't pass any --node; a master is essentially a worker too.
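One caveat: kubeadm taints masters NoSchedule by default, so on a single-node cluster you may also need to remove that taint before ordinary pods can schedule there. This is plain kubectl, not sealos-specific (the taint key below is the pre-1.24 master key):

```shell
# Remove the master NoSchedule taint from all nodes.
kubectl taint nodes --all node-role.kubernetes.io/master-
```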
Installing 1.18.5 with three master nodes: on one of them, for some reason, etcd was not installed, so the apiserver could not start.
I removed the node without etcd and re-joined it, but it hangs at:
18:26:50 [INFO] [ssh.go:50] [192.168.1.12:22] W0705 18:26:50.143474 13122 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
18:26:50 [INFO] [ssh.go:50] [192.168.1.12:22] [control-plane] Creating static Pod manifest for "kube-controller-manager"
18:26:50 [INFO] [ssh.go:50] [192.168.1.12:22] [control-plane] Creating static Pod manifest for "kube-scheduler"
18:26:50 [INFO] [ssh.go:50] [192.168.1.12:22] [check-etcd] Checking that the etcd cluster is healthy
Please file this issue under github.com/fanux/sealos; your etcd cluster is probably already unhealthy.
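One way to check etcd health on a master, assuming etcdctl v3 is available and the kubeadm default certificate paths are in use (a sketch, not sealos-specific):

```shell
ETCDCTL_API=3 etcdctl \
  --endpoints https://127.0.0.1:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  endpoint health
```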
Does it support installing ceph?
Not yet; please use rook.
Do the offline packages support deployment on ARM?
Is withdrawal not working?
Do the offline packages support deployment on ARM?
Not at the moment.
Is withdrawal not working?
Transferred to your Alipay.
I bought the annual package; why does downloading still cost money?
Yours should be working now.
I deployed v1.18.8. Is there a way to change the Docker storage driver to overlay2 without reinstalling Kubernetes? Thanks.
dockerd -s overlay2
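Note that dockerd -s overlay2 only applies to that one daemon run; the persistent equivalent is to set the driver in /etc/docker/daemon.json and restart Docker. Be aware that switching storage drivers hides existing images and containers from Docker, so plan to re-pull. A hedged sketch (this overwrites daemon.json; merge by hand if you already have other settings):

```shell
# Persist the storage driver, then restart the daemon (run as root).
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
systemctl restart docker
docker info --format '{{.Driver}}'   # expect: overlay2
```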
19:50:44 [DEBG] [scp.go:27] [ssh]host: 10.0.10.56:22 , remote md5: 8d34a00130182f81e33d6e62c30cef3a
19:50:44 [INFO] [scp.go:31] [ssh]md5 validate true
19:50:44 [INFO] [download.go:38] [10.0.10.56:22]copy file md5 validate success
19:50:44 [INFO] [ssh.go:57] [ssh][10.0.10.56:22] grep -qF '10.0.10.56 apiserver.cluster.local' /etc/hosts || echo %!s(MISSING) %!s(MISSING) >> /etc/hosts
19:50:44 [INFO] [ssh.go:50] [10.0.10.56:22] bash: -c: line 0: syntax error near unexpected token `('
19:50:44 [INFO] [ssh.go:12] [ssh][10.0.10.56:22] kubeadm init --config=/root/kubeadm-config.yaml --upload-certs -v 0
19:54:48 [DEBG] [ssh.go:24] [ssh][10.0.10.56:22]command result is:
W0828 19:50:45.531816 4092 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0828 19:50:45.540324 4092 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0828 19:50:48.904455 4092 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0828 19:50:48.907655 4092 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
19:54:48 [EROR] [ssh.go:27] [ssh][10.0.10.56:22]Error exec command failed: Process exited with status 1
19:54:48 [EROR] [init.go:134] [10.0.10.56:22]kubernetes install is error.please clean and uninstall.
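The echo %!s(MISSING) line in the log above suggests sealos failed to render the /etc/hosts entry, so apiserver.cluster.local may never have been written on that host. A sketch of the idempotent append the log appears to intend, run against a stand-in file so it can be tried anywhere (on a real node you would target /etc/hosts as root):

```shell
# Stand-in for /etc/hosts; on a real node use HOSTS=/etc/hosts.
HOSTS=$(mktemp)
ENTRY="10.0.10.56 apiserver.cluster.local"
# Append only if the entry is not already present (idempotent).
grep -qF "$ENTRY" "$HOSTS" || echo "$ENTRY" >> "$HOSTS"
grep -qF "$ENTRY" "$HOSTS" || echo "$ENTRY" >> "$HOSTS"  # no-op on the second run
cat "$HOSTS"
```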
I bought the annual package; why does downloading still return the user disable message?
UserDisable
/invite 119275885
Is the bot down? Join the QQ group instead.
Can you provide a download link for the 1.13 version?
Does the paid offline package include the Dashboard UI?
It doesn't; install kuboard yourself.
http://store.lameleg.com