koderover / zadig

Zadig is a cloud native, distributed, developer-oriented DevOps platform
https://koderover.com

[Question] Both the All in One install and the install on an existing Kubernetes cluster fail #1413

Closed: whtiehack closed this issue 2 years ago

whtiehack commented 2 years ago

General Question

Environment

https://labs.play-with-k8s.com/

All in One

[node4 ~]$ curl -SsL https://github.com/koderover/zadig/releases/latest/download/all_in_one_install_quickstart.sh | bash
 _____          _ _       
/ _  / __ _  __| (_) __ _ 
\// / / _` |/ _` | |/ _` |
 / //\ (_| | (_| | | (_| |
/____/\__,_|\__,_|_|\__, |
                    |___/
 Welcome to the Koderover Installer
[INFO] Checking system for requirements...
bash: line 198: dig: command not found
cat: /etc/fstab: No such file or directory
cat: /etc/sysctl.conf: No such file or directory
Failed to -q is-active.service: Unit is-active.service not found.
 preflight check completed in 1 seconds
 install preparation completed in 1 seconds

* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
sysctl: permission denied on key 'kernel.yama.ptrace_scope'
* Applying /usr/lib/sysctl.d/50-default.conf ...
sysctl: permission denied on key 'kernel.sysrq'
sysctl: permission denied on key 'kernel.core_uses_pid'
sysctl: permission denied on key 'kernel.kptr_restrict'
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
sysctl: permission denied on key 'fs.protected_hardlinks'
sysctl: permission denied on key 'fs.protected_symlinks'
* Applying /etc/sysctl.d/k8s.conf ...
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
* Applying /etc/sysctl.conf ...
sysctl: cannot open "/etc/sysctl.conf": No such file or directory
[INFO] Downloading kubernetes binary.....
######################################################################## 100.0%
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
sysctl: permission denied on key 'kernel.yama.ptrace_scope'
* Applying /usr/lib/sysctl.d/50-default.conf ...
sysctl: permission denied on key 'kernel.sysrq'
sysctl: permission denied on key 'kernel.core_uses_pid'
sysctl: permission denied on key 'kernel.kptr_restrict'
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
sysctl: permission denied on key 'fs.protected_hardlinks'
sysctl: permission denied on key 'fs.protected_symlinks'
* Applying /etc/sysctl.d/k8s.conf ...
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
* Applying /etc/sysctl.conf ...
sysctl: cannot open "/etc/sysctl.conf": No such file or directory
[INFO] Installing kubelet, kubeadm, kubectl and cni host packages
 Kubernetes host packages already installed
79d541cda6cb: Loading layer [==================================================>]  3.041MB/3.041MB
c12e92a17b61: Loading layer [==================================================>]  1.734MB/1.734MB
f9b944e24088: Loading layer [==================================================>]  107.3MB/107.3MB
Loaded image: k8s.gcr.io/kube-controller-manager:v1.19.3
91e3a07063b3: Loading layer [==================================================>]  53.89MB/53.89MB
b4e54f331697: Loading layer [==================================================>]  21.78MB/21.78MB
b9b82a97c787: Loading layer [==================================================>]  5.168MB/5.168MB
1b55846906e8: Loading layer [==================================================>]  4.608kB/4.608kB
061bfb5cb861: Loading layer [==================================================>]  8.192kB/8.192kB
78dd6c0504a7: Loading layer [==================================================>]  8.704kB/8.704kB
f1b0b899d419: Loading layer [==================================================>]  38.81MB/38.81MB
Loaded image: k8s.gcr.io/kube-proxy:v1.19.3
ba0dae6243cc: Loading layer [==================================================>]  684.5kB/684.5kB
Loaded image: k8s.gcr.io/pause:3.2
225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB
96d17b0b58a7: Loading layer [==================================================>]  45.02MB/45.02MB
Loaded image: k8s.gcr.io/coredns:1.7.0
94bd98e8a8a9: Loading layer [==================================================>]  42.13MB/42.13MB
Loaded image: k8s.gcr.io/kube-scheduler:v1.19.3
4b5c08158bcb: Loading layer [==================================================>]  115.3MB/115.3MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.19.3
d72a74c56330: Loading layer [==================================================>]  3.031MB/3.031MB
d61c79b29299: Loading layer [==================================================>]   2.13MB/2.13MB
1a4e46412eb0: Loading layer [==================================================>]  225.3MB/225.3MB
bfa5849f3d09: Loading layer [==================================================>]   2.19MB/2.19MB
bb63b9467928: Loading layer [==================================================>]  21.98MB/21.98MB
Loaded image: k8s.gcr.io/etcd:3.4.13-0

Initializing machine ID from random generator.
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-210-generic
DOCKER_VERSION: 20.10.1
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
        [WARNING KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster. Kubelet version: "1.20.1" Control plane version: "1.19.3"
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node4] and IPs [10.96.0.1 172.18.0.30]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node4] and IPs [172.18.0.30 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node4] and IPs [172.18.0.30 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.003357 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node4 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node node4 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ma2dqw.1x2heju7ea3tmwqp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.18.0.30:6443 --token ma2dqw.1x2heju7ea3tmwqp \
    --discovery-token-ca-cert-hash sha256:1ee3c863a76600a9af662651b13d854f1cd62e767f12cc48c03df85a5706025d 
Waiting for api server to startup
Warning: resource daemonsets/kube-proxy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
daemonset.apps/kube-proxy configured
No resources found
bash: line 530: sudo: command not found
bash: line 531: sudo: command not found
node/node4 untainted
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
 infrastructure installation completed in 3 minute(s) and 57 second(s)
bash: line 758: openssl: command not found
NO ENCRYPTION KEY PROVIDED, ZADIG HAS GENERATED AN ENCRYPTION KEY

THIS KEY WILL BE USED FOR POSSIBLE FUTURE REINSTALLATION, PLEASE SAVE THIS KEY CAREFULLY
installing helm client...
succeed to install helm client: version.BuildInfo{Version:"v3.6.1", GitCommit:"61d8e8c4a6f95540c15c6a65f36a6dd0a45e7a2f", GitTreeState:"clean", GoVersion:"go1.16.5"}
installing zadig ...
"koderover-chart" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "koderover-chart" chart repository
Update Complete. ⎈Happy Helming!⎈
Release "zadig-zadig" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: unknown object type "nil" in ConfigMap.data.POETRY_API_ROOT_KEY
 *****************************************
 *    Koderover installer exit report    *
 *****************************************
 ✔ ROOT PRIVILEGE CHECK SUCCESS
 ✔ SYSTEM CHECK SUCCESS
 ✔ DEPENDENCY INSTALLATION SUCCESS
 ✔ KUBERNETES CLUSTER INITIALIZATION SUCCESS
[ERROR] ⚙ ZADIG INSTALLATION FAILED
 *****************************************
 *            END OF REPORT              *
 *****************************************
[node4 ~]$ kubectl -n zadig get po
No resources found in zadig namespace.
[node4 ~]$ kubectl get pod
No resources found in default namespace.
[node4 ~]$ kubectl get node
NAME    STATUS   ROLES                  AGE    VERSION
node4   Ready    control-plane,master   106s   v1.20.1
[node4 ~]$ 
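
Note: the helm failure above ('unknown object type "nil" in ConfigMap.data.POETRY_API_ROOT_KEY') appears to follow directly from 'bash: line 758: openssl: command not found': without openssl the script cannot generate an encryption key, so the chart value is rendered as nil. A minimal retry sketch, assuming the node allows installing packages (the package-manager commands are assumptions; a play-with-k8s node may not permit them):

  # Sketch: make sure openssl exists before re-running the installer, so the
  # generated encryption key is non-empty and POETRY_API_ROOT_KEY is not nil.
  if ! command -v openssl >/dev/null 2>&1; then
    # Package manager depends on the base image; these alternatives are assumptions.
    (command -v yum >/dev/null && yum install -y openssl) || \
    (command -v apk >/dev/null && apk add --no-cache openssl) || \
    (command -v apt-get >/dev/null && apt-get install -y openssl)
  fi
  curl -SsL https://github.com/koderover/zadig/releases/latest/download/all_in_one_install_quickstart.sh | bash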

Installation on an existing Kubernetes cluster

[node4 ~]$ 
[node4 ~]$ # Quick start:
[node4 ~]$ curl -LO https://github.com/koderover/zadig/releases/download/v1.11.0/install_quickstart.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   660  100   660    0     0   1773      0 --:--:-- --:--:-- --:--:--  1778
100 12030  100 12030    0     0  24275      0 --:--:-- --:--:-- --:--:-- 24275
[node4 ~]$ chmod +x ./install_quickstart.sh
[node4 ~]$ ./install_quickstart.sh 
Either IP+PORT or DOMAIN shoule be provided
[node4 ~]$ export DOMAIN=ip172-18-0-30-c9jl68lrie6000d7skn0.direct.labs.play-with-k8s.com
[node4 ~]$ ./install_quickstart.sh 
./install_quickstart.sh: line 178: openssl: command not found
NO ENCRYPTION KEY PROVIDED, ZADIG HAS GENERATED AN ENCRYPTION KEY

THIS KEY WILL BE USED FOR POSSIBLE FUTURE REINSTALLATION, PLEASE SAVE THIS KEY CAREFULLY
installing helm client...
succeed to install helm client: version.BuildInfo{Version:"v3.6.1", GitCommit:"61d8e8c4a6f95540c15c6a65f36a6dd0a45e7a2f", GitTreeState:"clean", GoVersion:"go1.16.5"}
installing zadig ...
"koderover-chart" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "koderover-chart" chart repository
Update Complete. ⎈Happy Helming!⎈
Release "zadig-zadig" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: unknown object type "nil" in ConfigMap.data.POETRY_API_ROOT_KEY
[node4 ~]$ 
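
Note: this is the same root cause as the All in One run: install_quickstart.sh (line 178) also needs openssl to generate the encryption key, so the chart again renders POETRY_API_ROOT_KEY as nil. A retry sketch once openssl is available (see the sketch above), with DOMAIN exported as before:

  # Retry of the quickstart path after openssl has been installed on the node.
  command -v openssl >/dev/null 2>&1 || echo "install openssl first (see sketch above)"
  export DOMAIN=ip172-18-0-30-c9jl68lrie6000d7skn0.direct.labs.play-with-k8s.com
  ./install_quickstart.sh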
tonyjia87 commented 2 years ago

I am having the same problem; it has blocked me from trying it out.

whtiehack commented 2 years ago

The environment provided (play-with-k8s) is probably the problem, so I just closed this.
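
Note for anyone hitting this on a similar sandbox: the preflight output above already hints at the problem, with dig, sudo and openssl missing and several sysctl keys not writable. A quick pre-check sketch (the tool list simply mirrors the "command not found" failures in the logs; adjust as needed):

  # Environment pre-check before running the installer on a constrained host.
  for tool in dig openssl sudo systemctl; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
  done
  [ -f /etc/sysctl.conf ] || echo "missing: /etc/sysctl.conf"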