Sealos is a production-ready Kubernetes distribution. You can run any Docker image on Sealos, start high-availability databases such as MySQL, PostgreSQL, Redis, and MongoDB, and develop applications in any programming language.
Log tail from the first failed attempt (07:48-07:53); the OpenEBS chart NOTES that precede it are truncated:
Release "reflector" does not exist. Installing it now.
NAME: reflector
LAST DEPLOYED: Tue Aug 1 07:48:00 2023
NAMESPACE: reflector-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Reflector can now be used to perform automatic copy actions on secrets and configmaps.
Release "zot" does not exist. Installing it now.
Error: failed pre-install: 1 error occurred:
timed out waiting for the condition
2023-08-01T07:53:02 error Applied to cluster error: exit status 1
Error: exit status 1
What is the expected behavior?
The sealos run should complete successfully, with all bundled images (including the zot chart) installed.
What do you see instead?
[root@vultr ~]# sealos gen labring/kubernetes:v1.25.6\
Applying /etc/sysctl.conf ...
Image is up to date for sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
INFO [2023-08-01 08:09:27] >> init kubelet success
INFO [2023-08-01 08:09:27] >> init rootfs success
2023-08-01T08:09:27 info Executing pipeline Init in CreateProcessor.
2023-08-01T08:09:27 info start to copy kubeadm config to master0
2023-08-01T08:09:27 info start to generate cert and kubeConfig...
2023-08-01T08:09:27 info start to generator cert and copy to masters...
2023-08-01T08:09:27 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost vultr.guest:vultr.guest] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 216.238.107.201:216.238.107.201]}
2023-08-01T08:09:27 info Etcd altnames : {map[localhost:localhost vultr.guest:vultr.guest] map[127.0.0.1:127.0.0.1 216.238.107.201:216.238.107.201 ::1:::1]}, commonName : vultr.guest
2023-08-01T08:09:29 info start to copy etc pki files to masters
2023-08-01T08:09:29 info start to copy etc pki files to masters
2023-08-01T08:09:29 info start to create kubeconfig...
2023-08-01T08:09:30 info start to copy kubeconfig files to masters
2023-08-01T08:09:30 info start to copy static files to masters
2023-08-01T08:09:30 info start to init master0...
2023-08-01T08:09:30 info domain apiserver.cluster.local:216.238.107.201 append success
W0801 08:09:30.398071 28671 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
W0801 08:09:30.398167 28671 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
[init] Using Kubernetes version: v1.25.6
[preflight] Running pre-flight checks
[WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0801 08:09:43.678367 28671 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://216.238.107.201:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0801 08:09:43.945906 28671 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://216.238.107.201:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.506237 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vultr.guest as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:
  kubeadm join apiserver.cluster.local:6443 --token \
    --discovery-token-ca-cert-hash sha256:e914b8a1ffd9b97e746b4879a6f7c18ddb6f6684b3e597a32e3151b38dda7fc0 \
    --control-plane --certificate-key
Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join apiserver.cluster.local:6443 --token \
    --discovery-token-ca-cert-hash sha256:e914b8a1ffd9b97e746b4879a6f7c18ddb6f6684b3e597a32e3151b38dda7fc0
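The token and certificate key are blank in the pasted output; on the control-plane node a fresh worker join command can be regenerated with plain kubeadm (not sealos-specific):
  kubeadm token create --print-join-command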
2023-08-01T08:09:52 info Executing pipeline Join in CreateProcessor.
2023-08-01T08:09:52 info start to get kubernetes token...
2023-08-01T08:09:52 info fetch certSANs from kubeadm configmap
2023-08-01T08:09:52 info Executing pipeline RunGuest in CreateProcessor.
2023-08-01T08:09:52 info transfers files success
Release "calico" does not exist. Installing it now.
NAME: calico
LAST DEPLOYED: Tue Aug 1 08:09:55 2023
NAMESPACE: tigera-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
Release "openebs" does not exist. Installing it now.
NAME: openebs
LAST DEPLOYED: Tue Aug 1 08:09:58 2023
NAMESPACE: openebs
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Successfully installed OpenEBS.
Check the status by running: kubectl get pods -n openebs
The default values will install NDM and enable OpenEBS hostpath and device
storage engines along with their default StorageClasses. Use kubectl get sc
to see the list of installed OpenEBS StorageClasses.
Note: If you are upgrading from the older helm chart that was using cStor
and Jiva (non-csi) volumes, you will have to run the following command to include
the older provisioners:
  helm upgrade openebs openebs/openebs \
    --namespace openebs \
    --set legacy.enabled=true \
    --reuse-values
For other engines, you will need to perform a few more additional steps to
enable the engine, configure the engines (e.g. creating pools) and create
StorageClasses.
For example, cStor can be enabled using commands like:
  helm upgrade openebs openebs/openebs \
    --namespace openebs \
    --set cstor.enabled=true \
    --reuse-values
For more information, connect with an active community on Kubernetes slack #openebs channel.
Release "reflector" does not exist. Installing it now.
NAME: reflector
LAST DEPLOYED: Tue Aug 1 08:09:59 2023
NAMESPACE: reflector-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Reflector can now be used to perform automatic copy actions on secrets and configmaps.
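As the Reflector NOTES above suggest, copies are driven by annotations on the source object. A minimal sketch, assuming the upstream emberstack kubernetes-reflector annotation names and a hypothetical Secret named my-tls in the default namespace:
  kubectl annotate secret my-tls -n default \
    reflector.v1.k8s.emberstack.com/reflection-allowed="true" \
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled="true" \
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces="staging,prod"
Namespaces listed in reflection-allowed-namespaces then receive an automatically maintained copy of the Secret.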
Release "zot" does not exist. Installing it now.
Error: failed pre-install: 1 error occurred:
timed out waiting for the condition
2023-08-01T08:15:01 error Applied to cluster error: exit status 1
Error: exit status 1
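The zot failure above is Helm's generic pre-install hook timeout, so the root cause is not visible in the sealos output itself. A sketch of follow-up commands that would normally surface it (the zot namespace and hook pod name are assumptions; adjust to wherever the chart installs):
  helm list -A -a | grep zot
  kubectl get pods -A | grep -i zot
  kubectl get events -A --sort-by=.lastTimestamp | tail -n 30
  kubectl describe pod <zot-hook-pod> -n <zot-namespace>
On a fresh single-node cluster the usual culprits for a hook timeout are images that never finish pulling or a PersistentVolumeClaim stuck waiting for its StorageClass.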
Sealos Version
sealos.x86_64 0:4.3.0-1
How to reproduce the bug?
On CentOS 7:
systemctl stop firewalld
systemctl disable firewalld
cat > /etc/yum.repos.d/labring.repo << EOF
[fury]
name=labring Yum Repo
baseurl=https://yum.fury.io/labring/
enabled=1
gpgcheck=0
EOF
yum clean all
yum makecache all
yum install sealos -y
./init.sh  # https://github.com/labring/sealos/blob/main/deploy/cloud/init.sh
sealos gen labring/kubernetes:v1.25.6 \
  labring/helm:v3.12.0 \
  labring/calico:v3.24.1 \
  labring/cert-manager:v1.8.0 \
  labring/openebs:v3.4.0 \
  labring/kubernetes-reflector:v7.0.151 \
  labring/zot:v1.4.3 \
  labring/kubeblocks:v0.5.3 \
  --env policy=anonymousPolicy \
  --masters 45.77.121.44 > Clusterfile
The error reported (output truncated): For more information,
2023-08-01T07:53:02 error Applied to cluster error: exit status 1
Error: exit status 1
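For context, sealos gen only writes the Clusterfile; in the standard sealos 4.x workflow (and presumably inside init.sh here) the cluster is then created with something like:
  sealos apply -f Clusterfile
so the "Applied to cluster" errors above come from that apply step rather than from gen itself.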
Release "calico" does not exist. Installing it now.
NAME: calico
LAST DEPLOYED: Tue Aug 1 08:09:55 2023
NAMESPACE: tigera-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
Release "openebs" does not exist. Installing it now.
NAME: openebs
LAST DEPLOYED: Tue Aug 1 08:09:58 2023
NAMESPACE: openebs
STATUS: deployed
REVISION: 1
TEST SUITE: None
Operating environment
CentOS 7 on a Vultr instance (hostname vultr.guest), sealos 4.3.0, Kubernetes v1.25.6.
Additional information
None.