Sealos is a production-ready Kubernetes distribution. You can run any Docker image on Sealos, start high-availability databases such as MySQL, PostgreSQL, Redis, and MongoDB, and develop applications in any programming language.
BUG: error when installing k8s on CentOS 7.9: resource mapping not found for name: "tigera-operator" namespace: "" from "manifests/tigera-operator.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1" #1775
Description of the bug
During installation, the following error is reported: resource mapping not found for name: "tigera-operator" namespace: "" from "manifests/tigera-operator.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
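This is expected behavior on Kubernetes v1.25: the PodSecurityPolicy API (policy/v1beta1) was removed in v1.25, and Calico v3.22.1's tigera-operator.yaml still contains a PodSecurityPolicy object, so kubectl has no API to map it to. A quick way to confirm on the affected cluster (plain kubectl, nothing sealos-specific assumed):

kubectl api-versions | grep policy
# on v1.25 this prints only "policy/v1"; policy/v1beta1 (PodSecurityPolicy) is no longer served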
How to reproduce (pictures can be attached if necessary)
command:
sealos run labring/kubernetes:v1.25.0 labring/calico:v3.22.1 --single
log:
2022-09-21T08:43:08 info Start to create a new cluster: master [10.0.2.15], worker []
2022-09-21T08:43:08 info Executing pipeline Check in CreateProcessor.
2022-09-21T08:43:08 info checker:hostname [10.0.2.15:22]
2022-09-21T08:43:08 info checker:timeSync [10.0.2.15:22]
2022-09-21T08:43:08 info Executing pipeline PreProcess in CreateProcessor.
60d759ef12b047369834a3f89757a41699a27dc92b785772e6b64d03d8f38d5b
29516dc98b4b2d4fc899c9b27dfd004e75ee73ef9071ad6835f9ff97f156f58c
default-1wsi4pnp
default-n4wu3sse
2022-09-21T08:43:08 info Executing pipeline RunConfig in CreateProcessor.
2022-09-21T08:43:08 info Executing pipeline MountRootfs in CreateProcessor.
which: no docker in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
INFO [2022-09-21 08:43:53] >> check root,port,cri success
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
INFO [2022-09-21 08:43:59] >> Health check containerd!
INFO [2022-09-21 08:44:00] >> containerd is running
INFO [2022-09-21 08:44:00] >> init containerd success
Created symlink from /etc/systemd/system/multi-user.target.wants/image-cri-shim.service to /etc/systemd/system/image-cri-shim.service.
INFO [2022-09-21 08:44:00] >> Health check image-cri-shim!
INFO [2022-09-21 08:44:00] >> image-cri-shim is running
INFO [2022-09-21 08:44:00] >> init shim success
Applying /etc/sysctl.conf ...
net.ipv4.ip_forward = 1
INFO [2022-09-21 08:44:05] >> init kube success
INFO [2022-09-21 08:44:05] >> init containerd rootfs success
2022-09-21T08:44:24 info Executing pipeline Init in CreateProcessor.
2022-09-21T08:44:24 info start to copy kubeadm config to master0
2022-09-21T08:44:26 info start to generate cert and kubeConfig...
2022-09-21T08:44:26 info start to generator cert and copy to masters...
2022-09-21T08:44:26 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost xiaochao:xiaochao] map[10.0.2.15:10.0.2.15 10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1]}
2022-09-21T08:44:26 info Etcd altnames : {map[localhost:localhost xiaochao:xiaochao] map[10.0.2.15:10.0.2.15 127.0.0.1:127.0.0.1 ::1:::1]}, commonName : xiaochao
2022-09-21T08:44:29 info start to copy etc pki files to masters
2022-09-21T08:44:29 info start to create kubeconfig...
2022-09-21T08:44:30 info start to copy kubeconfig files to masters
2022-09-21T08:44:30 info start to copy static files to masters
2022-09-21T08:44:30 info start to apply registry
Created symlink from /etc/systemd/system/multi-user.target.wants/registry.service to /etc/systemd/system/registry.service.
INFO [2022-09-21 08:44:32] >> Health check registry!
INFO [2022-09-21 08:44:32] >> registry is running
INFO [2022-09-21 08:44:32] >> init registry success
2022-09-21T08:44:32 info start to init master0...
2022-09-21T08:44:32 info registry auth in node 10.0.2.15:22
2022-09-21T08:44:32 info domain sealos.hub:10.0.2.15 append success
2022-09-21T08:44:33 info domain apiserver.cluster.local:10.0.2.15 append success
W0921 08:44:33.933783 3536 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0921 08:45:08.383317 3536 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://10.0.2.15:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0921 08:45:08.615114 3536 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://10.0.2.15:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.504834 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node xiaochao as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node xiaochao as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join apiserver.cluster.local:6443 --token \
    --discovery-token-ca-cert-hash sha256:9ce671613b4c7583a69ea40e849c9a4d492b597ee724b95b902247587c80670d \
    --control-plane --certificate-key

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join apiserver.cluster.local:6443 --token \
    --discovery-token-ca-cert-hash sha256:9ce671613b4c7583a69ea40e849c9a4d492b597ee724b95b902247587c80670d

(Token and certificate-key values are omitted in the original log excerpt.)
2022-09-21T08:45:28 info Executing pipeline Join in CreateProcessor.
2022-09-21T08:45:28 info start to get kubernetes token...
2022-09-21T08:45:31 info Executing pipeline RunGuest in CreateProcessor.
2022-09-21T08:45:31 info guest cmd is kubectl apply -f manifests/tigera-operator.yaml
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
namespace/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
error: resource mapping not found for name: "tigera-operator" namespace: "" from "manifests/tigera-operator.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first
2022-09-21T08:45:35 error Applied to cluster error: exit status 1
2022-09-21T08:45:35 info
Which command or component
sealos run labring/kubernetes:v1.25.0 labring/calico:v3.22.1 --single
What you expected to happen
The installation should succeed.
Operating environment
Docker version: none
Kubernetes version: 1.25.0
Sealos version: 4.1.3 (gitCommit b2ba9705, buildDate 2022-09-06T06:04:14Z, go1.19, linux/amd64)
Operating system: CentOS 7.9
Runtime environment: virtual machine (VirtualBox, 2 GB memory, 2 CPU cores, 100 GB storage)
Cluster size: single node
Additional information: a possible workaround is sketched below.
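A possible workaround, offered as a sketch rather than an official sealos fix: either recreate the cluster with a Calico image built from a release that no longer ships a PodSecurityPolicy (Calico v3.24+ targets Kubernetes v1.25; labring/calico:v3.24.1 is the tag the sealos docs pair with labring/kubernetes:v1.25.0 — verify it exists in your registry), or filter the PSP document out of the already-shipped manifest. The yq one-liner assumes mikefarah's yq v4 is installed on the node and is run from the directory where manifests/tigera-operator.yaml resolves (the same working directory sealos used):

# option 1: tear down and recreate the single-node cluster with a v1.25-compatible Calico image
sealos reset
sealos run labring/kubernetes:v1.25.0 labring/calico:v3.24.1 --single

# option 2: keep the cluster, drop the PodSecurityPolicy document from the
# multi-document manifest, and apply the remaining objects (apply is idempotent,
# so re-applying the objects already created is harmless)
yq eval 'select(.kind != "PodSecurityPolicy")' manifests/tigera-operator.yaml | kubectl apply -f -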