dyrnq / kubeadm-vagrant

Run a Kubernetes cluster with kubeadm on Vagrant.

master1: [init] Using Kubernetes version: v1.21.3 #10

Open · dyrnq opened this issue 3 years ago

dyrnq commented 3 years ago
master1: [init] Using Kubernetes version: v1.21.3
master1: [preflight] Running pre-flight checks
master1: [preflight] Pulling images required for setting up a Kubernetes cluster
master1: [preflight] This might take a minute or two, depending on the speed of your internet connection
master1: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
master1: [certs] Using certificateDir folder "/etc/kubernetes/pki"
master1: [certs] Using existing ca certificate authority
master1: [certs] Generating "apiserver" certificate and key
master1: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 192.168.26.11 192.168.26.10]
master1: [certs] Generating "apiserver-kubelet-client" certificate and key
master1: [certs] Using existing front-proxy-ca certificate authority
master1: [certs] Generating "front-proxy-client" certificate and key
master1: [certs] Using existing etcd/ca certificate authority
master1: [certs] Generating "etcd/server" certificate and key
master1: [certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [192.168.26.11 127.0.0.1 ::1]
master1: [certs] Generating "etcd/peer" certificate and key
master1: [certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [192.168.26.11 127.0.0.1 ::1]
master1: [certs] Generating "etcd/healthcheck-client" certificate and key
master1: [certs] Generating "apiserver-etcd-client" certificate and key
master1: [certs] Generating "sa" key and public key
master1: [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
master1: [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
master1: [kubeconfig] Writing "admin.conf" kubeconfig file
master1: [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
master1: [kubeconfig] Writing "kubelet.conf" kubeconfig file
master1: [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
master1: [kubeconfig] Writing "controller-manager.conf" kubeconfig file
master1: [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
master1: [kubeconfig] Writing "scheduler.conf" kubeconfig file
master1: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
master1: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
master1: [kubelet-start] Starting the kubelet
master1: [control-plane] Using manifest folder "/etc/kubernetes/manifests"
master1: [control-plane] Creating static Pod manifest for "kube-apiserver"
master1: [control-plane] Creating static Pod manifest for "kube-controller-manager"
master1: [control-plane] Creating static Pod manifest for "kube-scheduler"
master1: [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
master1: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
master1: [apiclient] All control plane components are healthy after 17.568776 seconds
master1: [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
master1: [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
master1: [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
master1: [upload-certs] Using certificate key:
master1: 23b8272c642b5781c5ebe114f12299dbc57e40a58f11b0a794f5b0bef4f907cb
master1: [mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
master1: [mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
master1: [bootstrap-token] Using token: ayngk7.m1555duk5x2i3ctt
master1: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
master1: [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
master1: [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
master1: [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
master1: [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
master1: [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
master1: [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
master1: [addons] Applied essential addon: CoreDNS
master1: [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
master1: [addons] Applied essential addon: kube-proxy
master1: 
master1: Your Kubernetes control-plane has initialized successfully!
master1: 
master1: To start using your cluster, you need to run the following as a regular user:
master1: 
master1:   mkdir -p $HOME/.kube
master1:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
master1:   sudo chown $(id -u):$(id -g) $HOME/.kube/config
master1: 
master1: Alternatively, if you are the root user, you can run:
master1: 
master1:   export KUBECONFIG=/etc/kubernetes/admin.conf
master1: 
master1: You should now deploy a pod network to the cluster.
master1: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
master1:   https://kubernetes.io/docs/concepts/cluster-administration/addons/
master1: You can now join any number of the control-plane node running the following command on each as root:
master1: 
master1:   kubeadm join 192.168.26.10:8443 --token ayngk7.m1555duk5x2i3ctt \
master1:    --discovery-token-ca-cert-hash sha256:238f7512ea63439e9a85def60e3076c28144a64cf2258e5f3a9352efc1c69ae3 \
master1:    --control-plane --certificate-key 23b8272c642b5781c5ebe114f12299dbc57e40a58f11b0a794f5b0bef4f907cb
master1: 
master1: 
master1: Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
master1: As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
master1: "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
master1: 
master1: Then you can join any number of worker nodes by running the following on each as root:
master1: 
master1: kubeadm join 192.168.26.10:8443 --token ayngk7.m1555duk5x2i3ctt \
master1:    --discovery-token-ca-cert-hash sha256:238f7512ea63439e9a85def60e3076c28144a64cf2258e5f3a9352efc1c69ae3 
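
If the bootstrap token or the certificate key above has expired (kubeadm tokens default to a 24-hour TTL, and the output notes that the uploaded certs are deleted after two hours), fresh join credentials can be generated on master1. A minimal sketch using standard kubeadm commands (the second one is the command quoted in the output above):

    # Print a fresh worker join command (new token, same CA cert hash)
    sudo kubeadm token create --print-join-command

    # Re-upload the control-plane certificates and print a new --certificate-key
    sudo kubeadm init phase upload-certs --upload-certs

For an additional control-plane node, append --control-plane --certificate-key <new key> to the printed join command.
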
master1:  >>>   installing flannel network addon..
master1: configmap/calico-config created
master1: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
master1: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
master1: clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
master1: clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
master1: clusterrole.rbac.authorization.k8s.io/calico-node created
master1: clusterrolebinding.rbac.authorization.k8s.io/calico-node created
master1: daemonset.apps/calico-node created
master1: serviceaccount/calico-node created
master1: deployment.apps/calico-kube-controllers created
master1: serviceaccount/calico-kube-controllers created
master1: Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
master1: poddisruptionbudget.policy/calico-kube-controllers created
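
Note that although the provisioning banner above says "installing flannel network addon", the manifests actually applied are Calico's. Once the calico-node DaemonSet pods are running, the control-plane node should move from NotReady to Ready. A rough way to check, assuming the standard Calico manifest labels (k8s-app=calico-node):

    # Watch the CNI pods come up
    kubectl -n kube-system get pods -l k8s-app=calico-node -o wide

    # Nodes should report Ready once pod networking is in place
    kubectl get nodes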