ghost closed this issue 1 year ago
Installation exception
OS environment: Ubuntu 22.04
Installation mode: all-in-one
The deployment completes successfully, but the console cannot be accessed: nothing is listening on the NodePort (30880).
...
TASK [ks-core/ks-core : KubeSphere | Upgrade CRDs] *****************************
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_federatedusers.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_federatedroles.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/storage.kubesphere.io_storageclasseraccessor.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/gateway.kubesphere.io_nginxes.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/servicemesh.kubesphere.io_servicepolicies.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/cluster.kubesphere.io_clusters.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/application.kubesphere.io_helmreleases.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_groups.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/network.kubesphere.io_ippools.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/tenant.kubesphere.io_workspaces.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/application.kubesphere.io_helmapplicationversions.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_loginrecords.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/gateway.kubesphere.io_gateways.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/application.kubesphere.io_helmrepos.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/quota.kubesphere.io_resourcequotas.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/app_v1beta1_application.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_globalroles.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_users.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_federatedrolebindings.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/network.kubesphere.io_ipamblocks.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/application.kubesphere.io_helmapplications.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/tenant.kubesphere.io_workspacetemplates.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/network.kubesphere.io_ipamhandles.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_groupbindings.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/servicemesh.kubesphere.io_strategies.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/application.kubesphere.io_helmcategories.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_globalrolebindings.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/network.kubesphere.io_namespacenetworkpolicies.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_workspaceroles.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_rolebases.yaml)
changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_workspacerolebindings.yaml)

TASK [ks-core/ks-core : KubeSphere | Creating ks-core] *************************
changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Creating manifests] ***********************
changed: [localhost] => (item={'path': 'ks-core', 'file': 'ks-upgrade.yaml'})

TASK [ks-core/ks-core : Kubesphere | Checking Users Manger and Workspaces Manger] ***
changed: [localhost]

TASK [ks-core/ks-core : Kubesphere | Checking migration job] *******************
skipping: [localhost]

TASK [ks-core/ks-core : KubeSphere | Creating migration job] *******************
skipping: [localhost]

TASK [ks-core/ks-core : KubeSphere | Importing ks-core status] *****************
changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Checking RoleTemplate Alerting Management] ***
changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Deleteing RoleTemplate Alerting Management] ***
skipping: [localhost]

TASK [ks-core/prepare : KubeSphere | Checking core components (1)] *************
changed: [localhost]

TASK [ks-core/prepare : KubeSphere | Checking core components (2)] *************
changed: [localhost]

TASK [ks-core/prepare : KubeSphere | Checking core components (3)] *************
skipping: [localhost]

TASK [ks-core/prepare : KubeSphere | Checking core components (4)] *************
skipping: [localhost]

TASK [ks-core/prepare : KubeSphere | Updating ks-core status] ******************
skipping: [localhost]

TASK [ks-core/prepare : set_fact] **********************************************
skipping: [localhost]

TASK [ks-core/prepare : KubeSphere | Creating KubeSphere directory] ************
ok: [localhost]

TASK [ks-core/prepare : KubeSphere | Getting installation init files] **********
changed: [localhost] => (item=ks-init)

TASK [ks-core/prepare : KubeSphere | Initing KubeSphere] ***********************
changed: [localhost] => (item=role-templates.yaml)

TASK [ks-core/prepare : KubeSphere | Generating kubeconfig-admin] **************
skipping: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=31   changed=22   unreachable=0    failed=0    skipped=16   rescued=0    ignored=0

Start installing monitoring
Start installing multicluster
Start installing openpitrix
Start installing network
**************************************************
Waiting for all tasks to be completed ...
task network status is successful  (1/4)
task openpitrix status is successful  (2/4)
task multicluster status is successful  (3/4)
task monitoring status is successful  (4/4)
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.3.120:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.
#####################################################
https://kubesphere.io             2023-03-06 21:01:07
#####################################################
^C
root@xps:/home/bp/Documents/k8s# ./kk version
kk version: &version.Info{Major:"3", Minor:"0", GitVersion:"v3.0.7", GitCommit:"e755baf67198d565689d7207378174f429b508ba", GitTreeState:"clean", BuildDate:"2023-01-18T01:57:24Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
root@xps:/home/bp/Documents/k8s# uname -r
5.19.0-35-generic
root@xps:/home/bp/Documents/k8s# cat /etc/issue
Ubuntu 22.04.2 LTS \n \l
root@xps:/home/bp/Documents/k8s# ss -lnt| grep 30880
root@xps:/home/bp/Documents/k8s# kubectl get po -A
NAMESPACE                      NAME                                             READY   STATUS    RESTARTS   AGE
kube-system                    calico-kube-controllers-676c86494f-jt49t         1/1     Running   0          8m22s
kube-system                    calico-node-m7pv5                                1/1     Running   0          8m22s
kube-system                    coredns-757cd945b-5lkjt                          1/1     Running   0          8m22s
kube-system                    coredns-757cd945b-9wg69                          1/1     Running   0          8m22s
kube-system                    kube-apiserver-xps                               1/1     Running   0          8m39s
kube-system                    kube-controller-manager-xps                      1/1     Running   0          8m37s
kube-system                    kube-proxy-clkj7                                 1/1     Running   0          8m22s
kube-system                    kube-scheduler-xps                               1/1     Running   0          8m39s
kube-system                    nodelocaldns-4vscv                               1/1     Running   0          8m22s
kube-system                    openebs-localpv-provisioner-7974b86588-9j5dm     1/1     Running   0          8m22s
kube-system                    snapshot-controller-0                            1/1     Running   0          7m38s
kubesphere-controls-system     default-http-backend-659cc67b6b-5p6fn            1/1     Running   0          7m3s
kubesphere-controls-system     kubectl-admin-7966644f4b-2bz4x                   1/1     Running   0          5m50s
kubesphere-monitoring-system   alertmanager-main-0                              2/2     Running   0          6m20s
kubesphere-monitoring-system   kube-state-metrics-69f4fbb5d6-2g7k6              3/3     Running   0          6m22s
kubesphere-monitoring-system   node-exporter-ltfg9                              2/2     Running   0          6m22s
kubesphere-monitoring-system   notification-manager-deployment-cdd656fd-zdbr5   2/2     Running   0          6m5s
kubesphere-monitoring-system   notification-manager-operator-7f7c564948-4zm7q   2/2     Running   0          6m11s
kubesphere-monitoring-system   prometheus-k8s-0                                 2/2     Running   0          6m21s
kubesphere-monitoring-system   prometheus-operator-684988fc5c-8tkj8             2/2     Running   0          6m24s
kubesphere-system              ks-apiserver-78bc58d684-w5brn                    1/1     Running   0          7m3s
kubesphere-system              ks-console-799f77fb7d-cjf9g                      1/1     Running   0          7m3s
kubesphere-system              ks-controller-manager-8569fb495c-5q4xj           1/1     Running   0          7m3s
kubesphere-system              ks-installer-86ddb55c5b-2cgsm                    1/1     Running   0          8m22s
root@xps:/home/bp/Documents/k8s# kubectl get svc -A
NAMESPACE                      NAME                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
default                        kubernetes                                ClusterIP   10.233.0.1      <none>        443/TCP                        8m44s
kube-system                    coredns                                   ClusterIP   10.233.0.3      <none>        53/UDP,53/TCP,9153/TCP         8m41s
kube-system                    kube-controller-manager-svc               ClusterIP   None            <none>        10257/TCP                      6m26s
kube-system                    kube-scheduler-svc                        ClusterIP   None            <none>        10259/TCP                      6m26s
kube-system                    kubelet                                   ClusterIP   None            <none>        10250/TCP,10255/TCP,4194/TCP   6m27s
kubesphere-controls-system     default-http-backend                      ClusterIP   10.233.20.88    <none>        80/TCP                         7m8s
kubesphere-monitoring-system   alertmanager-main                         ClusterIP   10.233.1.61     <none>        9093/TCP,8080/TCP              6m25s
kubesphere-monitoring-system   alertmanager-operated                     ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP     6m25s
kubesphere-monitoring-system   kube-state-metrics                        ClusterIP   None            <none>        8443/TCP,9443/TCP              6m27s
kubesphere-monitoring-system   node-exporter                             ClusterIP   None            <none>        9100/TCP                       6m27s
kubesphere-monitoring-system   notification-manager-controller-metrics   ClusterIP   10.233.63.229   <none>        8443/TCP                       6m16s
kubesphere-monitoring-system   notification-manager-svc                  ClusterIP   10.233.40.66    <none>        19093/TCP                      6m10s
kubesphere-monitoring-system   notification-manager-webhook              ClusterIP   10.233.58.199   <none>        443/TCP                        6m16s
kubesphere-monitoring-system   prometheus-k8s                            ClusterIP   10.233.58.27    <none>        9090/TCP,8080/TCP              6m26s
kubesphere-monitoring-system   prometheus-operated                       ClusterIP   None            <none>        9090/TCP                       6m26s
kubesphere-monitoring-system   prometheus-operator                       ClusterIP   None            <none>        8443/TCP                       6m31s
kubesphere-system              ks-apiserver                              ClusterIP   10.233.21.100   <none>        80/TCP                         7m8s
kubesphere-system              ks-console                                NodePort    10.233.18.31    <none>        80:30880/TCP                   7m8s
kubesphere-system              ks-controller-manager                     ClusterIP   10.233.34.105   <none>        443/TCP                        7m8s
root@xps:/home/bp/Documents/k8s#
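Note that the service table still lists ks-console as a NodePort service (80:30880/TCP) even though ss -lnt shows nothing on 30880. A quick way to tell whether the port is actually dead is to probe it over HTTP instead of looking for a listening socket. A minimal sketch, using the node IP from the install output above:

```shell
# Probe the ks-console NodePort directly; kube-proxy may still forward the
# port even though no process shows it as LISTEN in `ss -lnt`.
if command -v curl >/dev/null 2>&1; then
  CODE=$(curl -s -o /dev/null -w '%{http_code}' --connect-timeout 3 \
    http://192.168.3.120:30880 || true)
else
  CODE=unavailable  # curl is not installed on this host
fi
echo "HTTP status from ks-console NodePort: ${CODE}"
# "000" means no connection at all; any 2xx/3xx means the console is up.
```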
No response
Newer versions of Kubernetes no longer open listening sockets for NodePorts, so ss -lnt shows nothing on 30880 even when the NodePort works: https://github.com/kubernetes/kubernetes/pull/108496
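The linked PR removed kube-proxy's port-holding sockets around the v1.24 release, so it is worth confirming the cluster version first. A sketch, assuming kubectl is on the PATH and pointed at this cluster:

```shell
# NodePorts stopped appearing as listening sockets once kube-proxy dropped
# its port-holding sockets (kubernetes#108496); check the server version.
SERVER_VERSION=$(kubectl version 2>/dev/null | grep -i 'server' || true)
echo "Server version line: ${SERVER_VERSION:-kubectl not available here}"
```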
But you should still be able to check it with ipvsadm:

ipvsadm | grep 30880
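If the cluster runs kube-proxy in iptables mode rather than IPVS, ipvsadm will print nothing; the equivalent check is against the iptables rules. A sketch of both (the -Ln flags force numeric output so 30880 matches literally; both commands typically need root):

```shell
# IPVS mode: list virtual services numerically and look for the NodePort.
IPVS_RULE=$(ipvsadm -Ln 2>/dev/null | grep 30880 || true)
# iptables mode: the NodePort shows up in the KUBE-NODEPORTS rules instead.
IPT_RULE=$(iptables-save 2>/dev/null | grep 30880 || true)
echo "ipvs:     ${IPVS_RULE:-none found}"
echo "iptables: ${IPT_RULE:-none found}"
```

If either command prints a matching rule, kube-proxy has programmed the port and the console should be reachable despite the empty ss output.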
Thanks.