kubesphere / kubekey

Install Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, plus related cloud-native add-ons. Supports all-in-one, multi-node, and HA deployments 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

Support generating a default offline manifest YAML for a fixed Kubernetes version #1295

Open willzhang opened 2 years ago

willzhang commented 2 years ago

Your current KubeKey version

2.1.0

Describe this feature

refer to: https://github.com/kubesphere/kubekey/blob/master/docs/manifest_and_artifact.md

The Manifest is an offline installation package configuration file. There are currently two ways to generate this file:

The problems:

Describe the solution you'd like

It should not need a --kubeconfig config.

The Kubernetes version determines all the other necessary components and the supported version range for KubeKey:

./kk create manifest default --kubernetes-version=v1.24.0 
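A rough sketch of what such a generator could emit, purely illustrative (the manifest fields below are placeholders taken from the manifest examples in this thread, not KubeKey's real defaults):

```shell
# Hypothetical sketch of the proposed UX: derive a minimal manifest
# skeleton from only a Kubernetes version, with no --kubeconfig needed.
# The fields emitted are placeholders, not KubeKey's real defaults.
gen_manifest_skeleton() {
cat <<EOF
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: default
spec:
  arches:
  - amd64
  kubernetesDistributions:
  - type: kubernetes
    version: $1
EOF
}

gen_manifest_skeleton v1.24.0
```

The real implementation would also have to fill in the dependent component versions (crictl, etcd, cni, ...) from the given Kubernetes version, which is exactly the mapping this issue asks KubeKey to own.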

Additional information

No response

24sama commented 2 years ago

Hi @willzhang Thanks for your feedback! I have a similar idea to yours in my mind!

Regarding method 2 of generating a manifest, here is how we thought about it before:

Before users install an offline k8s cluster, they usually first install a normal, online k8s cluster that includes their applications, e.g. k8s + the KubeSphere platform. They need an environment to verify that the installation and solution are correct. After that, they can use kk to connect to this cluster, scan the cluster info, and generate a manifest for it. Conceptually it is similar to docker commit + docker save. Although kk only scans the basic cluster info at present, we want method 2 to eventually capture all of the cluster info, perhaps by using velero.

BTW, regarding this feature request, I think it's also needed. Here is another issue asking for the same thing: https://github.com/kubesphere/kubekey/issues/1069

willzhang commented 2 years ago

Reference: https://github.com/kubesphere/kubekey/blob/master/docs/manifest-example.md

If I want to use KubeKey v2.2.2 to install Kubernetes v1.24.3 and KubeSphere v3.3.0, what should all the component versions below be? Can I use any versions I want?

e.g.:

Can I get a version list of components and images from somewhere other than manifest-example?

There is an images-list.txt here: https://github.com/kubesphere/ks-installer/releases/download/v3.3.1-rc.0/images-list.txt, but it may not be suitable for kubekey v2.2.2 + kubernetes v1.24.3 + kubesphere v3.3.0.

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches: 
  - amd64
  operatingSystems: 
  - arch: amd64
    type: linux
    id: ubuntu
    version: "20.04"
    osImage: Ubuntu 20.04.3 LTS
    repository: 
      iso:
        localPath: 
        url: https://github.com/kubesphere/kubekey/releases/download/v2.0.0/ubuntu-20.04-amd64-debs.iso
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    osImage: CentOS Linux 7 (Core)
    repository:
      iso:
        localPath:
        url: https://github.com/kubesphere/kubekey/releases/download/v2.0.0/centos-7-amd64-rpms.iso
  kubernetesDistributions: 
  - type: kubernetes
    version: v1.21.5
  components: 
    helm:
      version: v3.6.3
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    containerRuntimes:
    - type: docker
      version: 20.10.8
    crictl:
      version: v1.22.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - docker.io/calico/cni:v3.20.0
  - docker.io/calico/kube-controllers:v3.20.0
  - docker.io/calico/node:v3.20.0
  - docker.io/calico/pod2daemon-flexvol:v3.20.0
  - docker.io/coredns/coredns:1.8.0
  - docker.io/kubesphere/k8s-dns-node-cache:1.15.12
  - docker.io/kubesphere/kube-apiserver:v1.21.5
  - docker.io/kubesphere/kube-controller-manager:v1.21.5
  - docker.io/kubesphere/kube-proxy:v1.21.5
  - docker.io/kubesphere/kube-scheduler:v1.21.5
  - docker.io/kubesphere/pause:3.4.1

So will there be a version list like the following version_lists.txt for kubekey v2.2.2 / kubernetes v1.24.3 / kubesphere v3.3.0:

docker: v20.10.8
cni: v0.9.1
  images:
  - docker.io/calico/cni:v3.20.0

and a version_lists.txt for kubekey v2.2.3 / kubernetes v1.24.4 / kubesphere v3.3.1:

docker: v20.10.9
cni: v0.9.2
  images:
  - docker.io/calico/cni:v3.23.0
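Until such a list exists, one mechanical workaround is converting an images-list.txt into the `images:` entries a Manifest expects. A sketch (the sample input lines are assumptions modeled on the ks-installer file, check them against the real release asset):

```shell
# Turn an images-list.txt (one image reference per line, with "##"
# section headers) into "images:" entries for a Manifest. The sample
# input is inlined; in practice it would come from the ks-installer
# release asset for the matching KubeSphere version.
to_manifest_images() {
  # drop section headers/comments and blank lines, then indent as YAML list items
  grep -v '^#' | grep -v '^$' | sed 's/^/  - /'
}

cat <<'EOF' | to_manifest_images
##k8s-images
docker.io/kubesphere/kube-apiserver:v1.24.3
docker.io/kubesphere/kube-proxy:v1.24.3
docker.io/coredns/coredns:1.8.6
EOF
```

This only reshapes the list; it does not solve the harder problem of knowing which list is correct for a given kubekey + kubernetes + kubesphere combination.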

24sama commented 2 years ago

  1. k8s v1.24+ installed by kk does not support using docker.
  2. Just use kk to create the expected k8s cluster for a test, and then you can use kk create manifest to automatically generate a manifest file.

willzhang commented 2 years ago

1. Create manifest-sample.yaml by hand

root@ubuntu:/data/kubesphere/v3.3.0# cat manifest-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches: 
  - amd64
  operatingSystems: 
  - arch: amd64
    type: linux
    id: ubuntu
    version: "22.04"
    osImage: Ubuntu 22.04 LTS
    repository: 
      iso:
        localPath: 
        url: https://github.com/kubesphere/kubekey/releases/download/v2.2.2/ubuntu-22.04-debs-amd64.iso
  kubernetesDistributions: 
  - type: kubernetes
    version: v1.21.5
  components: 
    helm:
      version: v3.6.3
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    containerRuntimes:
    - type: containerd
      version: 1.6.4
    crictl:
      version: v1.22.0
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - docker.io/calico/cni:v3.20.0
  - docker.io/calico/kube-controllers:v3.20.0
  - docker.io/calico/node:v3.20.0
  - docker.io/calico/pod2daemon-flexvol:v3.20.0
  - docker.io/coredns/coredns:1.8.0
  - docker.io/kubesphere/k8s-dns-node-cache:1.15.12
  - docker.io/kubesphere/kube-apiserver:v1.21.5
  - docker.io/kubesphere/kube-controller-manager:v1.21.5
  - docker.io/kubesphere/kube-proxy:v1.21.5
  - docker.io/kubesphere/kube-scheduler:v1.21.5
  - docker.io/kubesphere/pause:3.4.1

2. Generate config-sample.yaml

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.72.40, internalAddress: 192.168.72.40, user: root, password: "123456"}
  - {name: node2, address: 192.168.72.41, internalAddress: 192.168.72.41, user: root, password: "123456"}
  - {name: node3, address: 192.168.72.42, internalAddress: 192.168.72.42, user: root, password: "123456"}
  - {name: harbor, address: 192.168.72.43, internalAddress: 192.168.72.43, user: root, password: "123456"}
  roleGroups:
    etcd:
    - node1
    control-plane: 
    - node1
    worker:
    - node1
    - node2
    - node3
    registry:
    - harbor
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    multusCNI:
      enabled: false
  registry:
    type: harbor
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

3. Offline install

KubeKey starts downloading amd64 crictl v1.24.0 from the internet. So what's wrong? Getting the component version selection and configuration wrong makes this very painful and confusing.

root@ubuntu:/data/kubesphere/v3.3.0# kk create cluster -f config-sample.yaml -a kubernetes-v1.21.5-artifact.tar.gz --with-packages

 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

20:59:17 CST [GreetingsModule] Greetings
20:59:17 CST message: [node3]
Greetings, KubeKey!
20:59:19 CST message: [harbor]
Greetings, KubeKey!
20:59:19 CST message: [node1]
Greetings, KubeKey!
20:59:20 CST message: [node2]
Greetings, KubeKey!
20:59:20 CST success: [node3]
20:59:20 CST success: [harbor]
20:59:20 CST success: [node1]
20:59:20 CST success: [node2]
20:59:20 CST [NodePreCheckModule] A pre-check on nodes
20:59:20 CST success: [node3]
20:59:20 CST success: [node1]
20:59:20 CST success: [node2]
20:59:20 CST success: [harbor]
20:59:20 CST [ConfirmModule] Display confirmation form
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
| name   | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker  | containerd | nfs client | ceph client | glusterfs client | time         |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
| node1  | y    | y    | y       | y        | y     | y     | y       | y         | y      |         |            |            |             |                  | CST 20:59:20 |
| harbor | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.8 | v1.4.9     |            |             |                  | CST 20:59:20 |
| node2  | y    | y    | y       | y        | y     | y     | y       | y         | y      |         |            |            |             |                  | CST 20:59:20 |
| node3  | y    | y    | y       | y        | y     | y     | y       | y         | y      |         |            |            |             |                  | CST 20:59:20 |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
20:59:23 CST success: [LocalHost]
20:59:23 CST [UnArchiveArtifactModule] Check the KubeKey artifact md5 value
20:59:25 CST success: [LocalHost]
20:59:25 CST [UnArchiveArtifactModule] UnArchive the KubeKey artifact
20:59:25 CST skipped: [LocalHost]
20:59:25 CST [UnArchiveArtifactModule] Create the KubeKey artifact Md5 file
..................................................
20:59:31 CST message: [localhost]
downloading amd64 kubecni v0.9.1 ...
20:59:32 CST message: [localhost]
kubecni is existed
20:59:32 CST message: [localhost]
downloading amd64 crictl v1.24.0 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 30 13.8M   30 4387k    0     0  10966      0  0:22:04  0:06:49  0:15:15 17572
24sama commented 2 years ago

Sorry to make you feel pain.

I see you finally created a k8s v1.24.0. But your manifest file shows:

(screenshot: manifest component versions)

These two components' versions may not be what your expected cluster needs.

Because kk supports installing many versions of each component, the effort required prevents us from providing good best practices for every version, so it is up to users to try and configure the versions themselves.
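One rule that is safe to automate: crictl (cri-tools) releases track Kubernetes minor versions, so its minor should match the cluster's. A hypothetical helper (the ".0" patch level is only a first guess, not a verified release; check the cri-tools releases page):

```shell
# Pick a crictl version whose minor matches the Kubernetes version,
# since cri-tools releases track Kubernetes minor versions. The ".0"
# patch level is only a starting guess, not a verified release.
crictl_for_k8s() {
  k8s="$1"                              # e.g. v1.24.3
  minor=$(echo "$k8s" | cut -d. -f1-2)  # -> v1.24
  echo "${minor}.0"
}

crictl_for_k8s v1.24.3    # prints v1.24.0
```

This is exactly the mismatch in the manifest above: crictl v1.22.0 was declared while the tooling wanted a v1.24-series crictl.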

willzhang commented 2 years ago

I changed crictl to version: v1.24.0, but now KubeKey needs kube-controllers:v3.23.2.

root@ubuntu:/data/kubesphere/v3.3.0# cat manifest-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches: 
  - amd64
  operatingSystems: 
  - arch: amd64
    type: linux
    id: ubuntu
    version: "22.04"
    osImage: Ubuntu 22.04 LTS
    repository: 
      iso:
        localPath: 
        url: https://github.com/kubesphere/kubekey/releases/download/v2.2.2/ubuntu-22.04-debs-amd64.iso
  kubernetesDistributions: 
  - type: kubernetes
    version: v1.21.5
  components: 
    helm:
      version: v3.6.3
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    containerRuntimes:
    - type: containerd
      version: 1.6.4
    crictl:
      version: v1.24.0
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - docker.io/calico/cni:v3.20.0
  - docker.io/calico/kube-controllers:v3.20.0
  - docker.io/calico/node:v3.20.0
  - docker.io/calico/pod2daemon-flexvol:v3.20.0
  - docker.io/coredns/coredns:1.8.0
  - docker.io/kubesphere/k8s-dns-node-cache:1.15.12
  - docker.io/kubesphere/kube-apiserver:v1.21.5
  - docker.io/kubesphere/kube-controller-manager:v1.21.5
  - docker.io/kubesphere/kube-proxy:v1.21.5
  - docker.io/kubesphere/kube-scheduler:v1.21.5
  - docker.io/kubesphere/pause:3.4.1

The error logs:

$: kk create cluster -f config-sample.yaml -a kubernetes-v1.21.5.tar.gz --with-packages
......
10:07:24 CST message: [node2]
downloading image: dockerhub.kubekey.local/kubesphereio/pause:3.4.1
10:07:24 CST message: [node3]
downloading image: dockerhub.kubekey.local/kubesphereio/pause:3.4.1
10:07:24 CST message: [node2]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.21.5
10:07:24 CST message: [node3]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.21.5
10:07:25 CST message: [node2]
downloading image: dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
10:07:25 CST message: [node3]
downloading image: dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
10:07:25 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/pause:3.4.1
10:07:25 CST message: [node2]
downloading image: dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
10:07:25 CST message: [node3]
downloading image: dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
10:07:25 CST message: [node2]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2
10:07:25 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.21.5
10:07:25 CST message: [node2]
pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH crictl pull dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2" 
E0909 10:07:25.740734    5105 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found" image="dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2"
FATA[0000] pulling image: rpc error: code = NotFound desc = failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found: Process exited with status 1
10:07:25 CST message: [node3]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2
10:07:25 CST message: [node3]
pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH crictl pull dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2" 
E0909 10:07:25.910396    4891 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found" image="dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2"
FATA[0000] pulling image: rpc error: code = NotFound desc = failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found: Process exited with status 1
10:07:25 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.21.5
10:07:26 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.21.5
10:07:26 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.21.5
10:07:26 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
10:07:38 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
10:07:40 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2
10:07:40 CST message: [node1]
pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH crictl pull dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2" 
E0909 10:07:40.975402    4965 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found" image="dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2"
FATA[0000] pulling image: rpc error: code = NotFound desc = failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found: Process exited with status 1
10:07:40 CST retry: [node1]
10:07:45 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/pause:3.4.1
10:07:46 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.21.5
10:07:47 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.21.5
10:07:47 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.21.5
10:07:48 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.21.5
10:07:48 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
10:07:48 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
10:07:49 CST message: [node1]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2
10:07:49 CST message: [node1]
pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH crictl pull dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2" 
E0909 10:07:49.201332    5014 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found" image="dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2"
FATA[0000] pulling image: rpc error: code = NotFound desc = failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found: Process exited with status 1
10:07:49 CST success: [harbor]
10:07:49 CST failed: [node2]
10:07:49 CST failed: [node3]
10:07:49 CST failed: [node1]
error: Pipeline[CreateClusterPipeline] execute failed: Module[PullModule] exec failed: 
failed: [node2] [PullImages] exec failed after 3 retires: pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH crictl pull dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2" 
E0909 10:07:25.740734    5105 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found" image="dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2"
FATA[0000] pulling image: rpc error: code = NotFound desc = failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found: Process exited with status 1
failed: [node3] [PullImages] exec failed after 3 retires: pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH crictl pull dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2" 
E0909 10:07:25.910396    4891 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found" image="dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2"
FATA[0000] pulling image: rpc error: code = NotFound desc = failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found: Process exited with status 1
failed: [node1] [PullImages] exec failed after 3 retires: pull image failed: Failed to exec command: sudo -E /bin/bash -c "env PATH=$PATH crictl pull dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2" 
E0909 10:07:49.201332    5014 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2\": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found" image="dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2"
FATA[0000] pulling image: rpc error: code = NotFound desc = failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2": dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2: not found: Process exited with status 1
24sama commented 2 years ago

Oh, kk v2.2.2 bumped the calico and openebs versions. Please see this issue about the same problem: https://github.com/kubesphere/kubekey/issues/1488#issuecomment-1237605418

And here is the PR to update the document: https://github.com/kubesphere/website/pull/2662
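Given that bump, the stale calico tags in a hand-written manifest can be rewritten in place before rebuilding the artifact. A sketch (the v3.23.2 target tag is taken from the failing pull logs above; verify it against the linked issue):

```shell
# Rewrite old calico image tags in a manifest to the version KubeKey
# v2.2.2 actually pulls (v3.23.2, as seen in the failing logs above).
# Only docker.io/calico/* images are touched; other images pass through.
bump_calico() {
  sed 's#\(docker\.io/calico/[a-z0-9-]*\):v3\.20\.0#\1:v3.23.2#'
}

cat <<'EOF' | bump_calico
  - docker.io/calico/cni:v3.20.0
  - docker.io/calico/kube-controllers:v3.20.0
  - docker.io/kubesphere/pause:3.4.1
EOF
```

Usage would be `bump_calico < manifest-sample.yaml > manifest-fixed.yaml`, after which the artifact must be regenerated so Harbor actually contains the new tags.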

willzhang commented 2 years ago

> Sorry to make you feel pain.
>
> I see you finally created a k8s v1.24.0. But your manifest file shows: image These 2 components' versions maybe not be your expected cluster needed.
>
> Because kk supports the installation of many versions of components, the effort now prevents us from providing good best practices for all versions, so it is up to the user to try and configure them themselves.

I want to find the rules between versions. If someone can answer the question marks in the picture below, maybe we can sort out a basic best practice.

(attachment: kubekey_v2.2.2.png)

willzhang commented 2 years ago

When the Kubernetes version is determined, some component versions are determined as well:

containerd https://containerd.io/releases/#kubernetes-support

etcd

root@ubuntu:~# kubeadm config images list
I0909 12:49:08.871196  708524 version.go:255] remote version is much newer: v1.25.0; falling back to: stable-1.22
k8s.gcr.io/kube-apiserver:v1.22.13
k8s.gcr.io/kube-controller-manager:v1.22.13
k8s.gcr.io/kube-scheduler:v1.22.13
k8s.gcr.io/kube-proxy:v1.22.13
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

calico https://projectcalico.docs.tigera.io/getting-started/kubernetes/requirements#kubernetes-requirements
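The kubeadm output above already pins etcd, coredns, and pause for a given minor, so those manifest values can be extracted mechanically. A sketch over the v1.22 output shown above (for another version, kubeadm would be run with --kubernetes-version):

```shell
# Pull the component versions kubeadm pins (etcd, coredns, pause) out
# of `kubeadm config images list` output. The list below reuses the
# v1.22 output shown above rather than calling kubeadm directly.
component_version() {
  name="$1"
  grep "/${name}:" | sed 's/.*://'
}

images='k8s.gcr.io/kube-apiserver:v1.22.13
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4'

echo "$images" | component_version etcd      # prints 3.5.0-0
echo "$images" | component_version coredns   # prints v1.8.4
```

A document mapping each Kubernetes minor to these pinned versions, plus the calico and containerd support matrices linked above, would cover most of the question marks in the picture.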

24sama commented 2 years ago

@willzhang Thanks for this suggestion. We are also keen to resolve these complex version dependencies. For now, I think we can first write a document showing these dependencies, which should resolve some users' confusion in the meantime.

24sama commented 2 years ago

/kind feature-request