kubesphere / kubekey

Install Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, together with related cloud-native add-ons. Supports all-in-one, multi-node, and HA installations 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

Failed to scale all-in-one to multi-node #52

Closed. FeynmanZhou closed this issue 4 years ago.

FeynmanZhou commented 4 years ago

When I try to scale from all-in-one to multi-node, it fails with `Failed to exec command: sudo -E docker pull kubekey/kube-proxy:v1.17.6: Process exited with status 1`.

Do I need to install Docker on each node?

```
[node1] Downloading image: kubekey/kube-apiserver:v1.17.6
ERRO[13:44:44 CST] Failed to download image: kubekey/kube-proxy:v1.17.6: Failed to exec command: sudo -E docker pull kubekey/kube-proxy:v1.17.6: Process exited with status 1  node=192.168.0.3
ERRO[13:44:45 CST] Failed to download image: kubekey/kube-apiserver:v1.17.6: Failed to exec command: sudo -E docker pull kubekey/kube-apiserver:v1.17.6: Process exited with status 1  node=192.168.0.2
ERRO[13:44:58 CST] Failed to download image: kubekey/kube-proxy:v1.17.6: Failed to exec command: sudo -E docker pull kubekey/kube-proxy:v1.17.6: Process exited with status 1  node=192.168.0.4
WARN[13:44:58 CST] Task failed ...
WARN[13:44:58 CST] error: interrupted by error
Error: Failed to pre-download images: interrupted by error
Usage:
  kk scale [flags]
```
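Note that the log itself rules out missing Docker: the pull command was executed on every node and exited with status 1, which here most likely means the image could not be found rather than that Docker is absent. A hedged way to confirm this by hand on one of the nodes (the kubesphere namespace comes from the reply below; the exact error text may vary by Docker version):

```sh
# Fails: there is no kubekey/kube-proxy repository on Docker Hub
sudo docker pull kubekey/kube-proxy:v1.17.6

# Succeeds: per the reply below, the images live under the kubesphere namespace
sudo docker pull kubesphere/kube-proxy:v1.17.6
```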

My config.yaml is as follows:

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: example
spec:
  hosts:
  - {name: node1, address: 192.168.0.2, internalAddress: 192.168.0.2, password: Qcloud@123}
  - {name: node2, address: 192.168.0.3, internalAddress: 192.168.0.3, password: Qcloud@123}
  - {name: node3, address: 192.168.0.4, internalAddress: 192.168.0.4, password: Qcloud@123}
  roleGroups:
    etcd:
     - node1
    master:
     - node1
    worker:
     - node1
     - node2
     - node3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.6
    imageRepo: kubekey
    clusterName: cluster.local
  network:
    plugin: calico
    podNetworkCidr: 10.233.64.0/18
    serviceNetworkCidr: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  storage:
    defaultStorageClass: localVolume
    nfsClient:
      nfsServer: 172.16.0.2
      nfsPath: /mnt/nfs
      nfsVrs3Enabled: false
      nfsArchiveOnDelete: false
  kubesphere:
    console:
      enableMultiLogin: false  # enable/disable multi login
      port: 30880
    common:
      mysqlVolumeSize: 20Gi
      minioVolumeSize: 20Gi
      etcdVolumeSize: 20Gi
      openldapVolumeSize: 2Gi
      redisVolumSize: 2Gi
    monitoring:
      prometheusReplicas: 1
      prometheusMemoryRequest: 400Mi
      prometheusVolumeSize: 20Gi
      grafana:
        enabled: false
    logging:
      enabled: false
      elasticsearchMasterReplicas: 1
      elasticsearchDataReplicas: 1
      logsidecarReplicas: 2
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      containersLogMountedPath: ""
      kibana:
        enabled: false
    openpitrix:
      enabled: false
    devops:
      enabled: false
      jenkinsMemoryLim: 2Gi
      jenkinsMemoryReq: 1500Mi
      jenkinsVolumeSize: 8Gi
      jenkinsJavaOpts_Xms: 512m
      jenkinsJavaOpts_Xmx: 512m
      jenkinsJavaOpts_MaxRAM: 2g
      sonarqube:
        enabled: false
        postgresqlVolumeSize: 8Gi
    notification:
      enabled: false
    alerting:
      enabled: false
    serviceMesh:
      enabled: false
    metricsServer:
      enabled: false
```
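For reference, the failing image names come straight from `imageRepo`: with `imageRepo: kubekey`, KubeKey asks Docker to pull `kubekey/kube-proxy:v1.17.6`, which does not exist. Per the answer below, the images are published under the kubesphere namespace, so the relevant portion of the config would look like this (a minimal sketch; the surrounding fields stay as above):

```yaml
  kubernetes:
    version: v1.17.6
    imageRepo: kubesphere   # was "kubekey"; the images live under the kubesphere Docker Hub namespace
    clusterName: cluster.local
```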
pixiake commented 4 years ago

These images are in KubeSphere's repository. You should create config.yaml with `./kk create config`.
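A sketch of that workflow (the default output filename `config-sample.yaml` and the `-f` flag are assumptions from typical KubeKey usage, not stated in this thread):

```sh
# Regenerate the config with correct defaults, including imageRepo
./kk create config

# Add the new hosts and roleGroups to the generated file, then scale
./kk scale -f config-sample.yaml
```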

FeynmanZhou commented 4 years ago

> These images are in KubeSphere's repository. You should create config.yaml with `./kk create config`.

Okay, it's resolved. I had misunderstood initially.