kubesphere / kubekey

Install Kubernetes/K3s only, both Kubernetes/K3s and KubeSphere, and related cloud-native add-ons, it supports all-in-one, multi-node, and HA 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

kk create cluster offline with the artifact gets an error #1770

zj452008181 opened this issue 1 year ago

zj452008181 commented 1 year ago

Which version of KubeKey has the issue?

3.0.2, 3.0.7

What is your OS environment?

CentOS 7.9

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: cluster1
spec:
  hosts:
    - {name: master1, address: 10.25.7.18, internalAddress: 10.25.7.18, user: root, password: }
    - {name: master2, address: 10.25.7.98, internalAddress: 10.25.7.98, user: root, password: }
    - {name: master3, address: 10.25.7.81, internalAddress: 10.25.7.81, user: root, password: }
    - {name: worker1, address: 10.25.7.169, internalAddress: 10.25.7.169, user: root, password: }
    - {name: worker2, address: 10.25.7.60, internalAddress: 10.25.7.60, user: root, password: }
  roleGroups:
    etcd:
      - master[1:3]
    master:
      - master[1:3]
    worker:
      - worker[1:2]
#    registry:
#      - master3
  controlPlaneEndpoint:
    ## Internal load balancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""      # The IP address of your load balancer.
    port: 6443
  system:
    ntpServers: #  The ntp servers of chrony.
      - ntp2.aliyun.com
      - ntp.aliyun.com
      - ntp3.aliyun.com # Set a node name from `hosts` as the ntp server if there is no access to public ntp servers.
    timezone: "Asia/Shanghai"
  kubernetes:
    version: v1.22.16
    imageRepo: kubesphere
    clusterName: dev.cluster.local
    autoRenewCerts: true
    masqueradeAll: false
    maxPods: 120
    nodeCidrMaskSize: 24
    proxyMode: ipvs
    featureGates: # enable featureGates, [Default: {"ExpandCSIVolumes":true,"RotateKubeletServerCertificate": true,"CSIStorageCapacity":true, "TTLAfterFinished":true}]
      RotateKubeletServerCertificate: true
      RemoveSelfLink: false
      TTLAfterFinished: true
  etcd:
    type: kubekey  # Specify the type of etcd used by the cluster. When the cluster type is k3s, setting this parameter to kubeadm is invalid. [kubekey | kubeadm | external] [Default: kubekey]
    external:
      endpoints:
        - https://10.25.7.18:2379
        - https://10.25.7.98:2379
        - https://10.25.7.81:2379
      caFile: /etc/ssl/etcd/ssl/ca.pem
      certFile: /etc/ssl/etcd/ssl/admin-etcd1.pem
      keyFile: /etc/ssl/etcd/ssl/admin-etcd1-key.pem
  network:
    plugin: calico
    calico:
      ipipMode: Always
      vxlanMode: Never
      vethMTU: 1440
    kubePodsCIDR: 172.19.24.0/18
    kubeServiceCIDR: 172.13.0.0/18
  registry:
    type: harbor
    registryMirrors: ["https://6ab7z4a4.mirror.aliyuncs.com"]
    insecure-registries: ["10.0.7.147:30002","10.0.7.148:30002","10.0.7.149:30002","10.0.7.150:30002","10.0.7.151:30002","harbor.jusda.int"]
    privateRegistry: "harbor.jusda.int"
   # namespaceOverride: "kubesphere"
    auths: # if docker add by `docker login`, if containerd append to `/etc/containerd/config.toml`
      "harbor.jusda.int":
        username: admin
        password: 
        skipTLSVerify: true # Allow contacting registries over HTTPS with failed TLS verification.
        plainHTTP: false # Allow contacting registries over HTTP.
       # certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local" # Use certificates at path (*.crt, *.cert, *.key) to connect to the registry.
  addons:
  - name: nfs-client
    namespace: kube-system
    sources: 
      chart: 
        name: nfs-client-provisioner
        repo: https://charts.kubesphere.io/main
        valuesFile: nfs-client-provider.yaml
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.1
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: "harbor.jusda.int"        # Add your private registry address if it is needed.
  # dev_tag: ""               # Add your kubesphere image tag you want to install, by default it's same as ks-install release version.
  etcd:
    monitoring: true       # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: 10.25.7.18,10.25.7.98,10.25.7.81  # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
        port: 30880
        type: NodePort
    # apiserver:            # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi # Redis PVC size.
    openldap:
      enabled: true
      volumeSize: 2Gi   # openldap PVC size.
    minio:
      volumeSize: 20Gi # Minio PVC size.
    monitoring:
      # type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
      GPUMonitoring:     # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero. 
        enabled: true
    gpu:                 # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs. 
      kinds:         
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:   # Storage backend for logging, events and auditing.
      # master:
      #   volumeSize: 4Gi  # The volume size of Elasticsearch master nodes.
      #   replicas: 1      # The total number of master nodes. Even numbers are not allowed.
      #   resources: {}
      # data:
      #   volumeSize: 20Gi  # The volume size of Elasticsearch data nodes.
      #   replicas: 1       # The total number of data nodes.
      #   resources: {}
      logMaxAge: 7             # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash      # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true         # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                # Provide a security-relevant chronological set of records, recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: true         # Enable or disable the KubeSphere Auditing Log System.
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true             # Enable or disable the KubeSphere DevOps System.
    # resources: {}
    jenkinsMemoryLim: 6Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 2000m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 4000m
    jenkinsJavaOpts_MaxRAM: 4g
  events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true         # Enable or disable the KubeSphere Events System.
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    ruler:
      enabled: true
      replicas: 1
    #   resources: {}
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: false         # Enable or disable the KubeSphere Logging System.
    containerruntime: docker
    logsidecar:
      enabled: true
      replicas: 1
      # resources: {}
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: true                   # Enable or disable metrics-server.
  monitoring:
    storageClass: ""                 # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    prometheus:
      replicas: 1  # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
      volumeSize: 20Gi  # Prometheus PVC size.
    #   resources: {}
    #   operator:
    #     resources: {}
    #   adapter:
    #     resources: {}
    # node_exporter:
    #   resources: {}
    alertmanager:
      replicas: 1          # AlertManager Replicas.
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:                           # GPU monitoring-related plugins installation.
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: true # Enable or disable network policies.
    ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: calico # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true # Enable or disable the KubeSphere App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: false     # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
  kubeedge:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: false   # Enable or disable KubeEdge.
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
          - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

A clear and concise description of what happened.

14:20:47 CST success: [LocalHost]
14:20:47 CST [CopyImagesToRegistryModule] Push multi-arch manifest to private registry
14:20:47 CST Push multi-arch manifest list: harbor.jusda.int/kubesphere/ks-apiserver:v3.3.1
INFO[0154] Retrieving digests of member images
INFO[0154] trying next host  error="failed to authorize: failed to fetch oauth token: unable to decode token response: invalid character 'U' looking for beginning of value" host=harbor.jusda.int
14:20:47 CST message: [LocalHost]
push image harbor.jusda.int/kubesphere/ks-apiserver:v3.3.1 multi-arch manifest failed: Inspect of image "harbor.jusda.int/kubesphere/ks-apiserver:v3.3.1-amd64" failed with error: failed to authorize: failed to fetch oauth token: unable to decode token response: invalid character 'U' looking for beginning of value
14:20:47 CST failed: [LocalHost]
error: Pipeline[CreateClusterPipeline] execute failed: Module[CopyImagesToRegistryModule] exec failed: failed: [LocalHost] [PushManifest] exec failed after 1 retires: push image harbor.jusda.int/kubesphere/ks-apiserver:v3.3.1 multi-arch manifest failed: Inspect of image "harbor.jusda.int/kubesphere/ks-apiserver:v3.3.1-amd64" failed with error: failed to authorize: failed to fetch oauth token: unable to decode token response: invalid character 'U' looking for beginning of value
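For what it's worth, the leading 'U' in the decode error is consistent with the token endpoint returning a plain-text body such as "Unauthorized" instead of JSON. One way to see the raw response is to query Harbor's token service directly; a minimal sketch, assuming Harbor's default /service/token endpoint and that curl is available (the password placeholder is hypothetical):

# Request a pull token for the failing repository and print the raw response.
# /service/token and service=harbor-registry are Harbor defaults; adjust if your install differs.
curl -sik -u 'admin:<password>' \
  "https://harbor.jusda.int/service/token?service=harbor-registry&scope=repository:kubesphere/ks-apiserver:pull"
# A healthy reply is JSON beginning with {"token": ...}; a plain-text or HTML
# body here would explain the "invalid character 'U'" decode failure.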

Relevant log output

No response

Additional information

No response

pixiake commented 1 year ago
failed to authorize: failed to fetch oauth token: unable to decode token response: invalid character 'U' looking for beginning of value

It seems to have something to do with authorization. Can you log in to this registry directly by running docker login harbor.jusda.int?
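For reference, a minimal login check, assuming the admin credentials from the `auths` section above; since skipTLSVerify is set, the registry certificate is likely self-signed, so the Docker daemon also needs to trust it:

# Log in with the same credentials configured under registry.auths.
docker login harbor.jusda.int -u admin

# With a self-signed certificate, either install the registry CA for the daemon:
#   mkdir -p /etc/docker/certs.d/harbor.jusda.int
#   cp ca.crt /etc/docker/certs.d/harbor.jusda.int/ca.crt
# or add "harbor.jusda.int" to insecure-registries in /etc/docker/daemon.json
# and restart docker.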

zj452008181 commented 1 year ago

Yes. I think there is an error in the artifact package: sometimes the type of the images' index.json file was wrong and some keys were lost. I tried running "kk create manifest" many times, and it finally succeeded.
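A way to catch a broken artifact before running kk create cluster is to re-export it and verify that every OCI index file parses; a hedged sketch, assuming the standard kk manifest/artifact workflow and that jq is installed (the exact layout inside the artifact may vary by KubeKey version):

# Regenerate the manifest, then export the offline artifact from it.
kk create manifest
kk artifact export -m manifest-sample.yaml -o kubekey-artifact.tar.gz

# Unpack and flag any index.json that is malformed or missing its manifests key.
mkdir -p /tmp/kk-artifact && tar -xzf kubekey-artifact.tar.gz -C /tmp/kk-artifact
find /tmp/kk-artifact -name index.json \
  -exec sh -c 'jq -e .manifests "$1" >/dev/null 2>&1 || echo "broken: $1"' _ {} \;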