kubesphere / kubekey

Install Kubernetes/K3s only, both Kubernetes/K3s and KubeSphere, and related cloud-native add-ons, it supports all-in-one, multi-node, and HA 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

containerd does not apply insecureRegistries in config.toml #1360

Open · Jaycean opened this issue 2 years ago

Jaycean commented 2 years ago

Which version of KubeKey has the issue?

version.BuildInfo{Version:"latest+unreleased", GitCommit:"b21bdd4d858b87d9c5e93150a7c1fb5495eeed24", GitTreeState:"dirty", GoVersion:"go1.17.11"}

What is your OS environment?

CentOS 7

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 10.1.30.158, internalAddress: 10.1.30.158, user: root, password: "qwer1234"}
  roleGroups:
    etcd:
    - node1
    control-plane: 
    - node1
    worker:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.18.4-k3s
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: "192.168.5.61:1080"
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: [192.168.5.61:1080]
    auths: # with docker these are added via `docker login`; with containerd they are appended to `/etc/containerd/config.toml`
      "192.168.5.61:1080":
        username: "admin"
        password: "Harbor12345"
        skipTLSVerify: false # Allow contacting registries over HTTPS with failed TLS verification.
        plainHTTP: true # Allow contacting registries over HTTP.
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  zone: ""
  local_registry: ""
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    es:
      elasticsearchDataVolumeSize: 20Gi
      elasticsearchMasterVolumeSize: 4Gi
      elkPrefix: logstash
      logMaxAge: 7
    mysqlVolumeSize: 20Gi
    minioVolumeSize: 20Gi
    etcdVolumeSize: 20Gi
    openldapVolumeSize: 2Gi
    redisVolumSize: 2Gi
  console:
    enableMultiLogin: false  # enable/disable multi login
    port: 30880
  alerting:
    enabled: false
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecarReplicas: 2
  metrics_server:
    enabled: true
  monitoring:
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none  # host | member | none
  networkpolicy:
    enabled: false
  notification:
    enabled: false
  openpitrix:
    enabled: false
  servicemesh:
    enabled: false

A clear and concise description of what happened.

Contents of /var/lib/rancher/k3s/agent/etc/containerd/config.toml:

[plugins.opt]
  path = "/var/lib/rancher/k3s/agent/containerd"

[plugins.cri]
  stream_server_address = "127.0.0.1"
  stream_server_port = "10010"
  enable_selinux = false
  sandbox_image = "192.168.5.61:1080/kubesphere/pause:3.2"

[plugins.cri.containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
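
Note that the generated config above contains no [plugins.cri.registry] section at all. If the insecureRegistries and auths settings from the cluster config had been applied, one would expect an extra block roughly like the following (a sketch built from the values in this issue's config; not output actually produced by kk):

[plugins.cri.registry.mirrors."192.168.5.61:1080"]
  endpoint = ["http://192.168.5.61:1080"]

[plugins.cri.registry.configs."192.168.5.61:1080".auth]
  username = "admin"
  password = "Harbor12345"

With an http:// endpoint registered this way, containerd would pull from the registry over plain HTTP instead of defaulting to HTTPS.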

Relevant log output

Warning  FailedCreatePodSandBox  0s (x5 over 49s)  kubelet, node1
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image
"192.168.5.61:1080/kubesphere/pause:3.2": failed to pull image
"192.168.5.61:1080/kubesphere/pause:3.2": failed to pull and unpack image
"192.168.5.61:1080/kubesphere/pause:3.2": failed to resolve reference
"192.168.5.61:1080/kubesphere/pause:3.2": failed to do request: Head
https://192.168.5.61:1080/v2/kubesphere/pause/manifests/3.2: http: server gave HTTP response to HTTPS client

Additional information

No response

24sama commented 2 years ago

Hi @Jaycean, KK does nothing about the container runtime when installing a k3s cluster. The containerd in your cluster was installed by k3s itself, so you need to refer to the k3s documentation. Here are some documents I found that may be what you need:

https://rancher.com/docs/k3s/latest/en/installation/private-registry/
https://rancher.com/docs/k3s/latest/en/installation/airgap/
https://rancher.com/docs/k3s/latest/en/advanced/#configuring-containerd
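
For reference, the private-registry document above configures this through /etc/rancher/k3s/registries.yaml rather than by editing config.toml directly. A minimal sketch for the registry used in this issue (values taken from the cluster config; adjust to your environment) could look like:

mirrors:
  "192.168.5.61:1080":
    endpoint:
      - "http://192.168.5.61:1080"
configs:
  "192.168.5.61:1080":
    auth:
      username: admin
      password: Harbor12345

On startup, k3s reads this file and renders the corresponding registry entries into the config.toml it generates, so the k3s service has to be restarted after creating it.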

Jaycean commented 2 years ago

Hi @24sama, thanks. I have solved this problem by manually modifying the configuration, but I am curious why the fix from PR #1271 does not take effect. I need to run further tests.

24sama commented 2 years ago

This PR #1271 is related to the containerd installed by kk.

Jaycean commented 2 years ago

> This PR #1271 is related to the containerd installed by kk.
>
>   • About Kubernetes: KK will install the containerd and configure it. This PR is for that case.
>   • About K3s: KK only executes the official k3s install script; the containerd is installed by k3s and cannot be managed by kk.

Yes, in my tests the KK configuration for containerd did not take effect, and I don't know why. When I have time, I'll test further to see what's wrong.

24sama commented 2 years ago

Other information: KK only runs the containerd configuration task when the node does not already have containerd. If you manually installed containerd or docker beforehand, kk will skip that configuration task.

Jaycean commented 2 years ago

What I can confirm is that I did not pre-install docker or containerd. Thanks.

24sama commented 2 years ago

> About K3s: KK only executes the official k3s install script; the containerd is installed by k3s and cannot be managed by kk.

So in this case, kk does nothing to configure containerd. That is expected, because containerd was not installed by kk.

> Other information: KK only runs the containerd configuration task when the node does not already have containerd. If you manually installed containerd or docker beforehand, kk will skip that configuration task.

This logic also matches the information above.

Therefore, unfortunately, if you want to use kk to install a k3s cluster, you need to configure containerd manually.
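
As a rough sketch of that manual step (following the "Configuring containerd" document linked above, and assuming a systemd-managed single-node k3s server as in this issue):

# k3s regenerates config.toml on every start; to customize it, provide a
# config.toml.tmpl in the same directory and k3s will render the config from that template.
cp /var/lib/rancher/k3s/agent/etc/containerd/config.toml \
   /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
# edit config.toml.tmpl (for example, add a registry section like the one sketched earlier), then:
systemctl restart k3s

Alternatively, the /etc/rancher/k3s/registries.yaml approach sketched earlier avoids maintaining a full containerd template and is usually the simpler option for registry settings.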

xiaods commented 2 years ago

You can use k8e; it is integrated with containerd.

curl -sfL https://getk8e.com/install.sh | K8E_TOKEN=ilovek8e INSTALL_K8E_EXEC="server --cluster-init --write-kubeconfig-mode 644" sh -