kubesphere / kubekey

Install Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, plus related cloud-native add-ons; supports all-in-one, multi-node, and HA deployments 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

Request: support passing the --apiserver-cert-extra-sans flag through to kubeadm #1679

Open · StringKe opened this issue 1 year ago

StringKe commented 1 year ago

Your current KubeKey version

kk version: &version.Info{Major:"3", Minor:"0", GitVersion:"v3.0.6", GitCommit:"ec903fe13dfed73ffd3f72f4beec3123675ce4d0", GitTreeState:"clean", BuildDate:"2023-01-03T07:28:42Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

Describe this feature

Some machines in my environment are not part of the k8s cluster, but I'd like to access the cluster from them via kubectl.

Extending the certificate's issued IPs myself is a fairly complicated process, and I don't know whether it would have any impact on the current KubeSphere installation.

Describe the solution you'd like

Allow extra certificate IPs to be configured in the kubekey YAML.

Additional information

No response

pixiake commented 1 year ago

--apiserver-cert-extra-sans is already supported.

You can configure spec.kubernetes.apiserverCertExtraSans.

https://github.com/kubesphere/kubekey/blob/ec903fe13dfed73ffd3f72f4beec3123675ce4d0/cmd/kk/apis/kubekey/v1alpha2/kubernetes_types.go#L32
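
For reference, a minimal sketch of where that field lives in the KubeKey cluster config, i.e. the file passed to kk create cluster -f, not the ks-installer ClusterConfiguration (the name, version, and omitted node definitions here are placeholders):

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  kubernetes:
    version: v1.23.10
    # extra SANs baked into the apiserver certificate at install time
    apiserverCertExtraSans:
      - 103.x.x.x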

StringKe commented 1 year ago

I tried modifying clusterconfigurations.ks-install directly, but my local kubectl still reports:

Unable to connect to the server: x509: certificate is valid for 10.233.0.1, 172.16.0.17, 127.0.0.1, 172.16.0.18, 172.16.0.19, 172.16.0.20, 172.16.0.21, 172.16.0.22, 172.16.0.23, 172.16.0.24, not 103.x.x.x
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"installer.kubesphere.io/v1alpha1","kind":"ClusterConfiguration","metadata":{"annotations":{},"labels":{"version":"v3.3.1"},"name":"ks-installer","namespace":"kubesphere-system"},"spec":{"alerting":{"enabled":true},"auditing":{"enabled":true},"authentication":{"jwtSecret":""},"common":{"core":{"console":{"enableMultiLogin":true,"port":30880,"type":"NodePort"}},"es":{"basicAuth":{"enabled":false,"password":"","username":""},"elkPrefix":"logstash","externalElasticsearchHost":"","externalElasticsearchPort":"","logMaxAge":7},"gpu":{"kinds":[{"default":true,"resourceName":"nvidia.com/gpu","resourceType":"GPU"}]},"minio":{"volumeSize":"20Gi"},"monitoring":{"GPUMonitoring":{"enabled":false},"endpoint":"http://prometheus-operated.kubesphere-monitoring-system.svc:9090"},"openldap":{"enabled":false,"volumeSize":"2Gi"},"redis":{"enabled":false,"volumeSize":"2Gi"}},"devops":{"enabled":false,"jenkinsMemoryLim":"8Gi","jenkinsMemoryReq":"4Gi","jenkinsVolumeSize":"8Gi"},"edgeruntime":{"enabled":true,"kubeedge":{"cloudCore":{"cloudHub":{"advertiseAddress":[""]},"service":{"cloudhubHttpsNodePort":"30002","cloudhubNodePort":"30000","cloudhubQuicNodePort":"30001","cloudstreamNodePort":"30003","tunnelNodePort":"30004"}},"enabled":true,"iptables-manager":{"enabled":true,"mode":"external"}}},"etcd":{"endpointIps":"172.16.0.17,172.16.0.18,172.16.0.19","monitoring":false,"port":2379,"tlsEnable":true},"events":{"enabled":true},"logging":{"enabled":true,"logsidecar":{"enabled":true,"replicas":2}},"metrics_server":{"enabled":true},"monitoring":{"gpu":{"nvidia_dcgm_exporter":{"enabled":false}},"storageClass":"","worknode_exporter":{"port":9100}},"multicluster":{"clusterRole":"none"},"network":{"ippool":{"type":"none"},"networkpolicy":{"enabled":true},"topology":{"type":"none"}},"openpitrix":{"store":{"enabled":true}},"persistence":{"storageClass":""},"servicemesh":{"enabled":true,"istio":{"components":{"cni":{"enabled":false},"ingressGateways":[{"enabled":false,"name":"istio-ingressgateway"}]}}},"terminal":{"timeout":600},"zone":"cn"}}
  labels:
    version: v3.3.1
  name: ks-installer
  namespace: kubesphere-system
spec:
  alerting:
    enabled: true
  auditing:
    enabled: true
  authentication:
    jwtSecret: ''
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    es:
      basicAuth:
        enabled: false
        password: ''
        username: ''
      elkPrefix: logstash
      externalElasticsearchHost: ''
      externalElasticsearchPort: ''
      logMaxAge: 7
    gpu:
      kinds:
        - default: true
          resourceName: nvidia.com/gpu
          resourceType: GPU
    minio:
      volumeSize: 20Gi
    monitoring:
      GPUMonitoring:
        enabled: false
      endpoint: 'http://prometheus-operated.kubesphere-monitoring-system.svc:9090'
    openldap:
      enabled: false
      volumeSize: 2Gi
    redis:
      enabled: false
      volumeSize: 2Gi
  devops:
    enabled: false
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  edgeruntime:
    enabled: true
    kubeedge:
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ''
        service:
          cloudhubHttpsNodePort: '30002'
          cloudhubNodePort: '30000'
          cloudhubQuicNodePort: '30001'
          cloudstreamNodePort: '30003'
          tunnelNodePort: '30004'
      enabled: true
      iptables-manager:
        enabled: true
        mode: external
  etcd:
    endpointIps: '172.16.0.17,172.16.0.18,172.16.0.19'
    monitoring: false
    port: 2379
    tlsEnable: true
  events:
    enabled: true
  kubernetes:
    apiserverCertExtraSans:
      - 103.x.x.x
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: true
  monitoring:
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
    storageClass: ''
    worknode_exporter:
      port: 9100
  multicluster:
    clusterRole: none
  network:
    ippool:
      type: calico
    networkpolicy:
      enabled: true
    topology:
      type: weave-scope
  openpitrix:
    store:
      enabled: true
  persistence:
    storageClass: ''
  servicemesh:
    enabled: true
    istio:
      components:
        cni:
          enabled: true
        ingressGateways:
          - enabled: true
            name: istio-ingressgateway
  telemetry_enabled: false
  terminal:
    timeout: 600
  zone: cn
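
As a quick check, the SANs the running apiserver actually presents can be inspected with openssl (the address and port below are assumptions based on the error above):

# print the Subject Alternative Name entries of the serving certificate
openssl s_client -connect 103.x.x.x:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
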
24sama commented 1 year ago

Maybe you need to use kk to delete the cluster, which will clear your environment, and then recreate it.
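
A sketch of that flow with kk's CLI (the config file name is a placeholder; note that deleting the cluster removes its workloads and data):

# WARNING: this wipes the cluster; back up anything you need first
./kk delete cluster -f config-sample.yaml
./kk create cluster -f config-sample.yaml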

StringKe commented 1 year ago

> Maybe you need to use kk to delete the cluster, which will clear your environment, and then recreate it.

How can I keep my data if I delete the cluster?

I haven't configured any storage class; will the data in the default storage be overwritten?

24sama commented 1 year ago

OK, I get it. You are not creating a new cluster but modifying an existing one. In that case you just need to sign a new certificate and update the kubeadm config, like this: https://developer.aliyun.com/article/1094847
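
For completeness, a sketch of that standard kubeadm flow, assuming a single kubeadm-managed control plane and root access (file names are placeholders):

# 1. Export the live kubeadm ClusterConfiguration and add the new SAN
kubectl -n kube-system get configmap kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
# edit kubeadm.yaml so that apiServer.certSANs includes the new address:
#   apiServer:
#     certSANs:
#       - 103.x.x.x

# 2. Move the old serving cert aside and regenerate it with the new SANs
mv /etc/kubernetes/pki/apiserver.{crt,key} /tmp/
kubeadm init phase certs apiserver --config kubeadm.yaml

# 3. Restart kube-apiserver so it picks up the new certificate, e.g. by
#    moving its static pod manifest out of /etc/kubernetes/manifests and back

# 4. Persist the edited config so future kubeadm operations keep the SAN
kubeadm init phase upload-config kubeadm --config kubeadm.yaml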

StringKe commented 1 year ago

If I make that change, and later modify the ks-install configuration of the current cluster, will anything need to be reconfigured?

24sama commented 1 year ago

I don't see why the ClusterConfiguration would need to be modified. The certificates belong to Kubernetes, not to KubeSphere. KubeSphere is just an application running on top of Kubernetes and doesn't care about the cluster certificates.
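
The two similarly named objects are easy to conflate; they can be inspected side by side (resource names as used earlier in this thread):

# kubeadm's config, which is what determines the apiserver certificate SANs
kubectl -n kube-system get configmap kubeadm-config -o yaml
# ks-installer's config, which only drives KubeSphere components
kubectl -n kubesphere-system get clusterconfiguration ks-installer -o yaml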

StringKe commented 1 year ago

No extra IPs were configured when the cluster was created, and now other tools need to access the cluster via a kubeconfig, where the access IP is not among the IPs the certificate allows.

24sama commented 1 year ago

kk only supports adding apiserver-cert-extra-sans when creating a new cluster, as @pixiake said. So if you need your existing cluster to support the extra SAN, you can modify it manually, and that operation doesn't affect KubeSphere.