kubesphere / kubekey

Install Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, along with related cloud-native add-ons. Supports all-in-one, multi-node, and HA deployments 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

Problem with a custom audit webhook configuration #1347

Open · LuckyT0mat0 opened this issue 2 years ago

LuckyT0mat0 commented 2 years ago

Which version of KubeKey has the issue?

2.2.0

What is your OS environment?

Ubuntu 20.04

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: luckyk8smaster, address: 192.168.3.187, internalAddress: 192.168.3.187, user: root, password: "Asd123qwe@@@"}
  roleGroups:
    etcd:
    - luckyk8smaster
    control-plane: 
    - luckyk8smaster
    worker:
    - luckyk8smaster
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.luckytomato.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.20.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.50.64.0/18
    kubeServiceCIDR: 10.50.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: true
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
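
For reference, KubeKey builds a cluster from a config file like this one via the kk CLI (assuming the file is saved as config-sample.yaml):

./kk create cluster -f config-sample.yaml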

A clear and concise description of what happened.

The installation itself succeeded, but I have failed to enable the audit webhook feature despite trying for a long time. kube-apiserver.yaml:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.3.187:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.3.187
    - --allow-privileged=true
    #- --audit-webhook-initial-backoff=5
    #- --audit-webhook-mode=batch
    #- --audit-webhook-batch-buffer-size=5
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-webhook-config-file=/etc/kubernetes/audit-webhook-kubeconfig
    - --authorization-mode=Node,RBAC
    - --bind-address=0.0.0.0
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem
    - --etcd-certfile=/etc/ssl/etcd/ssl/node-luckyk8smaster.pem
    - --etcd-keyfile=/etc/ssl/etcd/ssl/node-luckyk8smaster-key.pem
    - --etcd-servers=https://192.168.3.187:2379
    - --feature-gates=RotateKubeletServerCertificate=true,TTLAfterFinished=true,ExpandCSIVolumes=true
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.50.0.0/18
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: kubesphere/kube-apiserver:v1.20.10
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.3.187
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 192.168.3.187
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.3.187
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/ssl/etcd/ssl
      name: etcd-certs-0
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/ssl/etcd/ssl
      type: DirectoryOrCreate
    name: etcd-certs-0
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
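
A note for anyone hitting the same symptom: the manifest above mounts /etc/kubernetes/pki but not /etc/kubernetes itself, so the two audit files referenced by the --audit-* flags (listed below) would not be visible inside the kube-apiserver container, which is a likely reason for the process exiting at startup. A sketch of the additional mounts that would expose them, following the kubeadm hostPath convention already used in this manifest (volume names are illustrative):

    # under .spec.containers[0].volumeMounts
    - mountPath: /etc/kubernetes/audit-policy.yaml
      name: audit-policy
      readOnly: true
    - mountPath: /etc/kubernetes/audit-webhook-kubeconfig
      name: audit-webhook-kubeconfig
      readOnly: true

    # under .spec.volumes
    - hostPath:
        path: /etc/kubernetes/audit-policy.yaml
        type: File
      name: audit-policy
    - hostPath:
        path: /etc/kubernetes/audit-webhook-kubeconfig
        type: File
      name: audit-webhook-kubeconfig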

audit-policy.yaml:

# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata

audit-webhook-kubeconfig:

apiVersion: v1
kind: Config
clusters:
- name: falco
  cluster:
    server: http://10.68.109.189:8765/k8s-audit
contexts:
- context:
    cluster: falco
    user: ""
  name: default-context
current-context: default-context
preferences: {}
users: []
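
A quick way to sanity-check that the webhook backend itself is reachable from the node (the endpoint is taken from the kubeconfig above; the empty JSON body is only a probe):

curl -s -o /dev/null -w "%{http_code}\n" -X POST -H "Content-Type: application/json" -d '{}' http://10.68.109.189:8765/k8s-audit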

After applying the configuration above, I ran sudo systemctl daemon-reload && systemctl restart kubelet.service, and then the errors started flooding in:

root@luckyk8smaster:~# kubectl get all -n kube-system
The connection to the server lb.luckytomato.local:6443 was refused - did you specify the right host or port?

Relevant log output

root@luckyk8smaster:~# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Mon 2022-06-20 13:06:51 UTC; 13min ago
       Docs: http://kubernetes.io/docs/
   Main PID: 12598 (kubelet)
      Tasks: 18 (limit: 9830)
     Memory: 54.4M
     CGroup: /system.slice/kubelet.service
             └─12598 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=syste>

Jun 20 13:20:05 luckyk8smaster kubelet[12598]: E0620 13:20:05.051554   12598 kubelet.go:2263] node "luckyk8smaster" not found
Jun 20 13:20:05 luckyk8smaster kubelet[12598]: E0620 13:20:05.151652   12598 kubelet.go:2263] node "luckyk8smaster" not found
Jun 20 13:20:05 luckyk8smaster kubelet[12598]: E0620 13:20:05.251840   12598 kubelet.go:2263] node "luckyk8smaster" not found
Jun 20 13:20:05 luckyk8smaster kubelet[12598]: E0620 13:20:05.352049   12598 kubelet.go:2263] node "luckyk8smaster" not found
Jun 20 13:20:05 luckyk8smaster kubelet[12598]: I0620 13:20:05.370604   12598 scope.go:111] [topologymanager] RemoveContainer - Container ID: a157f709c052ccc78f188b480974a7009703e18310d209d910a72971d363bdde
Jun 20 13:20:05 luckyk8smaster kubelet[12598]: E0620 13:20:05.371499   12598 pod_workers.go:191] Error syncing pod 3f26ae69fb116f334ef4de5b9a5b849b ("kube-apiserver-luckyk8smaster_kube-system(3f26ae69fb116f3>
Jun 20 13:20:05 luckyk8smaster kubelet[12598]: E0620 13:20:05.452145   12598 kubelet.go:2263] node "luckyk8smaster" not found
Jun 20 13:20:05 luckyk8smaster kubelet[12598]: E0620 13:20:05.552286   12598 kubelet.go:2263] node "luckyk8smaster" not found
Jun 20 13:20:05 luckyk8smaster kubelet[12598]: E0620 13:20:05.652478   12598 kubelet.go:2263] node "luckyk8smaster" not found
Jun 20 13:20:05 luckyk8smaster kubelet[12598]: E0620 13:20:05.752680   12598 kubelet.go:2263] node "luckyk8smaster" not found

Additional information

None.

24sama commented 2 years ago

Hi @LuckyT0mat0 Can you paste some logs of kube-apiserver? You can use docker logs or cat /var/log/pods/xxxxx.
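
Since the config above sets containerManager: containerd, docker logs is likely not available on this node; crictl or the on-disk pod log files can be used instead (the container ID and exact log path will differ per machine):

crictl ps -a | grep kube-apiserver
crictl logs <container-id>
cat /var/log/pods/kube-system_kube-apiserver-*/kube-apiserver/*.log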

xiaods commented 2 years ago

@LuckyT0mat0 We need more logs on your case.