kubesphere / ks-installer

Install KubeSphere on existing Kubernetes cluster
https://kubesphere.io
Apache License 2.0

audit repeat installation error #1808

Closed: chj9 closed this issue 3 years ago

chj9 commented 3 years ago

Describe the Bug

Auditing fails when it is installed repeatedly; every configuration is the default.

My KubeSphere was upgraded from 3.1.1 to 3.2.0. The logging function worked normally in version 3.1.1.

Versions Used

KubeSphere: 3.2.0
Kubernetes: 1.21.5

Environment

How many nodes and their hardware configuration: CentOS 7.9; 3 masters: 8 CPU / 32 GB each; 3 workers: 16 CPU / 64 GB each

How To Reproduce

  1. Delete the auditing operator.
  2. Set auditing to false, wait for the restart operation to complete, and then set auditing back to true.
  3. kubectl edit cc -n kubesphere-system ks-installer, then delete the status of the component you want to reinstall (screenshot).
  4. kubectl rollout restart deploy -n kubesphere-system ks-installer (the full sequence is sketched below).
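
The same sequence as plain commands (a sketch; both kubectl edit steps are interactive):

# 1. Delete the auditing operator.
# 2. Toggle auditing off and back on, restarting the installer in between:
kubectl edit cc -n kubesphere-system ks-installer        # set spec.auditing.enabled: false, then later back to true
# 3. Remove the recorded status for the component to reinstall:
kubectl edit cc -n kubesphere-system ks-installer        # delete the matching entry under status
# 4. Restart the installer so it re-runs the playbooks:
kubectl rollout restart deploy -n kubesphere-system ks-installer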

fluentbit-operator is operating normally (screenshot). When I restarted the installation, the following error was reported:

{
  "counter": 36,
  "created": "2021-11-10T06:12:24.065278",
  "end_line": 35,
  "event": "runner_on_failed",
  "event_data": {
    "duration": 9.306744,
    "end": "2021-11-10T06:12:24.065093",
    "event_loop": "items",
    "host": "localhost",
    "ignore_errors": null,
    "play": "localhost",
    "play_pattern": "localhost",
    "play_uuid": "4ec8d09c-8b94-254e-f6c6-000000000005",
    "playbook": "/kubesphere/playbooks/auditing.yaml",
    "playbook_uuid": "896120f9-f577-495a-ae54-7eed5be7b1eb",
    "remote_addr": "127.0.0.1",
    "res": {
      "changed": false,
      "msg": "All items completed",
      "results": [
        {
          "_ansible_item_label": {
            "file": "custom-output-elasticsearch-auditing.yaml",
            "name": "custom-output-elasticsearch-auditing"
          },
          "_ansible_no_log": false,
          "ansible_loop_var": "item",
          "changed": false,
          "checksum": "14c4f2e9d36e2880b7990443cddf63254da4c90d",
          "diff": [],
          "failed": true,
          "invocation": {
            "module_args": {
              "_original_basename": "custom-output-elasticsearch-auditing.yaml.j2",
              "attributes": null,
              "backup": false,
              "checksum": "14c4f2e9d36e2880b7990443cddf63254da4c90d",
              "content": null,
              "delimiter": null,
              "dest": "/kubesphere/kubesphere/fluentbit-operator/custom-output-elasticsearch-auditing.yaml",
              "directory_mode": null,
              "follow": false,
              "force": true,
              "group": null,
              "local_follow": null,
              "mode": null,
              "owner": null,
              "regexp": null,
              "remote_src": null,
              "selevel": null,
              "serole": null,
              "setype": null,
              "seuser": null,
              "src": "/home/kubesphere/.ansible/tmp/ansible-tmp-1636524735.0714512-6927-281084864357182/source",
              "unsafe_writes": null,
              "validate": null
            }
          },
          "item": {
            "file": "custom-output-elasticsearch-auditing.yaml",
            "name": "custom-output-elasticsearch-auditing"
          },
          "msg": "Destination directory /kubesphere/kubesphere/fluentbit-operator does not exist"
        },
        {
          "_ansible_item_label": {
            "file": "custom-input-auditing.yaml",
            "name": "custom-input-auditing"
          },
          "_ansible_no_log": false,
          "ansible_loop_var": "item",
          "changed": false,
          "checksum": "3abaf56aeec818acedaebf7e0d415e87582c361b",
          "diff": [],
          "failed": true,
          "invocation": {
            "module_args": {
              "_original_basename": "custom-input-auditing.yaml.j2",
              "attributes": null,
              "backup": false,
              "checksum": "3abaf56aeec818acedaebf7e0d415e87582c361b",
              "content": null,
              "delimiter": null,
              "dest": "/kubesphere/kubesphere/fluentbit-operator/custom-input-auditing.yaml",
              "directory_mode": null,
              "follow": false,
              "force": true,
              "group": null,
              "local_follow": null,
              "mode": null,
              "owner": null,
              "regexp": null,
              "remote_src": null,
              "selevel": null,
              "serole": null,
              "setype": null,
              "seuser": null,
              "src": "/home/kubesphere/.ansible/tmp/ansible-tmp-1636524738.077275-6927-189758065821052/source",
              "unsafe_writes": null,
              "validate": null
            }
          },
          "item": {
            "file": "custom-input-auditing.yaml",
            "name": "custom-input-auditing"
          },
          "msg": "Destination directory /kubesphere/kubesphere/fluentbit-operator does not exist"
        },
        {
          "_ansible_item_label": {
            "file": "custom-filter-auditing.yaml",
            "name": "custom-filter-auditing"
          },
          "_ansible_no_log": false,
          "ansible_loop_var": "item",
          "changed": false,
          "checksum": "26d91db03ee08ea136d795f8e871949b46ee5dcd",
          "diff": [],
          "failed": true,
          "invocation": {
            "module_args": {
              "_original_basename": "custom-filter-auditing.yaml.j2",
              "attributes": null,
              "backup": false,
              "checksum": "26d91db03ee08ea136d795f8e871949b46ee5dcd",
              "content": null,
              "delimiter": null,
              "dest": "/kubesphere/kubesphere/fluentbit-operator/custom-filter-auditing.yaml",
              "directory_mode": null,
              "follow": false,
              "force": true,
              "group": null,
              "local_follow": null,
              "mode": null,
              "owner": null,
              "regexp": null,
              "remote_src": null,
              "selevel": null,
              "serole": null,
              "setype": null,
              "seuser": null,
              "src": "/home/kubesphere/.ansible/tmp/ansible-tmp-1636524741.0712361-6927-128821641657607/source",
              "unsafe_writes": null,
              "validate": null
            }
          },
          "item": {
            "file": "custom-filter-auditing.yaml",
            "name": "custom-filter-auditing"
          },
          "msg": "Destination directory /kubesphere/kubesphere/fluentbit-operator does not exist"
        }
      ]
    },
    "role": "ks-auditing",
    "start": "2021-11-10T06:12:14.758349",
    "task": "ks-auditing | Creating manifests",
    "task_action": "template",
    "task_args": "",
    "task_path": "/kubesphere/installer/roles/ks-auditing/tasks/fluentbit-operator.yaml:1",
    "task_uuid": "4ec8d09c-8b94-254e-f6c6-00000000001d",
    "uuid": "e7a02aa1-51e3-4204-81a3-4dd7d969b9c3"
  },
  "parent_uuid": "4ec8d09c-8b94-254e-f6c6-00000000001d",
  "pid": 6370,
  "runner_ident": "auditing",
  "start_line": 35,
  "stdout": "",
  "uuid": "e7a02aa1-51e3-4204-81a3-4dd7d969b9c3"
}
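
Every failed item carries the same msg: the template task's destination directory /kubesphere/kubesphere/fluentbit-operator does not exist inside the installer pod. A quick way to confirm that (a sketch, assuming a kubectl new enough to resolve deploy/ names in exec):

# Confirm whether the destination directory from the error exists
# inside the ks-installer pod:
kubectl -n kubesphere-system exec deploy/ks-installer -- \
  ls -ld /kubesphere/kubesphere/fluentbit-operator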

This is my ks-installer config:

apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  labels:
    version: v3.2.0
  name: ks-installer
  namespace: kubesphere-system
spec:
  alerting:
    enabled: true
  auditing:
    enabled: true
  authentication:
    jwtSecret: ''
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
    es:
      basicAuth:
        enabled: false
        password: ''
        username: ''
      data:
        volumeSize: 20Gi
      elkPrefix: k8s
      externalElasticsearchPort: '9200'
      externalElasticsearchUrl: atai-testing-eck-cluster-es-http.elastic-system
      logMaxAge: 3
      master:
        volumeSize: 4Gi
    minio:
      volumeSize: 10Gi
    monitoring:
      endpoint: 'http://prometheus-operated.kubesphere-monitoring-system.svc:9090'
    openldap:
      enabled: false
      volumeSize: 2Gi
    redis:
      enabled: true
      volumeSize: 2Gi
  devops:
    enabled: false
    jenkinsJavaOpts_MaxRAM: 2g
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
  etcd:
    endpointIps: localhost
    monitoring: false
    port: 2379
    tlsEnable: true
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  kubeedge:
    cloudCore:
      cloudHub:
        advertiseAddress:
          - ''
        nodeLimit: '100'
      cloudhubHttpsPort: '10002'
      cloudhubPort: '10000'
      cloudhubQuicPort: '10001'
      cloudstreamPort: '10003'
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      service:
        cloudhubHttpsNodePort: '30002'
        cloudhubNodePort: '30000'
        cloudhubQuicNodePort: '30001'
        cloudstreamNodePort: '30003'
        tunnelNodePort: '30004'
      tolerations: []
      tunnelPort: '10004'
    edgeWatcher:
      edgeWatcherAgent:
        nodeSelector:
          node-role.kubernetes.io/worker: ''
        tolerations: []
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      tolerations: []
    enabled: false
  local_registry: ''
  logging:
    containerruntime: docker
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: true
  monitoring:
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 30Gi
    storageClass: local-disk
  multicluster:
    clusterRole: none
  network:
    ippool:
      type: none
    networkpolicy:
      enabled: false
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  persistence:
    storageClass: local-disk
  servicemesh:
    enabled: true
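
To dump the live object for comparison, including the status section discussed below, something like this should work:

# Print the full ClusterConfiguration, spec and status included:
kubectl -n kubesphere-system get cc ks-installer -o yaml
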
wenchajun commented 3 years ago

Because the auditing operator is installed by Helm, you need to do one more step: helm uninstall kube-auditing -n kubesphere-logging-system. Then kubectl edit cc -n kubesphere-system ks-installer and kubectl rollout restart deploy -n kubesphere-system ks-installer.
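
Spelled out as a command sequence (a sketch of the steps just described; the edit step is interactive):

helm uninstall kube-auditing -n kubesphere-logging-system
kubectl edit cc -n kubesphere-system ks-installer                  # remove the auditing entry (clarified in the follow-up below)
kubectl rollout restart deploy -n kubesphere-system ks-installer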

chj9 commented 3 years ago

> Because the auditing operator is installed by Helm, you need to do one more step: helm uninstall kube-auditing -n kubesphere-logging-system. Then kubectl edit cc -n kubesphere-system ks-installer and kubectl rollout restart deploy -n kubesphere-system ks-installer.

That didn't help; the problem remains. My operation process is as follows:

  1. helm uninstall kube-auditing -n kubesphere-logging-system (screenshot)
  2. kubectl edit cc -n kubesphere-system ks-installer and set auditing.enabled to false
  3. kubectl rollout restart deploy -n kubesphere-system ks-installer
  4. Wait for the previous step to complete, then kubectl edit cc -n kubesphere-system ks-installer and set auditing.enabled back to true
  5. kubectl rollout restart deploy -n kubesphere-system ks-installer; this reported the same error

wenchajun commented 3 years ago

> > Because the auditing operator is installed by Helm, you need to do one more step: helm uninstall kube-auditing -n kubesphere-logging-system. Then kubectl edit cc -n kubesphere-system ks-installer and kubectl rollout restart deploy -n kubesphere-system ks-installer.
>
> That didn't help; the problem remains. My operation process is as follows:
>
>   1. helm uninstall kube-auditing -n kubesphere-logging-system (screenshot)
>   2. kubectl edit cc -n kubesphere-system ks-installer and set auditing.enabled to false
>   3. kubectl rollout restart deploy -n kubesphere-system ks-installer
>   4. Wait for the previous step to complete, then kubectl edit cc -n kubesphere-system ks-installer and set auditing.enabled back to true
>   5. kubectl rollout restart deploy -n kubesphere-system ks-installer; this reported the same error

You don't need to set auditing to false. You just need these steps: helm uninstall kube-auditing -n kubesphere-logging-system, then kubectl edit cc -n kubesphere-system ks-installer and delete status.auditing, then kubectl rollout restart deploy -n kubesphere-system ks-installer.
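
A non-interactive equivalent of the edit step, assuming the install state is recorded under status.auditing (my reading of the screenshots, not a documented interface):

helm uninstall kube-auditing -n kubesphere-logging-system
# Drop the recorded auditing status so ks-installer re-runs that role:
kubectl -n kubesphere-system patch cc ks-installer --type=json \
  -p='[{"op": "remove", "path": "/status/auditing"}]'
kubectl rollout restart deploy -n kubesphere-system ks-installer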

chj9 commented 3 years ago

> > > Because the auditing operator is installed by Helm, you need to do one more step: helm uninstall kube-auditing -n kubesphere-logging-system. Then kubectl edit cc -n kubesphere-system ks-installer and kubectl rollout restart deploy -n kubesphere-system ks-installer.
> >
> > That didn't help; the problem remains. My operation process is as follows:
> >
> >   1. helm uninstall kube-auditing -n kubesphere-logging-system (screenshot)
> >   2. kubectl edit cc -n kubesphere-system ks-installer and set auditing.enabled to false
> >   3. kubectl rollout restart deploy -n kubesphere-system ks-installer
> >   4. Wait for the previous step to complete, then kubectl edit cc -n kubesphere-system ks-installer and set auditing.enabled back to true
> >   5. kubectl rollout restart deploy -n kubesphere-system ks-installer; this reported the same error
>
> You don't need to set auditing to false. You just need these steps: helm uninstall kube-auditing -n kubesphere-logging-system, then kubectl edit cc -n kubesphere-system ks-installer and delete status.auditing, then kubectl rollout restart deploy -n kubesphere-system ks-installer.

I have performed the operation you mentioned and deleted the status, but it still failed. I think you should try to reproduce this error scenario yourself.

wenchajun commented 3 years ago

(screenshot) I think you should delete this field.

chj9 commented 3 years ago

> (screenshot) I think you should delete this field.

(screenshot) Yes, delete here.