turbonomic / kubeturbo

[TRB-42587]: Support dynamic logging level configuration #900

Closed: kevinwangcn closed this 1 year ago

kevinwangcn commented 1 year ago

Intent

To make Kubeturbo support dynamic logging level configuration

Background

https://jsw.ibm.com/browse/TRB-42580
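
For orientation before the test cases: a minimal sketch of how this kind of reload can work in Go, assuming klog-style verbosity flags. The package, struct, and helper names are illustrative, not the actual kubeturbo code.

// Hypothetical sketch, not the kubeturbo implementation.
package logreload

import (
	"encoding/json"
	"flag"
	"fmt"
	"os"

	"k8s.io/klog/v2"
)

// autoReloadConfig mirrors the JSON shape of turbo-autoreload.config:
// {"logging": {"level": 2}}
type autoReloadConfig struct {
	Logging struct {
		Level int `json:"level"`
	} `json:"logging"`
}

// applyLoggingLevel reads the autoreload file and, when the configured
// level differs from the current one, updates the verbosity in place.
// It assumes klog.InitFlags(nil) ran at startup, so the "v" flag is
// registered on the default flag set and can be Set at runtime.
func applyLoggingLevel(path string, current int) (int, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return current, err
	}
	var cfg autoReloadConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return current, err
	}
	if cfg.Logging.Level == current {
		return current, nil
	}
	if err := flag.Set("v", fmt.Sprint(cfg.Logging.Level)); err != nil {
		return current, err
	}
	klog.Infof("Logging level is changed from %d to %d", current, cfg.Logging.Level)
	return cfg.Logging.Level, nil
}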

Testing

Case 1: Override the logging level value defined in the command line of the Kubeturbo deployment

1. Set the verbosity to 6 in the Kubeturbo deployment

Name:                   kubeturbo-kw-114
Namespace:              default
CreationTimestamp:      Mon, 05 Dec 2022 14:44:51 -0500
Labels:                 app=kubeturbo-kw-114
Annotations:            deployment.kubernetes.io/revision: 62
Selector:               app=kubeturbo-kw-114
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=kubeturbo-kw-114
  Annotations:      kubeturbo.io/monitored: false
  Service Account:  turbo-user
  Containers:
   kubeturbo-kw-114-cnt:
    Image:      kevin0204/kubeturbo:loadcm
    Port:       <none>
    Host Port:  <none>
    Args:
      --turboconfig=/etc/kubeturbo/turbo.config
      --v=6 <------------change here
      --kubelet-https=true
      --kubelet-port=10250

2. Set the logging level to 2 in the configmap of Kubeturbo

[root@api.ocp410kev.cp.fyre.ibm.com ~]# k describe cm turbo-config-kw-114 
Name:         turbo-config-kw-114
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
turbo-autoreload.config:
----
{
    "logging": {
       "level": 2 <------here
    }
}
turbo.config:
----

3. Start Kubeturbo and expect to see the logging level set to 2

[root@api.ocp410kev.cp.fyre.ibm.com ~]# k logs -f kubeturbo-kw-114-6f575b7ffb-zggkm |grep -i logging
I0710 14:59:51.679557       1 kubeturbo_builder.go:1045] Logging level is changed from 6 to 2
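
The file path below is an assumption (the configmap appears to be mounted alongside turbo.config, per --turboconfig=/etc/kubeturbo/turbo.config), and applyLoggingLevel is the hypothetical helper from the sketch above; the point is only the startup ordering that produces the log line:

// Hypothetical startup ordering: the --v value is the baseline and the
// autoreload file overrides it, yielding "changed from 6 to 2" above.
level := 6 // parsed from the --v=6 command-line flag
level, _ = applyLoggingLevel("/etc/kubeturbo/turbo-autoreload.config", level)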

Case 2: Change the logging level in the configmap and check that the new verbosity is applied in Kubeturbo without a pod restart

1. Edit the Kubeturbo configmap and change the logging level from 2 to 5

[root@api.ocp410kev.cp.fyre.ibm.com ~]# k describe cm turbo-config-kw-114 
Name:         turbo-config-kw-114
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
turbo-autoreload.config:
----
{
    "logging": {
       "level": 5
    }
}

2. Check Kubeturbo and search its logs for the keyword "logging"

[root@api.ocp410kev.cp.fyre.ibm.com ~]# k logs -f kubeturbo-kw-114-6f575b7ffb-zggkm |grep -i logging
I0710 14:59:51.679557       1 kubeturbo_builder.go:1045] Logging level is changed from 6 to 2
I0710 15:11:05.737864       1 kubeturbo_builder.go:1045] Logging level is changed from 2 to 5 <----here
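
The reload without a restart implies the mounted file is being watched; here is a minimal polling sketch building on the hypothetical applyLoggingLevel above (the interval is arbitrary, and "time" would join the imports):

// Hypothetical watch loop. Kubernetes propagates configmap edits into
// the mounted file after a short delay, so periodic re-application is
// enough; an fsnotify watch on the mount directory would also work.
func watchLoggingLevel(path string, initial int, stop <-chan struct{}) {
	level := initial
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			if next, err := applyLoggingLevel(path, level); err == nil {
				level = next
			}
		}
	}
}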

3. Trigger a discovery and check for output similar to the following in the logs; these entries are logged at verbosity 5

[root@api.ocp410kev.cp.fyre.ibm.com ~]# k logs -f kubeturbo-kw-114-6f575b7ffb-zggkm |grep -i "Adding"
I0710 15:17:28.052107       1 node_entity_dto_builder.go:253] Adding label commodity for Node worker2.ocp410kev.cp.fyre.ibm.com with key : kubernetes.io/hostname=worker2.ocp410kev.cp.fyre.ibm.com
I0710 15:17:28.052118       1 node_entity_dto_builder.go:253] Adding label commodity for Node worker2.ocp410kev.cp.fyre.ibm.com with key : kubernetes.io/os=linux
I0710 15:17:28.052122       1 node_entity_dto_builder.go:253] Adding label commodity for Node worker2.ocp410kev.cp.fyre.ibm.com with key : node-role.kubernetes.io/worker=
...
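
These lines only show up after the raise to 5 because they sit behind a verbosity guard; a hedged example of what such a call site might look like (the exact guard level in node_entity_dto_builder.go is an assumption):

// Illustrative verbosity-gated call site: with klog/glog, this line is
// emitted only while the effective level is >= 5.
klog.V(5).Infof("Adding label commodity for Node %s with key : %s", nodeName, labelKey)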

Case 3: Deploy Kubeturbo through Helm and check the configmap

1. Deploy Kubeturbo through the Helm chart in the test1 namespace and set the logging level to 6

helm install --debug kubeturbo-test-5 ./kubeturbo  --set serverMeta.turboServer=http://9.46.1.1 --set image.tag=loadcm --set image.repository=kevin0204/kubeturbo --set logging.level=6 --set targetConfig.targetName=helm-test-5  --set roleName=turbo-cluster-admin --set roleBinding=helm-test5 --namespace test1

2. Check the pod status and the configmap

[kevinw@Kevins-MacBook-Pro.local deploy]$ k get pods -n test1
NAME                               READY   STATUS    RESTARTS   AGE
kubeturbo-test-5-fbfcf8c67-7rg24   1/1     Running   0          8m18s
[kevinw@Kevins-MacBook-Pro.local deploy]$ k get cm turbo-config-kubeturbo-test-5 -n test1 -oyaml
apiVersion: v1
data:
  turbo-autoreload.config: |-
    {
      "logging": {
        "level": 6  <-------here
      }
    }
  turbo.config: |-
    {
      "communicationConfig": {
        "serverMeta": {
          "version": "8.0",
          "turboServer": "http://9.46.1.1"
        },
        "restAPIConfig": {
          "opsManagerUserName": "Turbo_username",
          "opsManagerPassword": "Turbo_password"
        },
        "sdkProtocolConfig": {
           "registrationTimeoutSec": 300,
           "restartOnRegistrationTimeout": false
        }
      },
      "HANodeConfig": {
        "nodeRoles": ["master"]
      },
      "targetConfig": {
        "targetName": "helm-test-5"
      }
    }
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: kubeturbo-test-5
    meta.helm.sh/release-namespace: test1
  creationTimestamp: "2023-07-17T19:36:00Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: turbo-config-kubeturbo-test-5
  namespace: test1
  resourceVersion: "119698893"
  uid: 23c0f204-55f0-4f33-9156-e95d4fdb3d26

Checklist

These are the items that must be done by the developer and by reviewers before the change is ready to merge. Please strike out any items that are not applicable, but don't delete them.

Audience

(@ mention any review/... groups or people that should be aware of this merge request)

tian-ma commented 1 year ago

It is strange that we need an extra file called dynamic to do dynamic config. This is redundant. I understand we already debated this, but it still looks weird.