Case 1: Override the logging level value defined in the command line of the Kubeturbo deployment
1. Set the verbosity to 6 in the Kubeturbo deployment
Name: kubeturbo-kw-114
Namespace: default
CreationTimestamp: Mon, 05 Dec 2022 14:44:51 -0500
Labels: app=kubeturbo-kw-114
Annotations: deployment.kubernetes.io/revision: 62
Selector: app=kubeturbo-kw-114
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=kubeturbo-kw-114
Annotations: kubeturbo.io/monitored: false
Service Account: turbo-user
Containers:
kubeturbo-kw-114-cnt:
Image: kevin0204/kubeturbo:loadcm
Port: <none>
Host Port: <none>
Args:
--turboconfig=/etc/kubeturbo/turbo.config
--v=6 <------------change here
--kubelet-https=true
--kubelet-port=10250
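For reference, the relevant part of the deployment manifest would look roughly like this (container name, image, and args taken from the describe output above; the surrounding fields are abbreviated and this is a sketch, not the full manifest):

```yaml
# Sketch of the Kubeturbo deployment's pod spec -- only the args matter here.
spec:
  template:
    spec:
      serviceAccountName: turbo-user
      containers:
        - name: kubeturbo-kw-114-cnt
          image: kevin0204/kubeturbo:loadcm
          args:
            - --turboconfig=/etc/kubeturbo/turbo.config
            - --v=6                  # command-line verbosity set for this test
            - --kubelet-https=true
            - --kubelet-port=10250
```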
2. Set the logging level to 2 in the configmap of Kubeturbo
[root@api.ocp410kev.cp.fyre.ibm.com ~]# k describe cm turbo-config-kw-114
Name: turbo-config-kw-114
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
turbo-autoreload.config:
----
{
"logging": {
"level": 2 <------here
}
}
turbo.config:
----
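The configmap above could be created from a manifest along these lines (a sketch reconstructed from the describe output; the empty `turbo.config` entry is elided here, as it is in the output):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: turbo-config-kw-114
  namespace: default
data:
  turbo-autoreload.config: |
    {
      "logging": {
        "level": 2
      }
    }
```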
3. Start Kubeturbo and expect to see the logging level set to 2
[root@api.ocp410kev.cp.fyre.ibm.com ~]# k logs -f kubeturbo-kw-114-6f575b7ffb-zggkm |grep -i logging
I0710 14:59:51.679557 1 kubeturbo_builder.go:1045] Logging level is changed from 6 to 2
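The shape of the reload logic can be sketched in Go (Kubeturbo's language): parse the mounted turbo-autoreload.config JSON and extract logging.level, which then overrides the command-line verbosity. This is an illustrative sketch, not Kubeturbo's actual implementation; parseLevel is a hypothetical helper name.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// autoreloadConfig mirrors the turbo-autoreload.config JSON shown above.
type autoreloadConfig struct {
	Logging struct {
		Level int `json:"level"`
	} `json:"logging"`
}

// parseLevel extracts logging.level from the raw config bytes.
// Hypothetical helper, for illustration only.
func parseLevel(raw []byte) (int, error) {
	var cfg autoreloadConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return 0, err
	}
	return cfg.Logging.Level, nil
}

func main() {
	// The same JSON as in the configmap above.
	raw := []byte(`{"logging": {"level": 2}}`)
	lvl, err := parseLevel(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(lvl) // prints 2
}
```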
Case 2: Change the logging level in the configmap and check that the new verbosity is applied in Kubeturbo without a pod restart
1. Edit the Kubeturbo configmap and change the logging level from 2 to 5
[root@api.ocp410kev.cp.fyre.ibm.com ~]# k describe cm turbo-config-kw-114
Name: turbo-config-kw-114
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
turbo-autoreload.config:
----
{
"logging": {
"level": 5
}
}
2. Check the Kubeturbo logs and search for the keyword "logging"
[root@api.ocp410kev.cp.fyre.ibm.com ~]# k logs -f kubeturbo-kw-114-6f575b7ffb-zggkm |grep -i logging
I0710 14:59:51.679557 1 kubeturbo_builder.go:1045] Logging level is changed from 6 to 2
I0710 15:11:05.737864 1 kubeturbo_builder.go:1045] Logging level is changed from 2 to 5 <----here
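The "changed from X to Y" message implies the watcher compares the previously applied level with the newly parsed one before switching. A minimal sketch of that comparison (hypothetical function name, not the actual kubeturbo_builder.go code; the real implementation also updates the glog verbosity):

```go
package main

import "fmt"

// applyLevel compares the current verbosity with the newly read one and
// returns the new level plus the log message emitted when they differ
// (empty message if unchanged). Illustrative sketch only.
func applyLevel(current, next int) (int, string) {
	if next == current {
		return current, ""
	}
	return next, fmt.Sprintf("Logging level is changed from %d to %d", current, next)
}

func main() {
	level := 2
	var msg string
	level, msg = applyLevel(level, 5)
	fmt.Println(msg)   // prints "Logging level is changed from 2 to 5"
	fmt.Println(level) // prints 5
}
```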
3. Run a discovery and check that similar output appears in the logs; these log lines are emitted at verbosity 5
[root@api.ocp410kev.cp.fyre.ibm.com ~]# k logs -f kubeturbo-kw-114-6f575b7ffb-zggkm |grep -i "Adding"
I0710 15:17:28.052107 1 node_entity_dto_builder.go:253] Adding label commodity for Node worker2.ocp410kev.cp.fyre.ibm.com with key : kubernetes.io/hostname=worker2.ocp410kev.cp.fyre.ibm.com
I0710 15:17:28.052118 1 node_entity_dto_builder.go:253] Adding label commodity for Node worker2.ocp410kev.cp.fyre.ibm.com with key : kubernetes.io/os=linux
I0710 15:17:28.052122 1 node_entity_dto_builder.go:253] Adding label commodity for Node worker2.ocp410kev.cp.fyre.ibm.com with key : node-role.kubernetes.io/worker=
...
Case 3: Deploy Kubeturbo through helm and check the configmap
1. Deploy Kubeturbo through the helm chart in the test1 namespace and specify the logging level as 6
2. Check the pod status and the configmap
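Assuming the Kubeturbo helm chart exposes the verbosity through its values (the exact key below is an assumption, not taken from the chart; check the chart's values.yaml), the install could be driven by a values fragment like this:

```yaml
# values-sketch.yaml -- hypothetical values fragment for this test.
# Installed with something like:
#   helm install kubeturbo <chart> -n test1 -f values-sketch.yaml
args:
  logginglevel: 6
```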
Checklist
These are the items that must be done by the developer and by reviewers before the change is ready to merge. Please strike out any items that are not applicable, but don't delete them.
- [ ] Developer Checks
  - [x] Full build with unit tests and fmt and vet checks
  - [ ] Unit tests added / updated
  - [x] No unlicensed images, no third-party code (such as from StackOverflow)
  - [ ] Integration tests added / updated
  - [x] Manual testing done (and described)
  - [ ] Product sweep run and passed
  - [ ] Developer wiki updated (and linked to this description) <--- there is a separate task for this
- [ ] Reviewer Checks
  - [ ] Merge request description clear and understandable
  - [ ] Developer checklist items complete
  - [ ] Functional code review (how is the code written)
  - [ ] Architectural review (does the code try to do the right thing, in the right way)
It is strange that we need an extra file called dynamic to do dynamic config.
This is redundant. I understand we already debated this but it still looks weird
Intent
To make Kubeturbo support dynamic logging level configuration
Background
https://jsw.ibm.com/browse/TRB-42580
Testing
See the three test cases described in detail above.
Audience
(@ mention any review/... groups or people that should be aware of this merge request)