Closed: gabemontero closed this 6 months ago
I moved the configuration to the TektonConfig and fixed the triggers logs as well https://github.com/openshift-pipelines/pipeline-service/pull/972 . My PR was ready a few hours ago, but clusterbot gave me trouble, so it took a while to verify. BTW, without the configuration in the TektonConfig, the triggers configmap and logs had the wrong keys ("message" vs "msg"). For pipelines, the configmap came out correctly, but the logs had the wrong key until I deleted the pods and they were recreated.
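For reference, a minimal sketch of the encoderConfig keys we want the operator to render. Only "msg" for messageKey is stated in this thread; "ts" for timeKey is an assumption based on zap's production defaults:

```json
{
  "encoderConfig": {
    "timeKey": "ts",
    "messageKey": "msg"
  }
}
```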
I tried what you did again and got a different, and IMO expected, result:
-> oc describe cm config-logging
Name:         config-logging
Namespace:    openshift-pipelines
Labels:       app.kubernetes.io/component=resolvers
              app.kubernetes.io/instance=default
              app.kubernetes.io/part-of=tekton-pipelines
              operator.tekton.dev/operand-name=tektoncd-pipelines
Annotations:  operator.tekton.dev/last-applied-hash: 32c5bf90cabcd64ea3ce3d8c3f1bab72105b4b913d9537467858c604d7e716df

Data
====
loglevel.webhook:
----
info
zap-logger-config:
----
{
  "level": "info",
  "development": false,
  "sampling": {
    "initial": 100,
    "thereafter": 100
  },
  "outputPaths": ["stdout"],
  "errorOutputPaths": ["stderr"],
  "encoding": "json",
  "encoderConfig": {
    "timeKey": "timestamp",
    "levelKey": "severity",
    "nameKey": "logger",
    "callerKey": "caller",
    "messageKey": "message",
    "stacktraceKey": "stacktrace",
    "lineEnding": "",
    "levelEncoder": "",
    "timeEncoder": "iso8601",
    "durationEncoder": "",
    "callerEncoder": ""
  }
}
loglevel.controller:
----
info

BinaryData
====

Events:  <none>
Same core upstream stuff, minus the fact you used describe vs. oc get yaml.
Whether we use your PR or mine comes down to the question I asked in your PR.
My point is this PR does not work. The values in the configmap I posted are not the values we want. For example, the value of "messageKey" I got while testing this PR was "message", while the value we want is "msg". The same goes for "timeKey". Also, checking only the configmap is not enough: I have encountered issues where the configmap has the right values, but they are not used by the pods until a pod restart. Lastly, my PR also fixes the logs for the triggers pods, which was missed originally.
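To make the mismatch concrete, here is an illustrative sketch (not part of either PR) of diffing the rendered encoderConfig against the keys we want. "msg" for messageKey is what this thread calls for; "ts" for timeKey is assumed from zap's production defaults:

```python
import json

# The encoderConfig the operator actually rendered (trimmed to the two keys
# under discussion), versus the values we want.
rendered = json.loads('{"encoderConfig": {"timeKey": "timestamp", "messageKey": "message"}}')
desired = {"timeKey": "ts", "messageKey": "msg"}

def mismatched_keys(cfg, want):
    """Return the encoderConfig entries whose values differ from what we want."""
    enc = cfg.get("encoderConfig", {})
    return {k: enc.get(k) for k in want if enc.get(k) != want[k]}

print(mismatched_keys(rendered, desired))
# {'timeKey': 'timestamp', 'messageKey': 'message'}
```

An empty result would mean the configmap is correct, though per the thread the pods may still log the old keys until they are restarted.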
closing this PR
After running dev_setup.sh with this branch, here is the config map in question:
which matches upstream, and matches the zap ADR-6 changes that @ramessesii2 originally did with https://github.com/openshift-pipelines/pipeline-service/pull/634
This should fix the GitOps race condition we are seeing in stage, and now prod, where the operator is in a battle to reconcile this config map with what we were trying to set here in this repo.