Closed: mikekamornikov closed this issue 3 years ago.
I thought I managed to make it work as expected after these changes:
logging:
  valueFrom:
    configMapKeyRef:
      key: log4j.properties
      name: custom-connect-log4j
apiVersion: v1
data:
  log4j.properties: |
    name = KCConfig
    monitorInterval = 30
    connect.root.logger.level=WARN
    log4j.rootLogger=WARN, CONSOLE
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=net.logstash.log4j.JSONEventLayoutV1
    log4j.appender.CONSOLE.layout.includedFields=location
    log4j.logger.com.spredfast.kafka.connect.s3.source.S3FilesReader=ERROR, CONSOLE
    log4j.logger.io.debezium.connector.mysql.MySqlSchema=ERROR, CONSOLE
    log4j.logger.io.debezium.connector.mysql.MySqlValueConverters=ERROR, CONSOLE
    log4j.logger.org.reflections=ERROR, CONSOLE
    log4j.logger.org.I0Itec.zkclient=ERROR, CONSOLE
    log4j.logger.org.apache.zookeeper=ERROR, CONSOLE
kind: ConfigMap
metadata:
  name: custom-connect-log4j
I bet it's the addition of the appender reference (", CONSOLE") to every logger. Maybe the name of the appender is also important.
UPDATE: I was wrong, it's not fixed.
The Connect loggers are updated dynamically through the Connect REST API. Maybe @sknot-rh can tell you whether that is related to this or not.
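For reference, a minimal sketch of that mechanism, assuming the default REST port 8083; it uses the standard Connect admin/loggers endpoint and the logger discussed in this thread:

# Read the current level of one logger
curl -sS localhost:8083/admin/loggers/io.debezium.util.SchemaNameAdjuster | jq

# Change its level at runtime, without restarting the worker
curl -sS -X PUT -H "Content-Type: application/json" \
  -d '{"level": "ERROR"}' \
  localhost:8083/admin/loggers/io.debezium.util.SchemaNameAdjuster | jq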
I've just noticed such entries in the log:
{
  "@timestamp": "2021-05-13T16:45:05.791Z",
  "source_host": "***",
  "file": "SchemaNameAdjuster.java",
  "method": "lambda$create$1",
  "level": "WARN",
  "line_number": "172",
  "thread_name": "***",
  "@version": 1,
  "logger_name": "io.debezium.util.SchemaNameAdjuster",
  "message": "***'",
  "class": "io.debezium.util.SchemaNameAdjuster",
  "mdc": {
    "dbz.connectorContext": "binlog",
    "dbz.connectorType": "MySQL",
    "connector.context": "[***] ",
    "dbz.connectorName": "***"
  }
}
While we have the following in the config:
log4j.logger.io.debezium.util.SchemaNameAdjuster=ERROR, CONSOLE
curl -sS localhost:8083/admin/loggers | jq
...
"io.debezium.util.SchemaNameAdjuster": {
"level": "ERROR"
},
...
Hi. The API response says it is set correctly. Are you sure the WARN message was printed after the change to the ERROR level was made?
@sknot-rh I haven't changed anything since my last comment and got a bunch of these:
{
  "@timestamp": "2021-05-17T05:32:19.468Z",
  "source_host": "***",
  "file": "SchemaNameAdjuster.java",
  "method": "lambda$create$1",
  "level": "WARN",
  "line_number": "172",
  "thread_name": "***",
  "@version": 1,
  "logger_name": "io.debezium.util.SchemaNameAdjuster",
  "message": "The Kafka Connect schema name '***' is not a valid Avro schema name, so replacing with '***'",
  "class": "io.debezium.util.SchemaNameAdjuster",
  "mdc": {
    "dbz.connectorContext": "binlog",
    "dbz.connectorType": "MySQL",
    "connector.context": "[***|task-0] ",
    "dbz.connectorName": "***"
  }
}
Verified loggers again, just in case:
curl -sS localhost:8083/admin/loggers | jq
...
"io.debezium.util.SchemaNameAdjuster": {
"level": "ERROR"
},
...
I described the ConfigMap mounted as the kafka-metrics-and-logging volume into the KafkaConnect pod:
log4j.properties:
----
name = KCConfig
monitorInterval = 30
connect.root.logger.level=WARN
log4j.rootLogger=WARN, CONSOLE
# console appender with json layout
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=net.logstash.log4j.JSONEventLayoutV1
log4j.appender.CONSOLE.layout.includedFields=location
# debezium mysql connector
log4j.logger.io.debezium.connector.mysql.MySqlSchema=ERROR, CONSOLE
log4j.logger.io.debezium.connector.mysql.MySqlValueConverters=ERROR, CONSOLE
# debezium utils
log4j.logger.io.debezium.util.SchemaNameAdjuster=ERROR, CONSOLE
...
Logging in the KafkaConnect resource:
"logging": {
"type": "external",
"valueFrom": {
"configMapKeyRef": {
"key": "log4j.properties",
"name": "cxp-connect-log4j"
}
}
},
Finally, I described the original logging ConfigMap referenced above:
log4j.properties:
----
name = KCConfig
monitorInterval = 30
connect.root.logger.level=WARN
log4j.rootLogger=WARN, CONSOLE
# console appender with json layout
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=net.logstash.log4j.JSONEventLayoutV1
log4j.appender.CONSOLE.layout.includedFields=location
# debezium mysql connector
log4j.logger.io.debezium.connector.mysql.MySqlSchema=ERROR, CONSOLE
log4j.logger.io.debezium.connector.mysql.MySqlValueConverters=ERROR, CONSOLE
# debezium utils
log4j.logger.io.debezium.util.SchemaNameAdjuster=ERROR, CONSOLE
...
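Two additional sanity checks that may help tie these pieces together; this is only a sketch, and the resource name, pod name, and the /opt/kafka/custom-config/ mount path are assumptions rather than values from this thread:

# 1) What the operator sees in the KafkaConnect resource itself
kubectl get kafkaconnect my-connect -o json | jq '.spec.logging'

# 2) What the running container actually reads from the mounted volume
kubectl exec <connect-pod-name> -- cat /opt/kafka/custom-config/log4j.properties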
That is weird... The logger level from the REST API says it is set correctly, so the Cluster Operator works as expected. Could you get all the loggers from Connect (to see whether io.debezium.util.SchemaNameAdjuster is not overridden by some higher-level logger)?
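For example (a sketch assuming jq is available; the filter is only an illustration):

# Dump every logger the worker knows about ...
curl -sS localhost:8083/admin/loggers | jq

# ... and narrow it down to the io.debezium hierarchy to see how the
# surrounding loggers are configured
curl -sS localhost:8083/admin/loggers | jq 'with_entries(select(.key | startswith("io.debezium")))'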
Here it is:
Hm... that is strange. Did you try to roll the connect pod manually? I know that is not a solution, but at least we would know whether it helps.
What do you want to check this way? The Strimzi Kafka Connect image without the operator? I can try that ...
Whether after the pod restart the correct logger level is used and no WARN records for io.debezium.util.SchemaNameAdjuster will be printed.
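A sketch of such a manual roll, with illustrative names:

# Delete the pod and let the Deployment recreate it ...
kubectl delete pod <connect-pod-name>

# ... or restart the whole Deployment in one step
kubectl rollout restart deployment <cluster-name>-connect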
@sknot-rh It took some time, but here's what I've got. I deleted the pod two days ago to let the Deployment recreate it (see the 40h AGE below):
Every 2.0s: kubectl get pods mkamornikov: Fri May 21 15:33:33 2021
NAME READY STATUS RESTARTS AGE
***-connect-644bbfddbc-whffw 1/1 Running 0 40h
After that, the connector (Debezium MySQL) was paused and restarted a day later:
May 20, 2021 11:37:31 PM com.github.shyiko.mysql.binlog.BinaryLogClient connect
INFO: Connected to ***.us-west-2.rds.amazonaws.com:3306 at mysql-bin-changelog.482880/134782545 (sid:6401, cid:59093907)
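For reference, pausing and resuming a connector goes through the standard Connect REST endpoints; the connector name below is a placeholder, since the real one is masked in this thread:

curl -sS -X PUT localhost:8083/connectors/<connector-name>/pause
# ... one day later ...
curl -sS -X PUT localhost:8083/connectors/<connector-name>/resume
# (a restart is also possible via POST /connectors/<connector-name>/restart)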
I still see warnings from the logger that is supposed to log only errors:
{
  "@timestamp": "2021-05-21T02:31:19.913Z",
  "source_host": "***-connect-644bbfddbc-whffw",
  "file": "SchemaNameAdjuster.java",
  "method": "lambda$create$1",
  "level": "WARN",
  "line_number": "168",
  "thread_name": "blc-***.us-west-2.rds.amazonaws.com:3306",
  "@version": 1,
  "logger_name": "io.debezium.util.SchemaNameAdjuster",
  "message": "The Kafka Connect schema name '***.Envelope' is not a valid Avro schema name, so replacing with '***.Envelope'",
  "class": "io.debezium.util.SchemaNameAdjuster",
  "mdc": {
    "dbz.connectorContext": "binlog",
    "dbz.connectorType": "MySQL",
    "connector.context": "[***-cdc-source|task-0] ",
    "dbz.connectorName": "***"
  }
}
Logger settings are the same and the API returns:
"io.debezium.util.SchemaNameAdjuster": {
"level": "ERROR"
},
Another user was experiencing a similar issue. The fix should be done in https://github.com/strimzi/strimzi-kafka-operator/pull/5145
Closing as it should be fixed in #5145
Describe the bug
We've been using an external logging configuration for a while, but after an operator upgrade (it probably started earlier than v0.22.0) the custom settings are not applied anymore: for loggers configured as ERROR we get WARN logs. I consider that silent fallback to some default a bug.

To Reproduce
Steps to reproduce the behavior:
1. Set custom logger levels in the logging ConfigMap (ERROR instead of WARN).
2. Update the KafkaConnect cluster with the new logging settings from the ConfigMap.

Expected behavior
I expect the custom logging config to be applied as given. If that's not possible, I expect an error explaining why it's not possible. If the way we configure logging is wrong, then we need a more complete example in the docs describing the right approach.

Environment:

Additional context
If I add the following env variable to the container:
it’s visible that the app sees the configuration and reads it without any errors:
But it’s a mystery why these messages are being logged despite the correct configuration.