strimzi / strimzi-kafka-operator

Apache Kafka® running on Kubernetes
https://strimzi.io/
Apache License 2.0
4.84k stars, 1.29k forks

Custom KC logging configuration is not applied #4965

Closed mikekamornikov closed 3 years ago

mikekamornikov commented 3 years ago

Describe the bug We've been using external logging configuration for a while. But after operator upgrade (probably started earlier than v0.22.0) custom settings are not applied anymore:

logging:
  name: custom-connect-log4j
  type: external

apiVersion: v1
data:
  log4j.properties: |
    log4j.rootLogger = WARN, stdout
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=net.logstash.log4j.JSONEventLayoutV1
    log4j.appender.stdout.layout.includedFields=location
    log4j.logger.com.spredfast.kafka.connect.s3.source.S3FilesReader=ERROR
    log4j.logger.io.debezium.connector.mysql.MySqlSchema=ERROR
    log4j.logger.io.debezium.connector.mysql.MySqlValueConverters=ERROR
    log4j.logger.org.apache.zookeeper=ERROR
    log4j.logger.org.I0Itec.zkclient=ERROR
    log4j.logger.org.reflections=ERROR
kind: ConfigMap
metadata:
  name: custom-connect-log4j
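For reference, a quick way to sanity-check what levels a log4j.properties file actually declares (independent of the operator) is to extract them with a small script — a minimal sketch, using a trimmed sample of the file above:

```python
import re

# Sample trimmed from the log4j.properties above.
props = """
log4j.rootLogger = WARN, stdout
log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.org.reflections=ERROR
"""

levels = {}
for line in props.splitlines():
    # Root logger: "log4j.rootLogger = LEVEL, appender..."
    if m := re.match(r"\s*log4j\.rootLogger\s*=\s*(\w+)", line):
        levels["root"] = m.group(1)
    # Named loggers: "log4j.logger.<name>=LEVEL[, appender...]"
    elif m := re.match(r"\s*log4j\.logger\.([\w.$]+)\s*=\s*(\w+)", line):
        levels[m.group(1)] = m.group(2)

print(levels)
```

This only reflects what the file declares; it says nothing about whether Connect actually applies those levels at runtime.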

For those ERROR loggers we still get WARN-level logs. I consider that silent fallback to some default a bug.

To Reproduce Steps to reproduce the behavior:

  1. Create a custom logging configuration that decreases the verbosity of what you usually get (for example, make some loggers report ERROR instead of WARN).
  2. Redeploy the KafkaConnect cluster with the new logging settings from the ConfigMap.
  3. Observe the logs at runtime.

Expected behavior I expect the custom logging config to be applied as given. If that's not possible, I expect an error explaining why. If the way we configure logging is wrong, then we need a more complex example in the docs describing the right approach.

Environment:

Additional context If I add the following environment variable to the container:

    connectContainer:
      env:
        - name: KAFKA_LOG4J_OPTS
          value: -Dlog4j.debug -Dlog4j.configuration=file:/opt/kafka/custom-config/log4j.properties

it’s visible that the app sees the configuration and reads it without any errors:

log4j: Using URL [file:/opt/kafka/custom-config/log4j.properties] for automatic log4j configuration.
log4j: Reading configuration from URL file:/opt/kafka/custom-config/log4j.properties
log4j: Parsing for [root] with value=[WARN, stdout].
log4j: Level token is [WARN].
log4j: Category root set to WARN
log4j: Parsing appender named "stdout".
log4j: Parsing layout options for "stdout".
log4j:WARN No such property [includedFields] in net.logstash.log4j.JSONEventLayoutV1.
log4j: End of parsing for "stdout".
log4j: Parsed "stdout" options.
log4j: Parsing for [org.I0Itec.zkclient] with value=[ERROR].
log4j: Level token is [ERROR].
log4j: Category org.I0Itec.zkclient set to ERROR
log4j: Handling log4j.additivity.org.I0Itec.zkclient=[null]
log4j: Parsing for [org.reflections] with value=[ERROR].
log4j: Level token is [ERROR].
log4j: Category org.reflections set to ERROR
log4j: Handling log4j.additivity.org.reflections=[null]
log4j: Parsing for [org.apache.zookeeper] with value=[ERROR].
log4j: Level token is [ERROR].
log4j: Category org.apache.zookeeper set to ERROR
log4j: Handling log4j.additivity.org.apache.zookeeper=[null]
log4j: Parsing for [com.spredfast.kafka.connect.s3.source.S3FilesReader] with value=[ERROR].
log4j: Level token is [ERROR].
log4j: Category com.spredfast.kafka.connect.s3.source.S3FilesReader set to ERROR
log4j: Handling log4j.additivity.com.spredfast.kafka.connect.s3.source.S3FilesReader=[null]
log4j: Parsing for [io.debezium.connector.mysql.MySqlValueConverters] with value=[ERROR].
log4j: Level token is [ERROR].
log4j: Category io.debezium.connector.mysql.MySqlValueConverters set to ERROR
log4j: Handling log4j.additivity.io.debezium.connector.mysql.MySqlValueConverters=[null]
log4j: Parsing for [io.debezium.connector.mysql.MySqlSchema] with value=[ERROR].
log4j: Level token is [ERROR].
log4j: Category io.debezium.connector.mysql.MySqlSchema set to ERROR
log4j: Handling log4j.additivity.io.debezium.connector.mysql.MySqlSchema=[null]
log4j: Finished configuring.

But it’s a mystery why these WARN messages are still being logged despite the configuration being read correctly.

mikekamornikov commented 3 years ago

I thought I managed to make it work as expected after these changes:

logging:
  type: external
  valueFrom:
    configMapKeyRef:
      key: log4j.properties
      name: custom-connect-log4j

apiVersion: v1
data:
  log4j.properties: |
    name = KCConfig
    monitorInterval = 30

    connect.root.logger.level=WARN
    log4j.rootLogger=WARN, CONSOLE

    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=net.logstash.log4j.JSONEventLayoutV1
    log4j.appender.CONSOLE.layout.includedFields=location

    log4j.logger.com.spredfast.kafka.connect.s3.source.S3FilesReader=ERROR, CONSOLE
    log4j.logger.io.debezium.connector.mysql.MySqlSchema=ERROR, CONSOLE
    log4j.logger.io.debezium.connector.mysql.MySqlValueConverters=ERROR, CONSOLE

    log4j.logger.org.reflections=ERROR, CONSOLE
    log4j.logger.org.I0Itec.zkclient=ERROR, CONSOLE
    log4j.logger.org.apache.zookeeper=ERROR, CONSOLE
kind: ConfigMap
metadata:
  name: custom-connect-log4j

I bet it's the addition of the appender to every logger (, CONSOLE). Maybe the name of the appender is also important.


UPDATE: I was wrong, it's not fixed.

scholzj commented 3 years ago

The Connect loggers are updated dynamically through the Connect REST APIs. Maybe @sknot-rh can tell you whether that is related to this or not.

mikekamornikov commented 3 years ago

I've just noticed such entries in the log:

{
  "@timestamp": "2021-05-13T16:45:05.791Z",
  "source_host": "***",
  "file": "SchemaNameAdjuster.java",
  "method": "lambda$create$1",
  "level": "WARN",
  "line_number": "172",
  "thread_name": "***",
  "@version": 1,
  "logger_name": "io.debezium.util.SchemaNameAdjuster",
  "message": "***'",
  "class": "io.debezium.util.SchemaNameAdjuster",
  "mdc": {
    "dbz.connectorContext": "binlog",
    "dbz.connectorType": "MySQL",
    "connector.context": "[***] ",
    "dbz.connectorName": "***"
  }
}

While we have the following in the config:

log4j.logger.io.debezium.util.SchemaNameAdjuster=ERROR, CONSOLE
mikekamornikov commented 3 years ago
curl -sS localhost:8083/admin/loggers | jq
...
  "io.debezium.util.SchemaNameAdjuster": {
    "level": "ERROR"
  },
...
sknot-rh commented 3 years ago

Hi. The API response says it is set correctly. Are you sure the WARN message was printed after the change to the ERROR level was made?

mikekamornikov commented 3 years ago

@sknot-rh I haven't changed anything since my last comment and got a bunch of these ones:

{
  "@timestamp": "2021-05-17T05:32:19.468Z",
  "source_host": "***",
  "file": "SchemaNameAdjuster.java",
  "method": "lambda$create$1",
  "level": "WARN",
  "line_number": "172",
  "thread_name": "***",
  "@version": 1,
  "logger_name": "io.debezium.util.SchemaNameAdjuster",
  "message": "The Kafka Connect schema name '***' is not a valid Avro schema name, so replacing with '***'",
  "class": "io.debezium.util.SchemaNameAdjuster",
  "mdc": {
    "dbz.connectorContext": "binlog",
    "dbz.connectorType": "MySQL",
    "connector.context": "[***|task-0] ",
    "dbz.connectorName": "***"
  }
}

Verified loggers again, just in case:

curl -sS localhost:8083/admin/loggers | jq
...
"io.debezium.util.SchemaNameAdjuster": {
  "level": "ERROR"
},
...

I described the ConfigMap mounted as the kafka-metrics-and-logging volume into the KafkaConnect pod:

log4j.properties:
----
name = KCConfig
monitorInterval = 30

connect.root.logger.level=WARN
log4j.rootLogger=WARN, CONSOLE

# console appender with json layout
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=net.logstash.log4j.JSONEventLayoutV1
log4j.appender.CONSOLE.layout.includedFields=location

# debezium mysql connector
log4j.logger.io.debezium.connector.mysql.MySqlSchema=ERROR, CONSOLE
log4j.logger.io.debezium.connector.mysql.MySqlValueConverters=ERROR, CONSOLE

# debezium utils
log4j.logger.io.debezium.util.SchemaNameAdjuster=ERROR, CONSOLE
...

Logging in KafkaConnect resource:

    "logging": {
      "type": "external",
      "valueFrom": {
        "configMapKeyRef": {
          "key": "log4j.properties",
          "name": "cxp-connect-log4j"
        }
      }
    },

Finally, I described the original logging ConfigMap (see above):

log4j.properties:
----
name = KCConfig
monitorInterval = 30

connect.root.logger.level=WARN
log4j.rootLogger=WARN, CONSOLE

# console appender with json layout
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=net.logstash.log4j.JSONEventLayoutV1
log4j.appender.CONSOLE.layout.includedFields=location

# debezium mysql connector
log4j.logger.io.debezium.connector.mysql.MySqlSchema=ERROR, CONSOLE
log4j.logger.io.debezium.connector.mysql.MySqlValueConverters=ERROR, CONSOLE

# debezium utils
log4j.logger.io.debezium.util.SchemaNameAdjuster=ERROR, CONSOLE
...
sknot-rh commented 3 years ago

That is weird... The logger level from the REST API says it is set correctly, so the Cluster Operator works as expected. Could you get all the loggers from Connect (to see whether io.debezium.util.SchemaNameAdjuster is not overridden by some higher-level logger)?
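To check that mechanically: log4j resolves a logger's effective level by walking up the dotted name hierarchy to the nearest ancestor with an explicit level, falling back to the root logger. A minimal sketch — the sample dict is trimmed from an /admin/loggers response like the one below:

```python
def effective_level(name: str, loggers: dict) -> str:
    """Return the level log4j would use for `name`: the most specific
    dotted-prefix match in `loggers`, falling back to root."""
    parts = name.split(".")
    for i in range(len(parts), 0, -1):
        candidate = ".".join(parts[:i])
        if candidate in loggers:
            return loggers[candidate]["level"]
    return loggers["root"]["level"]

# Trimmed, illustrative subset of the /admin/loggers response.
loggers = {
    "root": {"level": "WARN"},
    "io.debezium.util.SchemaNameAdjuster": {"level": "ERROR"},
    "org.apache.kafka": {"level": "ERROR"},
}

print(effective_level("io.debezium.util.SchemaNameAdjuster", loggers))  # ERROR
print(effective_level("io.debezium.util.Threads", loggers))             # WARN (root)
print(effective_level("org.apache.kafka.clients.Metadata", loggers))    # ERROR (parent)
```

If no ancestor of io.debezium.util.SchemaNameAdjuster carries a more permissive level, the WARN records should not appear — which is what makes this report puzzling.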

mikekamornikov commented 3 years ago

Here it is:

curl -sS localhost:8083/admin/loggers | jq ```json { "com.spredfast.kafka.connect.s3.sink.S3SinkTask": { "level": "WARN" }, "com.spredfast.kafka.connect.s3.source.S3FilesReader": { "level": "ERROR" }, "com.spredfast.shade.amazonaws.AmazonWebServiceClient": { "level": "WARN" }, "com.spredfast.shade.amazonaws.auth.AWSCredentialsProviderChain": { "level": "WARN" }, "com.spredfast.shade.amazonaws.auth.InstanceProfileCredentialsProvider": { "level": "WARN" }, "com.spredfast.shade.amazonaws.auth.profile.ProfilesConfigFile": { "level": "WARN" }, "com.spredfast.shade.amazonaws.auth.profile.internal.ProfilesConfigFileLoader": { "level": "WARN" }, "com.spredfast.shade.amazonaws.event.ProgressListenerChain": { "level": "WARN" }, "com.spredfast.shade.amazonaws.http.AmazonHttpClient": { "level": "WARN" }, "com.spredfast.shade.amazonaws.http.IdleConnectionReaper": { "level": "WARN" }, "com.spredfast.shade.amazonaws.http.RepeatableInputStreamRequestEntity": { "level": "WARN" }, "com.spredfast.shade.amazonaws.http.conn.ClientConnectionManagerFactory": { "level": "WARN" }, "com.spredfast.shade.amazonaws.http.conn.ClientConnectionRequestFactory": { "level": "WARN" }, "com.spredfast.shade.amazonaws.http.conn.ssl.SdkTLSSocketFactory": { "level": "WARN" }, "com.spredfast.shade.amazonaws.http.impl.client.SdkHttpClient": { "level": "WARN" }, "com.spredfast.shade.amazonaws.internal.DefaultServiceEndpointBuilder": { "level": "WARN" }, "com.spredfast.shade.amazonaws.internal.EC2MetadataClient": { "level": "WARN" }, "com.spredfast.shade.amazonaws.internal.ReleasableInputStream": { "level": "WARN" }, "com.spredfast.shade.amazonaws.internal.ResettableInputStream": { "level": "WARN" }, "com.spredfast.shade.amazonaws.internal.SdkIOUtils": { "level": "WARN" }, "com.spredfast.shade.amazonaws.internal.SdkSSLSocket": { "level": "WARN" }, "com.spredfast.shade.amazonaws.internal.config.InternalConfig": { "level": "WARN" }, "com.spredfast.shade.amazonaws.metrics.AwsSdkMetrics": { "level": "WARN" }, 
"com.spredfast.shade.amazonaws.request": { "level": "WARN" }, "com.spredfast.shade.amazonaws.requestId": { "level": "WARN" }, "com.spredfast.shade.amazonaws.services.s3.AmazonS3Client": { "level": "WARN" }, "com.spredfast.shade.amazonaws.services.s3.internal.Mimetypes": { "level": "WARN" }, "com.spredfast.shade.amazonaws.services.s3.internal.ObjectExpirationHeaderHandler": { "level": "WARN" }, "com.spredfast.shade.amazonaws.services.s3.internal.S3ErrorResponseHandler": { "level": "WARN" }, "com.spredfast.shade.amazonaws.services.s3.internal.S3MetadataResponseHandler": { "level": "WARN" }, "com.spredfast.shade.amazonaws.services.s3.internal.S3Signer": { "level": "WARN" }, "com.spredfast.shade.amazonaws.services.s3.internal.S3V4AuthErrorRetryStrategy": { "level": "WARN" }, "com.spredfast.shade.amazonaws.services.s3.internal.ServiceUtils": { "level": "WARN" }, "com.spredfast.shade.amazonaws.services.s3.model.transform.XmlResponsesSaxParser": { "level": "WARN" }, "com.spredfast.shade.amazonaws.services.s3.transfer.TransferManager": { "level": "WARN" }, "com.spredfast.shade.amazonaws.services.s3.transfer.TransferProgress": { "level": "WARN" }, "com.spredfast.shade.amazonaws.services.s3.transfer.internal.UploadCallable": { "level": "WARN" }, "com.spredfast.shade.amazonaws.util.IOUtils": { "level": "WARN" }, "com.spredfast.shade.amazonaws.util.VersionInfoUtils": { "level": "WARN" }, "com.spredfast.shade.apache.http.client.protocol.RequestAddCookies": { "level": "WARN" }, "com.spredfast.shade.apache.http.client.protocol.RequestAuthCache": { "level": "WARN" }, "com.spredfast.shade.apache.http.client.protocol.RequestClientConnControl": { "level": "WARN" }, "com.spredfast.shade.apache.http.client.protocol.RequestProxyAuthentication": { "level": "WARN" }, "com.spredfast.shade.apache.http.client.protocol.RequestTargetAuthentication": { "level": "WARN" }, "com.spredfast.shade.apache.http.client.protocol.ResponseProcessCookies": { "level": "WARN" }, 
"com.spredfast.shade.apache.http.conn.ssl.AllowAllHostnameVerifier": { "level": "WARN" }, "com.spredfast.shade.apache.http.conn.ssl.BrowserCompatHostnameVerifier": { "level": "WARN" }, "com.spredfast.shade.apache.http.conn.ssl.StrictHostnameVerifier": { "level": "WARN" }, "com.spredfast.shade.apache.http.headers": { "level": "WARN" }, "com.spredfast.shade.apache.http.impl.client.ProxyAuthenticationStrategy": { "level": "WARN" }, "com.spredfast.shade.apache.http.impl.client.TargetAuthenticationStrategy": { "level": "WARN" }, "com.spredfast.shade.apache.http.impl.conn.DefaultClientConnection": { "level": "WARN" }, "com.spredfast.shade.apache.http.impl.conn.DefaultClientConnectionOperator": { "level": "WARN" }, "com.spredfast.shade.apache.http.impl.conn.DefaultHttpResponseParser": { "level": "WARN" }, "com.spredfast.shade.apache.http.impl.conn.PoolingClientConnectionManager": { "level": "WARN" }, "com.spredfast.shade.apache.http.wire": { "level": "WARN" }, "com.sugarcrm.kafka.connect.config.KubernetesAuth": { "level": "WARN" }, "com.sugarcrm.kafka.connect.config.VaultConfigProvider": { "level": "WARN" }, "com.sugarcrm.kafka.connect.transforms.ValueRouter": { "level": "WARN" }, "io.debezium.config.Configuration": { "level": "WARN" }, "io.debezium.connector.base.ChangeEventQueue": { "level": "WARN" }, "io.debezium.connector.common.BaseSourceTask": { "level": "WARN" }, "io.debezium.connector.common.RelationalBaseSourceConnector": { "level": "WARN" }, "io.debezium.connector.mysql.MySqlChangeRecordEmitter": { "level": "WARN" }, "io.debezium.connector.mysql.MySqlConnection": { "level": "WARN" }, "io.debezium.connector.mysql.MySqlConnector": { "level": "WARN" }, "io.debezium.connector.mysql.MySqlConnectorConfig": { "level": "WARN" }, "io.debezium.connector.mysql.MySqlConnectorTask": { "level": "WARN" }, "io.debezium.connector.mysql.MySqlDatabaseSchema": { "level": "WARN" }, "io.debezium.connector.mysql.MySqlSchema": { "level": "ERROR" }, 
"io.debezium.connector.mysql.MySqlSnapshotChangeEventSource": { "level": "WARN" }, "io.debezium.connector.mysql.MySqlStreamingChangeEventSource": { "level": "WARN" }, "io.debezium.connector.mysql.MySqlValueConverters": { "level": "ERROR" }, "io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser": { "level": "WARN" }, "io.debezium.connector.mysql.antlr.listener.AlterTableParserListener": { "level": "WARN" }, "io.debezium.connector.mysql.antlr.listener.RenameTableParserListener": { "level": "WARN" }, "io.debezium.connector.sqlserver.SqlServerConnector": { "level": "WARN" }, "io.debezium.connector.sqlserver.SqlServerConnectorTask": { "level": "ERROR" }, "io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource": { "level": "ERROR" }, "io.debezium.jdbc.JdbcConnection": { "level": "WARN" }, "io.debezium.pipeline.ChangeEventSourceCoordinator": { "level": "WARN" }, "io.debezium.pipeline.ErrorHandler": { "level": "WARN" }, "io.debezium.pipeline.EventDispatcher": { "level": "WARN" }, "io.debezium.pipeline.metrics.StreamingChangeEventSourceMetrics": { "level": "WARN" }, "io.debezium.pipeline.signal.Log": { "level": "WARN" }, "io.debezium.pipeline.signal.SchemaChanges": { "level": "WARN" }, "io.debezium.pipeline.signal.Signal": { "level": "WARN" }, "io.debezium.pipeline.source.AbstractSnapshotChangeEventSource": { "level": "WARN" }, "io.debezium.pipeline.txmetadata.TransactionMonitor": { "level": "WARN" }, "io.debezium.relational.RelationalDatabaseSchema": { "level": "WARN" }, "io.debezium.relational.RelationalSnapshotChangeEventSource": { "level": "WARN" }, "io.debezium.relational.TableSchema": { "level": "WARN" }, "io.debezium.relational.TableSchemaBuilder": { "level": "WARN" }, "io.debezium.relational.history.ConnectTableChangeSerializer": { "level": "WARN" }, "io.debezium.relational.history.DatabaseHistoryMetrics": { "level": "WARN" }, "io.debezium.relational.history.KafkaDatabaseHistory": { "level": "WARN" }, 
"io.debezium.schema.TopicSelector$TopicNameSanitizer": { "level": "WARN" }, "io.debezium.util.IoUtil": { "level": "WARN" }, "io.debezium.util.SchemaNameAdjuster": { "level": "ERROR" }, "io.debezium.util.Threads": { "level": "WARN" }, "kafka": { "level": "ERROR" }, "org.I0Itec.zkclient": { "level": "ERROR" }, "org.apache.kafka": { "level": "ERROR" }, "org.apache.kafka.clients.ClientUtils": { "level": "ERROR" }, "org.apache.kafka.clients.ClusterConnectionStates": { "level": "ERROR" }, "org.apache.kafka.clients.CommonClientConfigs": { "level": "ERROR" }, "org.apache.kafka.clients.FetchSessionHandler": { "level": "ERROR" }, "org.apache.kafka.clients.Metadata": { "level": "ERROR" }, "org.apache.kafka.clients.NetworkClient": { "level": "ERROR" }, "org.apache.kafka.clients.admin.AdminClientConfig": { "level": "ERROR" }, "org.apache.kafka.clients.admin.KafkaAdminClient": { "level": "ERROR" }, "org.apache.kafka.clients.admin.internals.AdminMetadataManager": { "level": "ERROR" }, "org.apache.kafka.clients.consumer.ConsumerConfig": { "level": "ERROR" }, "org.apache.kafka.clients.consumer.KafkaConsumer": { "level": "ERROR" }, "org.apache.kafka.clients.consumer.internals.AbstractCoordinator": { "level": "ERROR" }, "org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread": { "level": "ERROR" }, "org.apache.kafka.clients.consumer.internals.AbstractPartitionAssignor": { "level": "ERROR" }, "org.apache.kafka.clients.consumer.internals.ConsumerCoordinator": { "level": "ERROR" }, "org.apache.kafka.clients.consumer.internals.ConsumerInterceptors": { "level": "ERROR" }, "org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient": { "level": "ERROR" }, "org.apache.kafka.clients.consumer.internals.Fetcher": { "level": "ERROR" }, "org.apache.kafka.clients.consumer.internals.Heartbeat": { "level": "ERROR" }, "org.apache.kafka.clients.consumer.internals.OffsetsForLeaderEpochClient": { "level": "ERROR" }, 
"org.apache.kafka.clients.consumer.internals.PartitionAssignorAdapter": { "level": "ERROR" }, "org.apache.kafka.clients.consumer.internals.SubscriptionState": { "level": "ERROR" }, "org.apache.kafka.clients.producer.KafkaProducer": { "level": "ERROR" }, "org.apache.kafka.clients.producer.ProducerConfig": { "level": "ERROR" }, "org.apache.kafka.clients.producer.internals.ProducerBatch": { "level": "ERROR" }, "org.apache.kafka.clients.producer.internals.ProducerInterceptors": { "level": "ERROR" }, "org.apache.kafka.clients.producer.internals.ProducerMetadata": { "level": "ERROR" }, "org.apache.kafka.clients.producer.internals.RecordAccumulator": { "level": "ERROR" }, "org.apache.kafka.clients.producer.internals.Sender": { "level": "ERROR" }, "org.apache.kafka.common.config.AbstractConfig": { "level": "ERROR" }, "org.apache.kafka.common.config.SecurityConfig": { "level": "ERROR" }, "org.apache.kafka.common.metrics.JmxReporter": { "level": "ERROR" }, "org.apache.kafka.common.metrics.Metrics": { "level": "ERROR" }, "org.apache.kafka.common.network.NetworkReceive": { "level": "ERROR" }, "org.apache.kafka.common.network.Selector": { "level": "ERROR" }, "org.apache.kafka.common.network.SslChannelBuilder": { "level": "ERROR" }, "org.apache.kafka.common.network.SslTransportLayer": { "level": "ERROR" }, "org.apache.kafka.common.protocol.Errors": { "level": "ERROR" }, "org.apache.kafka.common.record.MemoryRecords": { "level": "ERROR" }, "org.apache.kafka.common.requests.FetchMetadata": { "level": "ERROR" }, "org.apache.kafka.common.requests.OffsetFetchRequest": { "level": "ERROR" }, "org.apache.kafka.common.security.ssl.DefaultSslEngineFactory": { "level": "ERROR" }, "org.apache.kafka.common.security.ssl.SslFactory": { "level": "ERROR" }, "org.apache.kafka.common.utils.AppInfoParser": { "level": "ERROR" }, "org.apache.kafka.common.utils.KafkaThread": { "level": "ERROR" }, "org.apache.kafka.common.utils.Utils": { "level": "ERROR" }, 
"org.apache.kafka.connect.cli.ConnectDistributed": { "level": "ERROR" }, "org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy": { "level": "ERROR" }, "org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy": { "level": "ERROR" }, "org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy": { "level": "ERROR" }, "org.apache.kafka.connect.data.Values": { "level": "ERROR" }, "org.apache.kafka.connect.json.JsonConverterConfig": { "level": "ERROR" }, "org.apache.kafka.connect.mirror.MirrorCheckpointConnector": { "level": "ERROR" }, "org.apache.kafka.connect.mirror.MirrorSourceConnector": { "level": "ERROR" }, "org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.Connect": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.ConnectMetrics": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.ConnectorConfig": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.HerderConnectorContext": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.SinkConnectorConfig": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.SourceConnectorConfig": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.TaskConfig": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.TransformationChain": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.Worker": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.WorkerConfig": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.WorkerConfigTransformer": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.WorkerConnector": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.WorkerInfo": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.WorkerSinkTask": { 
"level": "ERROR" }, "org.apache.kafka.connect.runtime.WorkerSinkTaskContext": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.WorkerSourceTask": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.WorkerTask": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.distributed.DistributedConfig": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.distributed.DistributedHerder": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.distributed.EagerAssignor": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.distributed.IncrementalCooperativeAssignor": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.distributed.WorkerCoordinator": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.distributed.WorkerGroupMember": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.errors.LogReporter": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.isolation.PluginClassLoader": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.isolation.PluginUtils": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.isolation.Plugins": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.rest.ConnectRestConfigurable": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.rest.RestServer": { "level": "ERROR" }, "org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource": { "level": "ERROR" }, "org.apache.kafka.connect.storage.KafkaConfigBackingStore": { "level": "ERROR" }, "org.apache.kafka.connect.storage.KafkaOffsetBackingStore": { "level": "ERROR" }, "org.apache.kafka.connect.storage.KafkaStatusBackingStore": { "level": "ERROR" }, "org.apache.kafka.connect.storage.OffsetStorageReaderImpl": { "level": "ERROR" }, "org.apache.kafka.connect.storage.OffsetStorageWriter": { "level": "ERROR" }, "org.apache.kafka.connect.storage.SimpleHeaderConverter": 
{ "level": "ERROR" }, "org.apache.kafka.connect.tools.MockConnector": { "level": "ERROR" }, "org.apache.kafka.connect.transforms.util.SimpleConfig": { "level": "ERROR" }, "org.apache.kafka.connect.util.ConnectUtils": { "level": "ERROR" }, "org.apache.kafka.connect.util.KafkaBasedLog": { "level": "ERROR" }, "org.apache.kafka.connect.util.TopicAdmin": { "level": "ERROR" }, "org.apache.kafka.connect.util.TopicCreation": { "level": "ERROR" }, "org.apache.zookeeper": { "level": "ERROR" }, "org.eclipse.jetty": { "level": "ERROR" }, "org.eclipse.jetty.http.HttpFields": { "level": "ERROR" }, "org.eclipse.jetty.http.HttpGenerator": { "level": "ERROR" }, "org.eclipse.jetty.http.HttpParser": { "level": "ERROR" }, "org.eclipse.jetty.http.MimeTypes": { "level": "ERROR" }, "org.eclipse.jetty.http.PreEncodedHttpField": { "level": "ERROR" }, "org.eclipse.jetty.http.pathmap.PathMappings": { "level": "ERROR" }, "org.eclipse.jetty.http.pathmap.ServletPathSpec": { "level": "ERROR" }, "org.eclipse.jetty.io.AbstractConnection": { "level": "ERROR" }, "org.eclipse.jetty.io.AbstractEndPoint": { "level": "ERROR" }, "org.eclipse.jetty.io.ChannelEndPoint": { "level": "ERROR" }, "org.eclipse.jetty.io.FillInterest": { "level": "ERROR" }, "org.eclipse.jetty.io.IdleTimeout": { "level": "ERROR" }, "org.eclipse.jetty.io.ManagedSelector": { "level": "ERROR" }, "org.eclipse.jetty.io.MappedByteBufferPool": { "level": "ERROR" }, "org.eclipse.jetty.io.SelectorManager": { "level": "ERROR" }, "org.eclipse.jetty.io.WriteFlusher": { "level": "ERROR" }, "org.eclipse.jetty.server.AbstractConnector": { "level": "ERROR" }, "org.eclipse.jetty.server.CustomRequestLog": { "level": "ERROR" }, "org.eclipse.jetty.server.HttpChannel": { "level": "ERROR" }, "org.eclipse.jetty.server.HttpChannelOverHttp": { "level": "ERROR" }, "org.eclipse.jetty.server.HttpChannelState": { "level": "ERROR" }, "org.eclipse.jetty.server.HttpConfiguration": { "level": "ERROR" }, "org.eclipse.jetty.server.HttpConnection": { "level": "ERROR" 
}, "org.eclipse.jetty.server.HttpInput": { "level": "ERROR" }, "org.eclipse.jetty.server.HttpOutput": { "level": "ERROR" }, "org.eclipse.jetty.server.Request": { "level": "ERROR" }, "org.eclipse.jetty.server.Response": { "level": "ERROR" }, "org.eclipse.jetty.server.Server": { "level": "ERROR" }, "org.eclipse.jetty.server.handler.AbstractHandler": { "level": "ERROR" }, "org.eclipse.jetty.server.handler.AbstractHandlerContainer": { "level": "ERROR" }, "org.eclipse.jetty.server.handler.AllowSymLinkAliasChecker": { "level": "ERROR" }, "org.eclipse.jetty.server.handler.ContextHandler": { "level": "ERROR" }, "org.eclipse.jetty.server.handler.ContextHandler.ROOT": { "level": "ERROR" }, "org.eclipse.jetty.server.handler.ContextHandlerCollection": { "level": "ERROR" }, "org.eclipse.jetty.server.handler.DefaultHandler": { "level": "ERROR" }, "org.eclipse.jetty.server.handler.ErrorHandler": { "level": "ERROR" }, "org.eclipse.jetty.server.handler.StatisticsHandler": { "level": "ERROR" }, "org.eclipse.jetty.server.session": { "level": "ERROR" }, "org.eclipse.jetty.servlet.BaseHolder": { "level": "ERROR" }, "org.eclipse.jetty.servlet.Holder": { "level": "ERROR" }, "org.eclipse.jetty.servlet.ServletContextHandler": { "level": "ERROR" }, "org.eclipse.jetty.servlet.ServletHandler": { "level": "ERROR" }, "org.eclipse.jetty.servlet.ServletHolder": { "level": "ERROR" }, "org.eclipse.jetty.util.DecoratedObjectFactory": { "level": "ERROR" }, "org.eclipse.jetty.util.DeprecationWarning": { "level": "ERROR" }, "org.eclipse.jetty.util.IO": { "level": "ERROR" }, "org.eclipse.jetty.util.Jetty": { "level": "ERROR" }, "org.eclipse.jetty.util.ModuleLocation": { "level": "ERROR" }, "org.eclipse.jetty.util.SharedBlockingCallback": { "level": "ERROR" }, "org.eclipse.jetty.util.TypeUtil": { "level": "ERROR" }, "org.eclipse.jetty.util.URIUtil": { "level": "ERROR" }, "org.eclipse.jetty.util.Utf8Appendable": { "level": "ERROR" }, "org.eclipse.jetty.util.component.AbstractLifeCycle": { "level": "ERROR" 
}, "org.eclipse.jetty.util.component.ContainerLifeCycle": { "level": "ERROR" }, "org.eclipse.jetty.util.log": { "level": "ERROR" }, "org.eclipse.jetty.util.resource.JarFileResource": { "level": "ERROR" }, "org.eclipse.jetty.util.resource.JarResource": { "level": "ERROR" }, "org.eclipse.jetty.util.resource.Resource": { "level": "ERROR" }, "org.eclipse.jetty.util.resource.URLResource": { "level": "ERROR" }, "org.eclipse.jetty.util.thread.QueuedThreadPool": { "level": "ERROR" }, "org.eclipse.jetty.util.thread.ReservedThreadExecutor": { "level": "ERROR" }, "org.eclipse.jetty.util.thread.ShutdownThread": { "level": "ERROR" }, "org.eclipse.jetty.util.thread.ThreadPoolBudget": { "level": "ERROR" }, "org.eclipse.jetty.util.thread.strategy.EatWhatYouKill": { "level": "ERROR" }, "org.reflections": { "level": "ERROR" }, "org.reflections.Reflections": { "level": "ERROR" }, "root": { "level": "WARN" } } ```
sknot-rh commented 3 years ago

Hm... that is strange. Did you try rolling the Connect pod manually? I know that is not a solution, but at least we would know whether it helps.

mikekamornikov commented 3 years ago

What do you want to check this way? The Strimzi Kafka Connect image without the operator? I can try that...

sknot-rh commented 3 years ago

Whether, after the pod restart, the correct logger level is used and no WARN records for io.debezium.util.SchemaNameAdjuster are printed.

mikekamornikov commented 3 years ago

@sknot-rh It took some time, but here's what I've got. I deleted the pod two days ago and let the Deployment recreate it (see the 40h AGE below):

Every 2.0s: kubectl get pods                                                               mkamornikov: Fri May 21 15:33:33 2021

NAME                                                              READY   STATUS    RESTARTS   AGE
***-connect-644bbfddbc-whffw                    1/1     Running   0          40h

After that, the connector (Debezium MySQL) was paused and restarted a day later:

May 20, 2021 11:37:31 PM com.github.shyiko.mysql.binlog.BinaryLogClient connect
INFO: Connected to ***.us-west-2.rds.amazonaws.com:3306 at mysql-bin-changelog.482880/134782545 (sid:6401, cid:59093907)

I still see warnings from the logger which is supposed to log only errors:

{
  "@timestamp": "2021-05-21T02:31:19.913Z",
  "source_host": "***-connect-644bbfddbc-whffw",
  "file": "SchemaNameAdjuster.java",
  "method": "lambda$create$1",
  "level": "WARN",
  "line_number": "168",
  "thread_name": "blc-***.us-west-2.rds.amazonaws.com:3306",
  "@version": 1,
  "logger_name": "io.debezium.util.SchemaNameAdjuster",
  "message": "The Kafka Connect schema name '***.Envelope' is not a valid Avro schema name, so replacing with '***.Envelope'",
  "class": "io.debezium.util.SchemaNameAdjuster",
  "mdc": {
    "dbz.connectorContext": "binlog",
    "dbz.connectorType": "MySQL",
    "connector.context": "[***-cdc-source|task-0] ",
    "dbz.connectorName": "***"
  }
}

Logger settings are the same, and the API returns:

  "io.debezium.util.SchemaNameAdjuster": {
    "level": "ERROR"
  },
sknot-rh commented 3 years ago

Another user was experiencing a similar issue. The fix should be done in https://github.com/strimzi/strimzi-kafka-operator/pull/5145

scholzj commented 3 years ago

Closing as it should be fixed in #5145