confluentinc / schema-registry

Confluent Schema Registry for Kafka
https://docs.confluent.io/current/schema-registry/docs/index.html

ServiceLoader: SaslBasicAuthCredentialProvider not a subtype #1028

Open dylanmei opened 5 years ago

dylanmei commented 5 years ago

When following these instructions and adding any kind of basic.auth.credentials.source to a Key or Value Converter, I trigger a ServiceLoader failure in BasicAuthCredentialProviderFactory when launching a connector (S3, in my case):

[2019-02-22 14:01:10,167] INFO AvroConverterConfig values: 
    schema.registry.url = [http://schema-registry:8080]
    basic.auth.user.info = [hidden]
    auto.register.schemas = false
    max.schemas.per.subject = 1000
    basic.auth.credentials.source = USER_INFO
    schema.registry.basic.auth.user.info = [hidden]
    value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
    key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
  (io.confluent.connect.avro.AvroConverterConfig)
[2019-02-22 14:01:10,182] ERROR Failed to start task s3-connect-test-0 (org.apache.kafka.connect.runtime.Worker)
 java.util.ServiceConfigurationError: io.confluent.kafka.schemaregistry.client.security.basicauth.BasicAuthCredentialProvider: Provider io.confluent.kafka.schemaregistry.client.security.basicauth.SaslBasicAuthCredentialProvider not a subtype
    at java.util.ServiceLoader.fail(ServiceLoader.java:239)
    at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
    at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
    at io.confluent.kafka.schemaregistry.client.security.basicauth.BasicAuthCredentialProviderFactory.<clinit>(BasicAuthCredentialProviderFactory.java:30)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.configureRestService(CachedSchemaRegistryClient.java:108)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.<init>(CachedSchemaRegistryClient.java:93)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.<init>(CachedSchemaRegistryClient.java:80)
    at io.confluent.connect.avro.AvroConverter.configure(AvroConverter.java:65)
    at org.apache.kafka.connect.runtime.isolation.Plugins.newConverter(Plugins.java:266)
    at org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:433)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:873)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1600(DistributedHerder.java:111)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:888)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:884)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
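For what it's worth, a ServiceLoader "not a subtype" error typically means the provider interface has been loaded by two different classloaders, so the provider class implements a *different* copy of the interface than the one ServiceLoader is resolving against. A minimal, self-contained sketch of that failure mode (class names here are hypothetical; a runtime-compiled interface stands in for the BasicAuthCredentialProvider SPI):

```java
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class NotASubtypeDemo {
    public static void main(String[] args) throws Exception {
        // Compile a tiny interface and implementation into a temp directory.
        Path dir = Files.createTempDirectory("svc");
        Files.writeString(dir.resolve("Provider.java"), "public interface Provider {}");
        Files.writeString(dir.resolve("Impl.java"), "public class Impl implements Provider {}");
        ToolProvider.getSystemJavaCompiler().run(null, null, null,
            dir.resolve("Provider.java").toString(), dir.resolve("Impl.java").toString());

        // Two isolated classloaders, each defining its own copy of Provider --
        // analogous to the same jar sitting on both the Connect runtime
        // classpath and a plugin directory.
        URL[] urls = { dir.toUri().toURL() };
        ClassLoader a = new URLClassLoader(urls, null);
        ClassLoader b = new URLClassLoader(urls, null);
        Class<?> ifaceA = a.loadClass("Provider");
        Class<?> implB = b.loadClass("Impl");

        // Same fully-qualified names, different defining loaders: the subtype
        // check fails, which ServiceLoader reports as "Provider ... not a subtype".
        System.out.println(ifaceA.isAssignableFrom(implB)); // prints "false"
    }
}
```

This matches the symptom here: kafka-schema-registry-client is present both on the worker's CLASSPATH and inside plugin directories, and Connect's plugin isolation gives each its own classloader.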

I get the same results when I use any of the URL, USER_INFO, or SASL_INHERIT options. I can reproduce with these steps:

  1. run the Docker Compose file, below
  2. create a topic and add data: echo "hello" | kafkacat -b localhost:9092 -t connect-test -P -- it doesn't matter that this isn't Avro; we never get that far
  3. submit the sink connector config: curl -XPOST -H "Content-Type: application/json" http://localhost:8083/connectors -d @connector-config.json -- it doesn't matter whether you have valid AWS credentials; we never get that far

connector-config.json

{
  "name": "s3-connect-test",
  "config": {
    "locale": "en",
    "topics": "connect-test",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
    "key.converter.schemas.enable": "false",
    "schema.compatibility": "NONE",

    "partition.duration.ms": "600000",
    "topics.dir": "",
    "flush.size": "20000",
    "tasks.max": "1",
    "timezone": "UTC",
    "s3.part.size": "33554432",
    "s3.compression.type": "none",
    "s3.acl.canned": "bucket-owner-full-control",
    "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
    "compression": "none",
    "rotate.schedule.interval.ms": "600000",
    "s3.bucket.name": "some-bucket-doesnt-matter"
  }
}

docker-compose.yml

version: "3"

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    ports:
    - 2181:2181
    logging: { driver: none }

  broker:
    image: confluentinc/cp-kafka:5.1.1
    ports:
    - 9092:9092
    - 9011:9011
    environment:
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_LISTENERS: "PUBLIC://0.0.0.0:9092,INTERNAL://0.0.0.0:19092"
      KAFKA_ADVERTISED_LISTENERS: "PUBLIC://localhost:9092,INTERNAL://broker:19092"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "PUBLIC:PLAINTEXT,INTERNAL:PLAINTEXT"
      KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
      KAFKA_NUM_PARTITIONS: 2
      KAFKA_DEFAULT_REPLICATION_FACTOR: 1
      KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS: 10
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on: [zookeeper]
    logging: { driver: none }

  schema-registry:
    image: confluentinc/cp-schema-registry:5.1.1
    hostname: schema-registry
    ports:
    - 8080:8080
    - 9012:9011
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8080"

      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "PLAINTEXT://broker:19092"
      SCHEMA_REGISTRY_KAFKASTORE_TOPIC: __schemas
      SCHEMA_REGISTRY_KAFKASTORE_TOPIC_REPLICATION_FACTOR: 1
      SCHEMA_REGISTRY_KAFKASTORE_TIMEOUT_MS: 15000
    depends_on: [broker]
    logging: { driver: none }

  connect:
    image: confluentinc/cp-kafka-connect:5.1.1
    ports:
    - 8083:8083
    - 9013:9011
    environment:
      CONNECT_BOOTSTRAP_SERVERS: broker:19092
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: connect
      CONNECT_CONFIG_STORAGE_TOPIC: __connect_configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_TOPIC: __connect_offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: __connect_status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1

      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter

      CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_KEY_CONVERTER_AUTO_REGISTER_SCHEMAS: "false"
      CONNECT_KEY_CONVERTER_BASIC_AUTH_CREDENTIALS_SOURCE: USER_INFO
      CONNECT_KEY_CONVERTER_BASIC_AUTH_USER_INFO: "foo:bar"
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8080

      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_AUTO_REGISTER_SCHEMAS: "false"
      CONNECT_VALUE_CONVERTER_BASIC_AUTH_CREDENTIALS_SOURCE: USER_INFO
      CONNECT_VALUE_CONVERTER_BASIC_AUTH_USER_INFO: "foo:bar"
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8080

      CONNECT_PLUGIN_PATH: /usr/share/java
    depends_on: [broker]

I've tried many variations on plugin.path, tried cleaning up the CLASSPATH, removed every other connector, and so on. The only way I can get this to work is by deleting the jar /usr/share/java/kafka-connect-storage-common/kafka-schema-registry-client-5.1.1.jar, which seems to conflict with the many other copies of this jar on the CLASSPATH.
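To see how many copies are in play, a sketch along these lines can help: walking the plugin path and listing every kafka-schema-registry-client jar. The temp-directory layout below is a fabricated stand-in; pointing the same walk at /usr/share/java on the cp-kafka-connect image shows which bundles ship their own copy.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class FindDuplicateClientJars {
    public static void main(String[] args) throws IOException {
        // Stand-in for the Connect plugin path; two empty files simulate
        // the duplicate jars shipped by different connector bundles.
        Path pluginPath = Files.createTempDirectory("plugins");
        Files.createDirectories(pluginPath.resolve("kafka-connect-storage-common"));
        Files.createDirectories(pluginPath.resolve("kafka-connect-avro-converter"));
        Files.createFile(pluginPath.resolve(
            "kafka-connect-storage-common/kafka-schema-registry-client-5.1.1.jar"));
        Files.createFile(pluginPath.resolve(
            "kafka-connect-avro-converter/kafka-schema-registry-client-5.1.1.jar"));

        // Every hit beyond the first is a candidate for the conflict above:
        // each plugin directory gets its own classloader, so duplicate copies
        // of the client jar define the provider interface more than once.
        try (Stream<Path> walk = Files.walk(pluginPath)) {
            walk.filter(p -> p.getFileName().toString()
                    .startsWith("kafka-schema-registry-client"))
                .forEach(System.out::println);
        }
    }
}
```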

rchady commented 5 years ago

I am seeing the same error on 5.1.2. However, even after removing that jar I still get the error. Any ideas on this?

AnatolyTikhonov commented 5 years ago

I have a similar error on 5.1.3 and 5.2.1. Following the above error, I also see:

java.lang.NoClassDefFoundError: Could not initialize class io.confluent.kafka.schemaregistry.client.security.basicauth.BasicAuthCredentialProviderFactory
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.configureRestService(CachedSchemaRegistryClient.java:107)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.<init>(CachedSchemaRegistryClient.java:92)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.<init>(CachedSchemaRegistryClient.java:79)
AnatolyTikhonov commented 5 years ago

Update: I solved this problem by deleting the dependency "kafka-schema-registry-client" from my custom connector, and by explicitly excluding it from kafka-avro-serializer and kafka-connect-avro-converter as follows:

<dependency>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-connect-avro-converter</artifactId>
    <version>${confluent.version}</version>
    <exclusions>
        <exclusion>
            <groupId>io.confluent</groupId>
            <artifactId>kafka-schema-registry-client</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-avro-serializer</artifactId>
    <version>${confluent.version}</version>
    <exclusions>
        <exclusion>
            <groupId>io.confluent</groupId>
            <artifactId>kafka-schema-registry-client</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Hope this helps @rchady

stefanthoss commented 5 years ago

I also see the errors

java.util.ServiceConfigurationError: io.confluent.kafka.schemaregistry.client.security.basicauth.BasicAuthCredentialProvider: Provider io.confluent.kafka.schemaregistry.client.security.basicauth.SaslBasicAuthCredentialProvider not a subtype
    at java.util.ServiceLoader.fail(ServiceLoader.java:239)
    at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
    at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
    at io.confluent.kafka.schemaregistry.client.security.basicauth.BasicAuthCredentialProviderFactory.<clinit>(BasicAuthCredentialProviderFactory.java:30)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.configureRestService(CachedSchemaRegistryClient.java:107)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.<init>(CachedSchemaRegistryClient.java:92)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.<init>(CachedSchemaRegistryClient.java:79)

and

java.lang.NoClassDefFoundError: Could not initialize class io.confluent.kafka.schemaregistry.client.security.basicauth.BasicAuthCredentialProviderFactory
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.configureRestService(CachedSchemaRegistryClient.java:107)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.<init>(CachedSchemaRegistryClient.java:92)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.<init>(CachedSchemaRegistryClient.java:79)

on my Kafka Connect cluster, similar to what @dylanmei experienced. I'm running the Docker image confluentinc/cp-kafka-connect:5.1.2, so the trick of excluding a Maven dependency doesn't apply. Restarting the connectors does not resolve the issue, nor does restarting the Docker container. Any ideas on how to resolve and prevent it?