confluentinc / kafka-connect-storage-cloud

Kafka Connect suite of connectors for Cloud storage (Amazon S3)

Support for custom truststore at Connector level #546

Open · pjuarezd opened this issue 2 years ago

pjuarezd commented 2 years ago

Would be great to have support at the S3 Sink connector level for an additional truststore, so that private CA and self-signed certificates can be trusted.

For instance, when using the store.url property to write to an S3-compatible server over HTTPS: if the S3-compatible server is set up with a private CA or self-signed certificates in an air-gapped environment, the S3 Sink connector currently fails SSL verification.

The way some users have been working around this is by adding the private CA or self-signed certificates to the default Java cacerts, but that requires root access to the container/machine running the Java process, which is not always available.
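
For reference, a minimal sketch of that workaround (the alias and paths are placeholders, and the `-cacerts` shortcut assumes a Java 11+ keytool):

```sh
# Import the private CA into the JVM's default trust store. This needs
# write access to the cacerts file, which is why it usually requires
# root inside a container. "changeit" is the default cacerts password.
keytool -importcert -cacerts -storepass changeit \
  -alias my-private-ca -file /path/to/private-ca.pem -noprompt
```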

pjuarezd commented 2 years ago

Created a PR to support the custom truststore: https://github.com/confluentinc/kafka-connect-storage-cloud/pull/547
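
Conceptually, the connector-level configuration would look something like the following sketch (the truststore property names and values here are hypothetical illustrations, not necessarily the ones the PR defines):

```properties
# Hypothetical connector-level truststore settings; the actual
# property names are whatever PR #547 introduces.
store.url=https://my-s3-compatible-server:9000
s3.ssl.truststore.location=/etc/connect/secrets/truststore.jks
s3.ssl.truststore.password=placeholder-password
```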

OneCricketeer commented 2 years ago

Why not add it to the parent project, in StorageSinkConnectorConfig? Or in Kafka source itself, so that all connectors may use this property?

Also, why can't you use the consumer.override prefix to set these values already?
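
For reference, assuming the worker permits client overrides, the prefix is used like this in a connector config (paths and passwords are placeholders; the suffixes are standard Kafka client SSL settings):

```properties
# Overrides for the sink's embedded Kafka *consumer*; note this governs
# the connection to the Kafka brokers, not the connection to S3.
consumer.override.ssl.truststore.location=/etc/connect/secrets/kafka-truststore.jks
consumer.override.ssl.truststore.password=placeholder-password
```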

joshuagrisham-karolinska commented 1 year ago

> Also, why can't you use the consumer.override prefix to set these values already?

Hi! Sorry to drag an older one back up, but we ran into this problem today, and it seemed worth raising again since I am sure I am not the only one currently hitting it.

I can report that at least in 7.3.0 (edit: and kafka-connect-s3 version 10.5.1), using the consumer.override prefix does not work for pointing the S3 Sink Connector at a custom S3 endpoint with a private server certificate.

My working assumption is that this is because the property applies to "consuming" messages from Kafka, which is already working fine and configured at the server level, while the underlying AWS libraries used for writing to S3 do not appear to read this truststore property at all.

For kicks I also tried the producer.override prefix, but I was pretty sure that would not do anything either, since we are not writing to Kafka with a producer client here (and yes, it did nothing ;) ).
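
For what it's worth, one root-free variant of the cacerts workaround is to point the whole worker JVM at a custom truststore via the standard JSSE system properties. A sketch, assuming the worker is launched through the stock scripts that honor KAFKA_OPTS (paths are placeholders, and note this replaces the default trust store for every TLS connection the worker makes, S3 included):

```sh
# Standard JSSE system properties; the AWS S3 client falls back to the
# JVM's default trust material, so a truststore set this way applies to
# the S3 endpoint as well. The file must also contain any public CAs
# the worker still needs to trust.
export KAFKA_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=placeholder-password"
bin/connect-distributed.sh config/connect-distributed.properties
```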

OneCricketeer commented 1 year ago

The override prefix is not specific to any connector. It's a base feature of the Connect API and has been available since around 2.3, I want to say - https://kafka.apache.org/documentation/#connect_running

You do need to set the override policy on the worker first; otherwise, no, the override will not apply.
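
That policy is the worker-level connector.client.config.override.policy setting (KIP-458, Kafka 2.3+); for example, in the worker properties:

```properties
# Allow connectors to override their embedded Kafka client settings;
# the default policy is None, which rejects any *.override.* entries.
connector.client.config.override.policy=All
```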

pjuarezd commented 10 months ago

hi @OneCricketeer, circling back to this issue:

The overall goal is to use the setting store.url: https://custom-endpoint.url, i.e. an endpoint other than the default AWS S3 endpoint, for example:
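
A minimal sketch of that part of the connector config (the bucket name is a placeholder):

```properties
connector.class=io.confluent.connect.s3.S3SinkConnector
# S3-compatible endpoint instead of the default AWS one
store.url=https://custom-endpoint.url
s3.bucket.name=my-bucket
```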

I understand you to be suggesting that support for a custom truststore might make more sense in StorageSinkConnectorConfig in the parent project, https://github.com/confluentinc/kafka-connect-storage-common; I could do that.

Can you share documentation on how to target a custom S3 endpoint and trust a custom TLS certificate using consumer.override? I found this example in the https://github.com/confluentinc/confluent-kubernetes-examples repo; is this what you meant?

https://github.com/confluentinc/confluent-kubernetes-examples/blob/45331dfab5d08e8513b5532016a9c9b7b7e1553e/blueprints/cp-rbac-mtls-lb/cp-apps/connectors/connector_ss.yaml#L27-L28


OneCricketeer commented 10 months ago

The mentioned override settings have no connection to S3 or the connector itself; they only affect the Kafka client settings, e.g. the TLS / SASL connection to the brokers.