confluentinc / librdkafka

The Apache Kafka C/C++ library

ssl.ca.location and related settings do not affect libcurl requests for OIDC token endpoints #3751

Open lpsinger opened 2 years ago

lpsinger commented 2 years ago

libcurl is used to request the OIDC token endpoint, but libcurl's SSL settings are not configured by librdkafka.

Description

The ssl.ca.location setting in particular is important because the system on which librdkafka is run may have CA certificates in a different location than the system on which librdkafka was built. However the SSL settings only affect the SSL context used for the broker connection, and do not affect the SSL context used by libcurl for HTTPS requests.
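A minimal config sketch of the affected setup (broker address, endpoint URL, credentials, and paths are all placeholders): the ssl.ca.location below is honored for the broker TLS handshake, but the libcurl request to the token endpoint ignores it.

```python
# Hypothetical confluent-kafka-python configuration illustrating the problem.
conf = {
    "bootstrap.servers": "broker.example.com:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "OAUTHBEARER",
    "sasl.oauthbearer.method": "oidc",
    "sasl.oauthbearer.token.endpoint.url": "https://idp.example.com/token",
    "sasl.oauthbearer.client.id": "my-client",
    "sasl.oauthbearer.client.secret": "my-secret",
    # Applied to the broker connection only; the libcurl HTTPS request for
    # the token falls back to the CA path compiled in at build time, which
    # may not exist on the system where librdkafka actually runs.
    "ssl.ca.location": "/etc/pki/tls/certs/ca-bundle.crt",
}
```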

How to reproduce

N/A

Checklist

IMPORTANT: We will close issues where the checklist has not been completed.

Please provide the following information:

edenhill commented 2 years ago

Right, though I wonder if the broker and the token endpoint can always be assumed to require the same CA chain, seeing how they could be run by different organizations.

Maybe we need a separate sasl.oauthbearer.oidc.ssl.ca.location property that defaults to ssl.ca.location ?

lpsinger commented 2 years ago

Indeed. I was hoping that there would be some guidance from the Kafka Java client config, but there is not.

But I think that the common case that both need to be able to find the OS CA bundle is the most important one.

edenhill commented 2 years ago

Okay, so I think we have three different scenarios we want to support:

  1. inherit: Use the same CA chain as ssl.ca.location. This should be the default.
  2. system: Use the system default CA chain.
  3. /some/path: Use a specific CA chain.
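The three scenarios could resolve as in this sketch (the property name and function are assumptions based on the proposal above, not an existing librdkafka API):

```python
def resolve_oidc_ca_location(conf):
    """Sketch of how a hypothetical sasl.oauthbearer.oidc.ssl.ca.location
    property could resolve, following the three scenarios above."""
    value = conf.get("sasl.oauthbearer.oidc.ssl.ca.location", "inherit")
    if value == "inherit":
        # Scenario 1 (default): reuse the broker's CA chain.
        return conf.get("ssl.ca.location")
    if value == "system":
        # Scenario 2: None here meaning "let libcurl use the system default".
        return None
    # Scenario 3: an explicit path to a CA bundle.
    return value
```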

lpsinger commented 2 years ago

Unfortunately the new KIP-768 support in confluent-kafka-python does not work at all in the wheels because the ssl.ca.location = 'probe' behavior has no impact on curl.

I think that this is going to evolve into a major refactoring of the ssl context configuration, because almost every single setting under ssl.* will need to be replicated under sasl.oauthbearer.oidc.ssl.*. For instance, one may legitimately want to do mTLS with the IdP.

RafalSkolasinski commented 1 year ago

I don't see any activity here so just wanted to check - is there currently any workaround for the situation where CAs are not located under /etc/ssl/certs/ca-certificates.crt?

lpsinger commented 1 year ago

> I don't see any activity here so just wanted to check - is there currently any workaround for the situation where CAs are not located under /etc/ssl/certs/ca-certificates.crt?

Our current workaround is to not use confluent-kafka-python's built-in KIP-768 support and instead to use a workalike oauth_cb written in pure Python. Here is an example in our project: https://github.com/nasa-gcn/gcn-kafka-python/blob/main/gcn_kafka/oidc.py
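A condensed sketch of that kind of workalike (function names are illustrative, not the gcn-kafka API): the client performs the client_credentials grant itself, so Python's ssl module, rather than the libcurl bundled into the wheel, decides which CA bundle to trust.

```python
import json
import time
import urllib.parse
import urllib.request


def fetch_oidc_token(token_url, client_id, client_secret, fetch=None):
    """Fetch an OIDC token in pure Python, sidestepping librdkafka's
    built-in KIP-768 support and its libcurl CA-bundle problem."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    if fetch is None:
        # Default HTTP POST; injectable so the logic can be tested offline.
        def fetch(url, data):
            with urllib.request.urlopen(url, data) as resp:
                return json.load(resp)
    payload = fetch(token_url, body)
    # librdkafka's oauth_cb contract: (token, expiry as a Unix timestamp).
    return payload["access_token"], time.time() + payload["expires_in"]
```

The callback is then registered via the client's `oauth_cb` config property, e.g. `conf["oauth_cb"] = lambda _cfg: fetch_oidc_token(...)`, instead of setting `sasl.oauthbearer.method` to `oidc`.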

RafalSkolasinski commented 1 year ago

We are using golang clients in our context. If I understand you right @lpsinger, you are basically fetching JWT tokens on your own and passing them to the Kafka client?

lpsinger commented 1 year ago

Yes. The OAuth callback mechanism exists in librdkafka too, so you could implement an analogous workaround in Go.

JohnPreston commented 8 months ago

FYI, I didn't face this issue with Python images built on Amazon Linux 2/2023, but it does seem to be a problem for all Debian-based builds. My workaround was simply to symlink the CA bundle to the path where the library complains it can't find it.

For example, see https://github.com/JohnPreston/kafka-overwatch/blob/main/Dockerfile#L21-L23
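The same symlink workaround as a sketch (the paths shown are the common RedHat-style location manylinux wheels tend to expect versus Debian's bundle; adjust to whatever path the error message actually reports):

```python
import os


def link_ca_bundle(expected, actual):
    """Point the CA-bundle path the library probes at the bundle the
    distribution actually ships."""
    os.makedirs(os.path.dirname(expected), exist_ok=True)
    if not os.path.exists(expected):
        os.symlink(actual, expected)


# In a Debian-based image, something along the lines of:
# link_ca_bundle("/etc/pki/tls/certs/ca-bundle.crt",
#                "/etc/ssl/certs/ca-certificates.crt")
```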

But that does warrant the idea of exposing sasl.oauth.ca.location (or something along those lines) as an option for users to override the default.

thomasnal commented 7 months ago

@lpsinger This post is tremendously useful. I have implemented the callback; however, librdkafka seems to have a bug handling OAuth extensions together with the callback. Do you use extensions in your case?

The callback retrieves the token - thanks for the sample. But Kafka fails the whole authentication with this error:

SASL authentication error: Authentication failed: 1 extensions are invalid! 
They are: logicalCluster: CLUSTER_ID_MISSING_OR_EMPTY (after 5268ms in state AUTH_REQ) (_AUTHENTICATION)

I verified the extensions attribute is given the correct value: 'sasl.oauthbearer.extensions': 'logicalCluster=***,identityPoolId=***'. Since I am connecting to Confluent Kafka, the extensions attribute is required.

lpsinger commented 7 months ago

> Do you use extensions in your case?

No, we are not setting sasl.oauthbearer.extensions in our project.

thomasnal commented 7 months ago

Thanks. I have just been through the Kafka client source code. I figured out it expects the callback to return extensions as an extra element of the tuple. For anyone else: it expects the extensions in the form of a dict.

ext = {'logicalCluster': '***', 'identityPoolId': '***'}
principal = ''  # No complaints when empty string
return token["access_token"], token["expires_at"], principal, ext
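Putting that together, here is a hedged sketch of a callback factory returning the four-element tuple (the helper name is invented, and the extension values remain placeholders for your Confluent cluster and identity-pool IDs):

```python
import time


def make_oauth_cb(fetch_token, extensions):
    """Wrap a token fetcher into the callback shape confluent-kafka-python
    expects when SASL extensions are needed."""
    def oauth_cb(_oauthbearer_config):
        token, expires_at = fetch_token()
        # Four-element tuple: token, expiry, principal, extensions dict.
        # An empty principal string is accepted.
        return token, expires_at, "", extensions
    return oauth_cb


# Example wiring; the lambda stands in for a real token fetch.
cb = make_oauth_cb(lambda: ("tok", time.time() + 300),
                   {"logicalCluster": "***", "identityPoolId": "***"})
```

The resulting `cb` is then passed as the client's `oauth_cb` config property alongside `sasl.mechanism: OAUTHBEARER`.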