confluentinc / ide-sidecar

Sidecar application used by Confluent for VS Code, as a local proxy for Confluent Cloud, Confluent Platform and local Kafka clusters, to help users build streaming applications.
Apache License 2.0

Added basic and API key+secret credentials for Kafka and SR cluster configs in direct connections #152

Closed: rhauch closed this 19 hours ago

rhauch commented 5 days ago

Summary of Changes

Resolves #124

Adds basic and API key+secret credentials to direct connections, including validating the credentials in the Connections API and using them when connecting to the Kafka cluster and Schema Registry (SR) defined in the direct connection spec.

New Credentials types

The Credentials interface and the BasicCredentials and ApiKeyAndSecret record types have methods that build the auth-related configuration properties for Kafka and SR clients. Each concrete Credentials type customizes this logic, with method parameters supplying any information that is not contained in the Credentials object itself.

The Credentials interface defines three methods that each concrete subtype will likely override; a hedged sketch of that contract follows:
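Something like the following, where the method names and parameter types are illustrative guesses rather than the PR's exact API:

```java
import java.util.Map;

// Hypothetical sketch of the Credentials contract described above; the
// method names and parameter types are illustrative, not the exact API.
public interface Credentials {

  // Auth-related properties for the Kafka admin, producer, and consumer
  // clients. The clusterId parameter illustrates how callers supply
  // information that is not part of the Credentials object itself.
  Map<String, String> kafkaClientProperties(String clusterId);

  // Auth-related properties for Schema Registry clients.
  Map<String, String> schemaRegistryClientProperties(String clusterId);

  // Basic structural validation, e.g., non-empty fields within size limits.
  boolean isValid();
}
```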

New Redactable types for write-only objects

The BasicCredentials record type has a password field, and the ApiKeyAndSecret record type has an api_secret field. Because these fields contain secrets, they must always be masked (e.g., ********) when written to the log or included in API responses.

To do this, this PR defines new Password and ApiSecret classes that extend a new Redactable abstract class, which represents any literal String value that must be redacted in all API responses and never logged in messages (or otherwise output by the sidecar). These are essentially write-only values that prevent external reads. The Redactable class includes a custom serializer that always writes a masked representation consisting of exactly eight asterisk (*) characters, regardless of the actual literal value. The toString() method outputs the same masked representation, primarily to help prevent sensitive literal values from appearing in logs or exception messages. There are also a few methods that can be used in validation, such as checking whether the value is empty or longer than some size. The hashCode() and equals() methods never use the value. All of these methods are marked final so that subclasses cannot alter this behavior.
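A minimal sketch of that behavior, assuming a single value field and illustrative helper names (the real class also registers the custom JSON serializer that emits the same mask):

```java
// Minimal sketch of the Redactable behavior described above; field and
// helper names are illustrative. The real class also uses a custom JSON
// serializer that writes the same eight-asterisk mask.
public abstract class Redactable {

  public static final String MASKED = "********"; // exactly eight asterisks

  private final String value;

  protected Redactable(String value) {
    this.value = value;
  }

  // Validation helpers that never expose the literal value.
  public final boolean isEmpty() {
    return value == null || value.isEmpty();
  }

  public final boolean isLongerThan(int maxLength) {
    return value != null && value.length() > maxLength;
  }

  // Always emit the masked form, so logs and exception messages never
  // contain the secret.
  @Override
  public final String toString() {
    return MASKED;
  }

  // Never derived from the secret value; identity-based equality is one
  // way to satisfy that constraint.
  @Override
  public final int hashCode() {
    return System.identityHashCode(this);
  }

  @Override
  public final boolean equals(Object obj) {
    return this == obj;
  }
}
```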

Building Kafka and SR client configurations

The logic to build the complete configurations for the Kafka admin, consumer, and producer clients and for the Schema Registry clients has moved into a new ClientConfigurator bean that is @ApplicationScoped. Its methods rely upon the Credentials methods for the auth-related config properties, and upon the KafkaCluster or SchemaRegistry cluster for the remaining configuration properties.

The ClientConfigurator bean’s methods take a boolean parameter that controls whether secrets in the resulting configuration are redacted. This lets us expose the connection properties to the user, say to allow them to copy the connection properties into their own application, and to use the generated (but redacted) connection configs in the template service. The AdminClients, KafkaProducerClients, KafkaConsumerFactory and SchemaRegistryClients beans use the configurator without redaction.
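As one illustration of how the redact flag could be honored, here is a hedged sketch; the property keys are standard Kafka and SR client keys, but the helper name and the exact key set are assumptions:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Hedged sketch of a redaction step; the helper name and the set of
// secret-bearing keys are assumptions, not the PR's actual implementation.
public class RedactionSketch {

  // Assumed set of secret-bearing client properties.
  private static final Set<String> SECRET_KEYS = Set.of(
      "sasl.jaas.config",
      "basic.auth.user.info");

  public Map<String, String> maybeRedact(Map<String, String> config, boolean redact) {
    if (!redact) {
      return config;
    }
    Map<String, String> copy = new LinkedHashMap<>(config);
    for (String key : SECRET_KEYS) {
      copy.computeIfPresent(key, (k, v) -> "********");
    }
    return copy;
  }
}
```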

New methods have been added to the ConnectionState class to make it easy to get the Credentials for a Kafka cluster or a Schema Registry cluster with a given ID. The DirectConnectionState subclass always returns the credentials for its one Kafka cluster or one SR cluster. In the future, other ConnectionState subclasses (e.g., for CP MDS) might need to maintain a map of credentials by cluster ID, for any clusters that do not share the MDS credentials (e.g., a Kafka or SR cluster that does not delegate authN to MDS).
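The new lookup methods might look roughly like this; the signatures are illustrative, not the exact API:

```java
import java.util.Optional;

// Hedged sketch of the lookup methods added to ConnectionState; the
// signatures and names are illustrative.
public abstract class ConnectionStateSketch {

  // Credentials for the Kafka cluster with the given ID, if known.
  public abstract Optional<Credentials> getKafkaCredentials(String kafkaClusterId);

  // Credentials for the Schema Registry cluster with the given ID, if known.
  public abstract Optional<Credentials> getSchemaRegistryCredentials(String schemaRegistryClusterId);

  // Stand-in for the real Credentials type sketched earlier.
  public interface Credentials {}
}
```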

Adding other types of credentials in the future

In the future, supporting other types of authN credentials, such as OAuth 2.0, mTLS, Kerberos (SASL/GSSAPI), etc., only requires defining new Credentials subtypes and implementing the methods that construct the auth-related client properties from the subtype-specific credential information.
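For instance, a hedged sketch of what a subtype implementation might look like, using the hypothetical interface sketched earlier; the JAAS config template is the standard SASL/PLAIN form, while the names and structure are assumptions:

```java
import java.util.Map;

// Illustrative sketch only: a basic-auth style subtype building standard
// SASL/PLAIN client properties. Names and structure are assumptions.
public record BasicCredentialsSketch(String username, String password) {

  public Map<String, String> kafkaClientProperties(String clusterId) {
    return Map.of(
        "security.protocol", "SASL_SSL",
        "sasl.mechanism", "PLAIN",
        "sasl.jaas.config",
        "org.apache.kafka.common.security.plain.PlainLoginModule required"
            + " username=\"" + username + "\""
            + " password=\"" + password + "\";");
  }
}
```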

Limitations

There are a few shortcuts taken for direct connections that will be addressed in a subsequent PR as part of #123:

Testing

I've done some manual testing with quarkus:dev and the native executable, using the REST API to create a direct connection to a CCloud cluster with an API key and secret, and have verified that the admin and consumer clients are built correctly and work successfully with the remote cluster.

Pull request checklist

Please check if your PR fulfills the following (if applicable):

rhauch commented 2 days ago

Several builds failed with OOM errors when trying to start the Apicurio container during our tests. Disabling the Apicurio dev services that start an Apicurio container during testing worked around the problem, and it will be fixed separately with #156.

Also, we should consider upgrading the CI machine types (#155), since we're clearly getting close to the limits of our current CI machines. Disabling the Apicurio container will help in the short term, but we'll soon be adding other containers for CP.

rhauch commented 1 day ago

I've updated the PR description, and I think this PR is in a state that's ready to be merged, pending approval.