confluent-cloud-service-broker

Service Broker for Apache Kafka

Confluent Cloud Shared Service Broker

This is a Kubernetes and Cloud Foundry service broker for provisioning and granting access to Kafka topics on Confluent Cloud or on a dedicated Kafka cluster. It implements the Open Service Broker API: https://www.openservicebrokerapi.org .
It does not provision Kafka clusters itself.
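As an illustration, the broker's Open Service Broker catalog response might look like the sketch below. The service name (confluent-kafka) and plan name (gold) are taken from the Cloud Foundry examples later in this README; the IDs and descriptions are placeholders, not the broker's actual values:

```json
{
  "services": [{
    "name": "confluent-kafka",
    "id": "<service-guid>",
    "description": "Provision and grant access to Kafka topics on Confluent Cloud",
    "bindable": true,
    "plans": [{
      "name": "gold",
      "id": "<plan-guid>",
      "description": "Kafka topic on the shared Confluent Cloud cluster"
    }]
  }]
}
```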

The service broker grants access to a central multi-tenant cluster and provides the following functionality:

Prerequisites

Testing

For integration testing, a local ZooKeeper server and Kafka broker are started.

Running locally

Running on Kubernetes

See the kubernetes subdirectory for installation on Kubernetes. This has been tested with Google Kubernetes Engine (GKE). The steps are as follows:

  1. Copy src/main/resources/application-ccloud.yaml to src/main/resources/<your-name>.yaml and adjust the credentials for accessing Confluent Cloud.
  2. Build the project: mvn clean package
  3. Copy the resulting jar to the kubernetes subdirectory: cp target/kafka-service-broker-1.0-SNAPSHOT.jar kubernetes/
  4. Edit kubernetes/Dockerfile and set the environment variable SPRING_PROFILES_ACTIVE to <your-name>.
  5. Build the image: cd kubernetes; docker build .
  6. Push the image to a container registry that can be accessed by your Kubernetes cluster. If using Google Kubernetes Engine, you can use the build.sh script for this.
  7. Make sure the catalog namespace exists. Deploy the service broker: kubectl apply -f service-broker.yaml
  8. Install the service catalog API extension: install-service-catalog.sh
  9. Create a Kubernetes Service object for accessing the service broker: kubectl apply -f service-broker-service.yaml
  10. Register the service broker with Kubernetes: kubectl apply -f service-broker-registration.yaml
  11. Create one or more Confluent Cloud service accounts and associated API keys via the ccloud CLI. Post these service accounts to the service broker so that it can supply them to client applications for accessing the topics. See the script post-accounts.sh for details.
  12. Create a topic via the service broker: kubectl apply -f service-instance.yaml
  13. Bind the topic: kubectl apply -f service-binding.yaml. This creates a Kubernetes Secret object that can be referenced from your Confluent Cloud client application.
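The service-instance.yaml and service-binding.yaml manifests applied in the last two steps are Kubernetes Service Catalog objects. A hypothetical sketch is shown below; the class and plan names are assumptions based on the Cloud Foundry examples further down (confluent-kafka / gold), and the parameter keys mirror those examples:

```yaml
# Sketch of service-instance.yaml (assumed class/plan names)
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-topic
  namespace: catalog
spec:
  clusterServiceClassExternalName: confluent-kafka
  clusterServicePlanExternalName: gold
  parameters:
    topic_name: gold-topic
---
# Sketch of service-binding.yaml; binding produces the named Secret,
# which the client application can mount or project as environment variables.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-topic-binding
  namespace: catalog
spec:
  instanceRef:
    name: my-topic
  secretName: my-topic-credentials
  parameters:
    consumer_group: consumer_group_1
```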

Running on Cloud Foundry

  1. Adjust manifest-pcf-dev.yaml to your needs and copy it to manifest.yml.
  2. Push to Cloud Foundry: cf push -f manifest.yml
  3. Register the service broker with Cloud Foundry: cf create-service-broker kafka-broker <user> <password> http://kafka-service-broker.dev.cfdev.sh
  4. Enable service access: cf enable-service-access confluent-kafka
  5. Create one or more Confluent Cloud service accounts and associated API keys via the ccloud CLI. Post these service accounts to the service broker so that it can supply them to client applications for accessing the topics. See the script post-accounts.sh for details.
  6. Create a topic: cf create-service confluent-kafka gold my-topic -c '{ "topic_name" : "gold-topic" }'
  7. Bind the topic to an application: cf bind-service kafka-service-broker my-topic -c '{ "consumer_group" : "consumer_group_1" }'
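Once bound, a Cloud Foundry application receives the binding credentials through the standard VCAP_SERVICES environment variable. The sketch below shows how a client might read them; the credential key names here are illustrative assumptions, not this broker's documented binding format:

```python
import json
import os

# Hypothetical VCAP_SERVICES payload for illustration only: the actual
# credential keys depend on what this broker returns in its binding response.
EXAMPLE_VCAP = json.dumps({
    "confluent-kafka": [{
        "name": "my-topic",
        "credentials": {
            "topic_name": "gold-topic",                 # assumed key
            "bootstrap_servers": "<cluster-endpoint>",  # placeholder
            "api_key": "<service-account-key>",         # placeholder
            "api_secret": "<service-account-secret>",   # placeholder
        },
    }]
})

def kafka_credentials(vcap_json, service="confluent-kafka"):
    """Return the credentials of the first binding for the given service label."""
    return json.loads(vcap_json)[service][0]["credentials"]

# In Cloud Foundry the payload arrives via the VCAP_SERVICES environment variable.
creds = kafka_credentials(os.environ.get("VCAP_SERVICES", EXAMPLE_VCAP))
print(creds["topic_name"])
```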