Eventing Auth Manager

The eventing-auth-manager is a central component deployed on the Kyma control plane. It manages applications in SAP Cloud Identity Services - Identity Authentication (IAS), creating and deleting them when a managed Kyma Runtime is created or deleted.

Description

The eventing-auth-manager manages the credentials of the IAS applications used by the webhooks of the eventing component.
When a new managed Kyma Runtime is provisioned, the component creates a new OIDC application in IAS for this runtime and stores the credentials in a secret on the new runtime.

Diagram: eventing-auth-manager overview

A Kyma CR is created for each runtime. The Eventing Auth Manager watches the creation and deletion of Kyma CRs. When a Kyma CR is created, the Eventing Auth Manager creates an EventingAuth CR. Reconciling the EventingAuth CR creates an application in IAS using the Application Directory REST API and creates the secret with the credentials on the managed runtime. When the Kyma CR is deleted, the controller deletes the EventingAuth CR. On the deletion of the EventingAuth CR, the Eventing Auth Manager deletes the application in IAS and the secret on the runtime.

Diagram: controller flow

EventingAuth CR

For details, see the specification file.

| Parameter | Description |
|-----------|-------------|
| status.conditions | Conditions associated with the EventingAuthStatus. There are conditions for the creation of the IAS application and the creation of the secret on the managed runtime. |
| status.iasApplication | Information about the created IAS application. |
| status.iasApplication.name | Name of the application in IAS. |
| status.iasApplication.uuid | ID of the application in IAS. |
| status.secret | Information about the created Kubernetes secret. |
| status.secret.clusterId | Runtime ID of the cluster on which the secret is created. |
| status.secret.namespacedName | NamespacedName of the secret on the managed runtime. |
| status.state | Current state of the custom resource. The value is one of "Ready" and "NotReady". |
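
As a rough illustration, the status fields above could map onto Go API types along the following lines. This is a hedged sketch based only on the table; the actual type and field names in the repository may differ.

package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// EventingAuthStatus mirrors the status fields from the table above.
// The type and field names are assumptions, not the repository source.
type EventingAuthStatus struct {
    // State is "Ready" or "NotReady".
    State string `json:"state,omitempty"`
    // Conditions track the creation of the IAS application and of the
    // secret on the managed runtime.
    Conditions []metav1.Condition `json:"conditions,omitempty"`
    // Application contains information about the created IAS application.
    Application *IASApplication `json:"iasApplication,omitempty"`
    // AuthSecret contains information about the created Kubernetes secret.
    AuthSecret *AuthSecret `json:"secret,omitempty"`
}

// IASApplication identifies the application created in IAS.
type IASApplication struct {
    Name string `json:"name"`
    UUID string `json:"uuid"`
}

// AuthSecret locates the secret created on the managed runtime.
type AuthSecret struct {
    ClusterID      string `json:"clusterId"`
    NamespacedName string `json:"namespacedName"`
}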

eventing-webhook-auth secret

The secret created on the managed runtime looks like the following:

apiVersion: v1
kind: Secret
metadata:
  name: eventing-webhook-auth
  namespace: kyma-system
type: Opaque
data:
  client_id: <client_id>
  client_secret: <client_secret>
  token_url: "https://<tenant>.accounts.ondemand.com/oauth2/token"
  certs_url: "https://<tenant>.accounts.ondemand.com/oauth2/certs"
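
The eventing webhooks use these values for the OAuth2 client-credentials flow against IAS. Note that in an actual Secret of type Opaque, the values under data are base64-encoded. As a hedged illustration (not code from this repository), a consumer could use the decoded values like this:

package main

import (
    "context"
    "fmt"
    "os"

    "golang.org/x/oauth2/clientcredentials"
)

func main() {
    // Values as they would appear after reading and base64-decoding the
    // eventing-webhook-auth secret; environment variables are used here
    // only for illustration.
    cfg := clientcredentials.Config{
        ClientID:     os.Getenv("CLIENT_ID"),
        ClientSecret: os.Getenv("CLIENT_SECRET"),
        // e.g. https://<tenant>.accounts.ondemand.com/oauth2/token
        TokenURL: os.Getenv("TOKEN_URL"),
    }

    // Token runs the client-credentials flow against the IAS token endpoint.
    token, err := cfg.Token(context.Background())
    if err != nil {
        fmt.Fprintln(os.Stderr, "token request failed:", err)
        os.Exit(1)
    }
    fmt.Println("access token expires at:", token.Expiry)
}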

Name reference between resources

The Kyma CR, whose creation triggers the creation of the EventingAuth CR, uses the unique runtime ID of the managed Kyma runtime as its name. The same name is used for the EventingAuth CR and the IAS application, so both can be assigned to a specific managed runtime.

Resource Naming Constraints

The controller makes assumptions about the names used in the control-plane cluster to read the correct resources:

- The IAS credentials are stored in a secret named eventing-auth-ias-creds in the kcp-system namespace.
- The kubeconfig of a managed runtime is stored in a secret named kubeconfig-<kyma-cr-name> in the kcp-system namespace, where <kyma-cr-name> is the name of the Kyma CR, that is, the runtime ID.

These names correspond to the secrets created in Preparing the clusters for local development below.

Design decisions

Handling of rate limiting when calling the IAS API

We did not implement any rate-limit handling because the Rate Limiting documentation of IAS states the following:

To ensure safe and stable environment, all requests have a limit of 50 concurrent requests per second. The requests are associated with the originating IP address, and not with the user making the requests.

Currently, we do not expect to exceed this limit because a single reconciliation performs at most 5 sequential requests.
The documentation also mentions a specific rate limit for SCIM endpoints, but we do not use those endpoints.

Caching of well-known token endpoint

We read the well-known configuration of the IAS tenant that is used to create the applications to obtain the token endpoint. This token endpoint is then stored, together with the client ID and the client secret, in the secret on the managed runtime.
The assumption is that the token endpoint of the IAS tenant does not change without notice of a breaking change. To reduce the number of requests when creating an application client secret, and thus increase the stability of the reconciliation, the token endpoint is cached on first retrieval. The cached token endpoint is not invalidated during operator runtime, but it is updated when the IAS credentials or the tenant URL change.
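
A minimal sketch of this behaviour, with a hypothetical fetchWellKnownConfig helper standing in for the actual well-known configuration call (names are illustrative, not the repository's API):

package ias

import (
    "encoding/json"
    "fmt"
    "net/http"
    "sync"
)

// tokenEndpointCache caches the token endpoint read from the well-known
// configuration of the IAS tenant. The cache is never invalidated during
// operator runtime; it is refreshed only when the tenant URL changes.
type tokenEndpointCache struct {
    mu        sync.Mutex
    tenantURL string
    endpoint  string
}

// TokenEndpoint returns the cached token endpoint and fetches the
// well-known configuration only on the first call or after the tenant
// URL has changed.
func (c *tokenEndpointCache) TokenEndpoint(tenantURL string) (string, error) {
    c.mu.Lock()
    defer c.mu.Unlock()
    if c.endpoint != "" && c.tenantURL == tenantURL {
        return c.endpoint, nil
    }
    endpoint, err := fetchWellKnownConfig(tenantURL)
    if err != nil {
        return "", err
    }
    c.tenantURL = tenantURL
    c.endpoint = endpoint
    return endpoint, nil
}

// fetchWellKnownConfig reads the OIDC well-known configuration of the
// tenant and extracts the token endpoint.
func fetchWellKnownConfig(tenantURL string) (string, error) {
    resp, err := http.Get(tenantURL + "/.well-known/openid-configuration")
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    var cfg struct {
        TokenEndpoint string `json:"token_endpoint"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&cfg); err != nil {
        return "", err
    }
    if cfg.TokenEndpoint == "" {
        return "", fmt.Errorf("well-known configuration contains no token_endpoint")
    }
    return cfg.TokenEndpoint, nil
}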

Referencing IAS applications by name

The IAS application is created with a name that matches the name of the EventingAuth CR, that is, the unique runtime ID of the cluster for which the IAS application is created. Since we do not want to store the IAS application ID in the secret on the managed runtime, we can read the IAS application only by its name.
During the creation of the application, existing applications with the same name are read. If exactly one application with the same name exists, it is deleted, as we assume this is due to a failed reconciliation. If more than one application with the same name exists, the reconciliation fails. The same behaviour applies when reconciling the deletion of the EventingAuth CR.

It was decided not to delete any of the existing applications in this case, as it is an unexpected condition that may have been caused by manual actions, and we may want to keep the applications to find the cause of the issue.
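
The lookup rules can be summarized in a short sketch. The Application type and the function names here are illustrative stand-ins for the client generated from the Application Directory OpenAPI specification:

package ias

import "fmt"

// Application is a minimal stand-in for an IAS application; the real
// types come from the client generated from the OpenAPI specification.
type Application struct {
    ID   string
    Name string
}

// resolveExistingApplications applies the rules described above to the
// result of a (hypothetical) list-by-name call: zero matches mean the
// application can be created, exactly one leftover match is deleted, and
// more than one match fails the reconciliation.
func resolveExistingApplications(name string, apps []Application, deleteApp func(id string) error) error {
    switch len(apps) {
    case 0:
        return nil
    case 1:
        // Assumed to be a leftover from a failed reconciliation.
        return deleteApp(apps[0].ID)
    default:
        // Unexpected, possibly caused by manual actions; keep the
        // applications so the cause can be analyzed.
        return fmt.Errorf("found %d applications named %q, expected at most one", len(apps), name)
    }
}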

Handling of failed IAS application and secret creation

If the creation of the IAS application fails, the reconciliation is retried. If an application has already been created, it is deleted before creation is attempted again. To avoid having multiple applications with the same name, the application is recreated only if the deletion succeeds. The application creation process consists of several steps that can fail: first the application is created, then the client secret is created, and finally the client ID of the client secret is read.
It was decided to always delete the application if any of these steps fails, as this makes the whole process more understandable and easier to maintain. Reusing the existing application would help only in a narrow case anyway: it is possible only if the reconciliation failed before the client secret was successfully created, because there is no way to retrieve the client secret in a later reconciliation.

Additionally, if the creation of the secret on the managed runtime fails, we retrieve the created IAS application from memory instead of recreating it in IAS.
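
Sketched in Go, with hypothetical functions standing in for the generated IAS client, the creation flow with cleanup could look like this:

package ias

import "fmt"

// Hypothetical operations standing in for the generated Application
// Directory API client; in the real controller these are HTTP calls.
var (
    createApplication  func(name string) (appID string, err error)
    createClientSecret func(appID string) (secret string, err error)
    readClientID       func(appID string) (clientID string, err error)
    deleteApplication  func(appID string) error
)

// provisionApplication creates the application, then its client secret,
// and finally reads the client ID. If any step after the application
// creation fails, the application is deleted so that the next
// reconciliation starts from a clean state; it is recreated only after
// the deletion has succeeded.
func provisionApplication(name string) (clientID, clientSecret string, err error) {
    appID, err := createApplication(name)
    if err != nil {
        return "", "", err
    }
    // cleanup deletes the half-created application and reports both the
    // step error and a possible deletion error.
    cleanup := func(stepErr error) (string, string, error) {
        if delErr := deleteApplication(appID); delErr != nil {
            return "", "", fmt.Errorf("%w (cleanup failed: %v)", stepErr, delErr)
        }
        return "", "", stepErr
    }
    clientSecret, err = createClientSecret(appID)
    if err != nil {
        return cleanup(err)
    }
    clientID, err = readClientID(appID)
    if err != nil {
        return cleanup(err)
    }
    return clientID, clientSecret, nil
}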

Future Improvements

Generating the SAP Cloud Identity Services API client

The OpenAPI specification is available in the API Business Hub. The specification used to generate the client is stored in internal/ias/internal/api/SCI_Application_Directory.yaml.

To generate the client and client mocks from the specification, run the following command:

NOTE: To generate the mocks you need to have mockery installed.

make gen-ias-client

Getting Started

You’ll need a Kubernetes cluster to run against. You can use KIND to get a local cluster for testing, or run against a remote cluster. Note: Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster kubectl cluster-info shows).

Preparing the clusters for local development

To run the controller locally, you need to have two clusters available. One cluster is used to run the controller, and the other cluster is used as a target for the created secret.

Prepare the cluster that is used to run the controller:

  1. Create the namespace to deploy the manager and the IAS credentials secret:

     kubectl create ns kcp-system

  2. Create the secret for the IAS credentials specified by TEST_EVENTING_AUTH_IAS_USER, TEST_EVENTING_AUTH_IAS_PASSWORD, and TEST_EVENTING_AUTH_IAS_URL:

     kubectl create secret generic eventing-auth-ias-creds -n kcp-system --from-literal=username=$TEST_EVENTING_AUTH_IAS_USER --from-literal=password=$TEST_EVENTING_AUTH_IAS_PASSWORD --from-literal=url=$TEST_EVENTING_AUTH_IAS_URL

  3. Create the secret containing the kubeconfig of the cluster on which the "eventing-webhook-auth" secret is created, specifying PATH_TO_TARGET_CLUSTER_KUBECONFIG and KYMA_CR_NAME:

     kubectl create secret generic kubeconfig-$KYMA_CR_NAME -n kcp-system --from-file=config=$PATH_TO_TARGET_CLUSTER_KUBECONFIG

Prepare the target cluster

Create the namespace in which the "eventing-webhook-auth" secret is created on the target cluster:

kubectl create ns kyma-system

Running on the cluster

  1. Install the Kyma and EventingAuth CRDs:

     make install

  2. Update the name of the custom resource in config/samples/operator_v1beta2_kyma.yaml to match the name used in the kubeconfig secret created in Preparing the clusters for local development. The Kyma CR name is also equal to the name of the target K8s cluster.

  3. Install instances of the custom resources:

     kubectl apply -f config/samples/

  4. Build and push your image to the location specified by IMG:

     make docker-build docker-push IMG=<some-registry>/eventing-auth-manager:tag

  5. Deploy the controller to the cluster with the image specified by IMG:

     make deploy IMG=<some-registry>/eventing-auth-manager:tag

Uninstall CRDs

To delete the CRDs from the cluster:

make uninstall

Undeploy controller

Undeploy the controller from the cluster:

make undeploy

Configuring integration tests

The tests can be executed by running the following command:

make test

IAS stub

By default, the integration tests use a stub for the IAS API. It's also possible to use the real IAS API by setting all of the following environment variables:

export TEST_EVENTING_AUTH_IAS_URL=https://my-tenant.accounts.ondemand.com
export TEST_EVENTING_AUTH_IAS_USER=user@sap.com
export TEST_EVENTING_AUTH_IAS_PASSWORD=password

Target Cluster

By default, the integration tests use a local control plane created by envtest. It's also possible to use a real target cluster by setting the following environment variable:

# The path to the kubeconfig of the cluster
export TEST_EVENTING_AUTH_TARGET_KUBECONFIG_PATH=/some/path/.kube/config

Contributing

See the Contributing Rules.

How it works

This project aims to follow the Kubernetes Operator pattern.

It uses Controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
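
For orientation, this is the general shape of a controller-runtime reconciler (a generic sketch of the pattern, not the repository's actual reconciler):

package controllers

import (
    "context"

    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// Reconciler sketches the operator pattern: observe a resource, compare
// the observed state with the desired state, and act to converge.
type Reconciler struct {
    client.Client
}

func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // 1. Fetch the resource named in the request (e.g., an EventingAuth CR).
    // 2. If it is being deleted, clean up the external state (IAS
    //    application, secret on the managed runtime) and remove the finalizer.
    // 3. Otherwise, create or update the external state and the status.
    // Returning an error makes controller-runtime requeue the request.
    return ctrl.Result{}, nil
}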

Test It Out

  1. Install the CRDs into the cluster:

     make install

  2. Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):

     make run

NOTE: You can also run this in one step by running: make install run

Modifying the API definitions

If you are editing the API definitions, generate the manifests such as CRs or CRDs using:

make manifests

NOTE: Run make --help for more information on all potential make targets.

More information can be found in the Kubebuilder Documentation.

Code of Conduct

See the Code of Conduct document.

Licensing

See the license file.