This product is no longer actively maintained, but thank you to all those who have used it! We have archived the repo to provide clear guidance on current expectations.
Kubernetes is designed as a single-tenant platform, which makes it hard for cluster admins to host multiple tenants in a single Kubernetes cluster. However, sharing a cluster has many advantages, e.g. more efficient resource utilization, less admin/configuration effort, and easier sharing of cluster-internal resources among different tenants.
While there are hundreds of ways of setting up multi-tenant Kubernetes clusters and many Kubernetes distributions provide their own tenancy logic, there is no lightweight, pluggable and customizable solution that allows admins to easily add multi-tenancy capabilities to any standard Kubernetes cluster.
kiosk is designed to be:
The core idea of kiosk is to use Kubernetes namespaces as isolated workspaces where tenant applications can run isolated from each other. To minimize admin overhead, cluster admins are supposed to configure kiosk which then becomes a self-service system for provisioning Kubernetes namespaces for tenants.
The following diagram shows the main actors (Cluster Admins and Account Users) as well as the most relevant Kubernetes resources and their relationships.
Click on the following links to view the description for each of the actors and kiosk components:
When installing kiosk in a Kubernetes cluster, these components will be added to the cluster:
kiosk adds two groups of resources to extend the Standard API Groups of Kubernetes:
Custom Resources: config.kiosk.sh
Custom Resource Definitions (CRDs) for configuring kiosk. These resources are persisted in etcd just like any other Kubernetes resources and are managed by an operator which runs inside the cluster.
API Extension: tenancy.kiosk.sh
Virtual resources which are accessible via an API Server Extension and are not persisted in etcd. These resources are similar to views in a relational database. The benefit of providing these resources instead of only using CRDs is that access permissions can be calculated dynamically for every request. This not only allows users to list, edit and manage Spaces (which map 1-to-1 to Namespaces); it also allows kiosk to show a different set of Spaces to different Account Users depending on the Accounts they are associated with. In other words, this circumvents the current Kubernetes limitation that filtered lists of cluster-scoped resources cannot be shown based on access rights.
- kubectl: Follow this guide to install it.
- helm version 3: Follow this guide to install it.

kiosk supports Kubernetes version v1.14 and higher. Use kubectl version to determine the Server Version of your cluster. While this getting started guide should work with most Kubernetes clusters out-of-the-box, there are certain things to consider for the following types of clusters:
You need a kube-context with admin rights.
If all of the following commands return yes, you are most likely admin:
kubectl auth can-i "*" "*" --all-namespaces
kubectl auth can-i "*" namespace
kubectl auth can-i "*" clusterrole
kubectl auth can-i "*" crd
# Install kiosk with helm v3
kubectl create namespace kiosk
helm install kiosk --repo https://charts.devspace.sh/ kiosk --namespace kiosk --atomic
To verify the installation make sure the kiosk pod is running:
$ kubectl get pod -n kiosk
NAME READY STATUS RESTARTS AGE
kiosk-58887d6cf6-nm4qc 2/2 Running 0 1h
In the following steps, we will use Kubernetes user impersonation to allow you to quickly switch between cluster admin and simple account user roles. If you are a cluster admin and want to run a kubectl command as a different user, you can impersonate this user by adding the kubectl flags --as=[USER] and/or --as-group=[GROUP].
In this getting started guide, we assume two user roles:
- Cluster Admin: use your admin-context (kubectl commands without the --as flag)
- User john: use your admin-context to impersonate a user (kubectl commands with --as=john)

If you are using Digital Ocean Kubernetes (DOKS), follow this guide to simulate a user using a Service Account.
To allow a user to create and manage namespaces, they need a kiosk account. Run the following command to create such an account for our example user john:
# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account.yaml
# Alternative: ServiceAccount as Account User (see explanation for account-sa.yaml below)
# kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account-sa.yaml
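For orientation, the applied account.yaml defines an Account roughly along these lines. This is only a sketch based on kiosk's config.kiosk.sh/v1alpha1 API; refer to the linked file for the authoritative manifest:

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: johns-account
spec:
  subjects:                        # users/groups that own this Account
  - kind: User
    name: john
    apiGroup: rbac.authorization.k8s.io
```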
Learn more about User Management and Accounts in kiosk.
All Account Users are able to view their Account through their generated ClusterRole. Let's try this by impersonating john:
# View your own accounts as regular account user
kubectl get accounts --as=john
# View the details of one of your accounts as regular account user
kubectl get account johns-account -o yaml --as=john
Spaces are the virtual representation of namespaces. Each Space represents exactly one namespace. The reason why we use Spaces is that by introducing this virtual resource, we can allow users to only operate on a subset of namespaces they have access to and hide other namespaces they shouldn't see.
By default, Account Users cannot create Spaces themselves. They can only use the Spaces/Namespaces that belong to their Accounts. That means a cluster admin would need to create the Spaces for an Account and then the Account Users could work with these Spaces/Namespaces.
To allow all Account Users to create Spaces for their own Accounts, create the following RBAC ClusterRoleBinding:
# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/rbac-creator.yaml
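Conceptually, rbac-creator.yaml is a standard ClusterRoleBinding. The sketch below assumes it binds a kiosk-provided ClusterRole (assumed here to be named kiosk-edit) to all authenticated users; see the linked file for the exact role and subjects:

```yaml
# Sketch: allow all authenticated users to create Spaces for their Accounts
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kiosk-creator
subjects:
- kind: Group
  name: system:authenticated       # every authenticated cluster user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kiosk-edit                 # assumed name of the kiosk-provided role
  apiGroup: rbac.authorization.k8s.io
```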
After granting Account Users the right to create Spaces for their Accounts (see ClusterRoleBinding in 3.1.), all Account Users are able to create Spaces. Let's try this by impersonating john:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space.yaml --as=john
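The space.yaml manifest boils down to a minimal Space resource like the following sketch; the spec.account field ties the Space to the Account it is created for:

```yaml
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Space
metadata:
  name: johns-space
spec:
  account: johns-account    # the Account this Space belongs to
```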
Let's take a look at the Spaces of the Accounts that user john owns by impersonating this user:
# List all Spaces as john:
kubectl get spaces --as=john
# Get the details of one of john's Spaces:
kubectl get space johns-space -o yaml --as=john
Every Space is the virtual representation of a regular Kubernetes Namespace. That means we can use the associated Namespace of our Spaces just like any other Namespace.
Let's impersonate john again and create an nginx deployment inside johns-space:
kubectl apply -n johns-space --as=john -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/deployment.yaml
That's great, right? A user who previously had no access to the Kubernetes cluster can now create Namespaces on-demand and automatically gets restricted access to these Namespaces.
To allow Account Users to delete the Spaces/Namespaces that they create, you need to set the spec.space.clusterRole field in the Account to kiosk-space-admin.
When creating a Space, kiosk creates the according Namespace for the Space and then creates a RoleBinding within this Namespace which binds the standard Kubernetes ClusterRole admin to every Account User (i.e. all subjects listed in the Account). While this ClusterRole allows full access to this Namespace, it does not allow deleting the Space/Namespace (the verb delete is missing in the default admin ClusterRole).
As john can be a user of multiple Accounts, let's create a second Account which allows john to delete Spaces/Namespaces that belong to this Account:
# Run this as cluster admin:
# Create Account johns-account-deletable-spaces
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account-deletable-spaces.yaml
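Conceptually, account-deletable-spaces.yaml differs from the first Account only in the spec.space.clusterRole field (a sketch, not the exact file contents):

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: johns-account-deletable-spaces
spec:
  space:
    clusterRole: kiosk-space-admin   # unlike admin, includes the delete verb
  subjects:
  - kind: User
    name: john
    apiGroup: rbac.authorization.k8s.io
```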
If you are using ServiceAccounts instead of impersonation, adjust the subjects section of this Account similar to account-sa.yaml in 2.1.
Now, let's create a Space for this Account:
# Run this as john:
# Create Space johns-space-deletable
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space-deletable.yaml --as=john
If a Space belongs to an Account that allows Account Users to delete such Spaces, an Account User can simply delete the Space using kubectl:
kubectl get spaces --as=john
kubectl delete space johns-space-deletable --as=john
kubectl get spaces --as=john
Deleting a Space also deletes the underlying Namespace.
kiosk provides the spec.space.spaceTemplate option for Accounts which lets admins define defaults for new Spaces of an Account. The following example creates the Account johns-account-default-space-metadata which defines default labels and annotations for all Spaces created with this Account:
# Run this as cluster admin:
# Create Account johns-account-default-space-metadata
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account-default-space-metadata.yaml
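The relevant part of this Account is the spec.space.spaceTemplate section, sketched below with hypothetical label and annotation keys (the real defaults are in the linked file):

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: johns-account-default-space-metadata
spec:
  space:
    spaceTemplate:
      metadata:
        labels:
          some-label: "some-value"        # hypothetical default label
        annotations:
          some-annotation: "some-value"   # hypothetical default annotation
  subjects:
  - kind: User
    name: john
    apiGroup: rbac.authorization.k8s.io
```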
With kiosk, you have two options to limit Accounts:
By setting spec.space.limit in an Account, Cluster Admins can limit the number of Spaces that Account Users can create for a certain Account.

Let's run the following command to update the existing Account johns-account and specify spec.space.limit: 2:
# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account-space-limit.yaml
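The updated Account is the same as before plus the spec.space.limit field (sketch):

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: johns-account
spec:
  space:
    limit: 2      # at most 2 Spaces may exist for this Account
  subjects:
  - kind: User
    name: john
    apiGroup: rbac.authorization.k8s.io
```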
Now, let's try to create more than 2 Spaces (note that you may have already created a Space for this Account during earlier steps of this getting started guide):
# List existing spaces:
kubectl get spaces --as=john
# Create space-2 => should work if you had only one Space for this Account so far
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space-2.yaml --as=john
# Create space-3 => should result in an error
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space-3.yaml --as=john
AccountQuotas allow you to define limits for an Account which are aggregated across all Spaces of this Account.
Let's create an AccountQuota for johns-account which will limit the aggregated number of Pods across all Spaces to 2 and the aggregated maximum of limits.cpu across all Pods in all Spaces to 4 CPU cores (see Kubernetes resource limits):
# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/accountquota.yaml
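An AccountQuota references an Account and declares hard limits in the same format as a Kubernetes ResourceQuota. A sketch matching the values above (the metadata name is illustrative):

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: AccountQuota
metadata:
  name: johns-account-quota    # illustrative name
spec:
  account: johns-account
  quota:
    hard:
      pods: "2"           # max 2 Pods across all Spaces of the Account
      limits.cpu: "4"     # max 4 CPU cores of limits across all Pods
```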
Templates in kiosk are used to initialize Namespaces and share common resources across namespaces (e.g. secrets). When creating a Space, kiosk will use these Templates to populate the newly created Namespace for this Space. Templates support parameters such as ${NAMESPACE} or ${MY_PARAMETER} that can be specified within a TemplateInstance.

The easiest option to define a Template is by specifying an array of Kubernetes manifests which should be applied when the Template is being instantiated.
The following command will create a Template called space-restrictions which defines two manifests: a PodSecurityPolicy which makes sure that users of this Space/Namespace cannot run privileged containers, and a LimitRange which sets default CPU limits for containers in this Namespace:
# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/template-manifests.yaml
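A manifest-based Template embeds raw Kubernetes objects. Below is a trimmed sketch showing only the LimitRange part; the field layout is assumed from kiosk's config.kiosk.sh/v1alpha1 API, so check the linked file for the full example:

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: space-restrictions
resources:
  manifests:
  - apiVersion: v1
    kind: LimitRange
    metadata:
      name: space-limit-range
    spec:
      limits:
      - type: Container
        default:
          cpu: "1"       # default CPU limit applied to containers
```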
Instead of manifests, a Template can specify a Helm chart that will be installed (using helm template) when the Template is being instantiated. Let's create a Template called redis which installs the stable/redis Helm chart:
# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/template-helm.yaml
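A Helm-based Template references a chart instead of inline manifests. The sketch below uses assumed field names (helm, releaseName, chart, repository) and an assumed repository URL, so rely on the linked template-helm.yaml for the authoritative structure:

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: redis
resources:
  helm:                         # assumed field names, see template-helm.yaml
    releaseName: redis
    chart:
      repository:
        name: redis
        repoUrl: https://charts.helm.sh/stable   # assumed stable-charts URL
```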
By default, only admins can list Templates. To allow users to view templates, you need to set up RBAC accordingly. Run the following code to allow every cluster user to list and view all Templates:
# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/rbac-template-viewer.yaml
To view a list of available Templates, run the following command:
kubectl get templates --as=john
To instantiate a Template, users need to have permission to create TemplateInstances within their Namespaces. You can grant this permission by running this command:
# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/rbac-template-instance-admin.yaml
Note: Creating a TemplateInstance in a Space is only possible if a RoleBinding exists that binds the Role kiosk-template-admin to the user. Because kiosk-template-admin has the label rbac.kiosk.sh/aggregate-to-space-admin: "true" (see rbac-instance-admin.yaml below), it is also possible to create a RoleBinding for the Role kiosk-space-admin (which automatically includes kiosk-template-admin).
After creating the ClusterRole kiosk-template-admin as shown above, users can instantiate Templates inside their Namespaces by creating so-called TemplateInstances. The following example creates an instance of the Helm chart Template redis which has been created above:
kubectl apply --as=john -n space-2 -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/template-instance.yaml
Note: In the above example, we are using space-2 which belongs to the Account johns-account-deletable-spaces. This Account defines space.clusterRole: kiosk-space-admin, which automatically creates a RoleBinding for the ClusterRole kiosk-space-admin when creating a new Space for this Account.
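A TemplateInstance is a small namespaced object that mainly points at a Template. A sketch of the instance applied above (the metadata name is illustrative):

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: TemplateInstance
metadata:
  name: redis-instance      # illustrative name
  namespace: space-2
spec:
  template: redis           # the Template to instantiate
```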
Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the spec.space.templateInstances array within the Account configuration. All Templates listed in spec.space.templateInstances will always be instantiated within every Space/Namespace that is created for the respective Account.

Let's see this in action by updating the Account johns-account and referencing our space-restrictions Template from 5.1. in spec.space.templateInstances:
# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account-default-template.yaml
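Mandatory Templates are wired into the Account via the spec.space.templateInstances array. The exact nesting below is a sketch inferred from the field names in this guide, so verify it against the linked account-default-template.yaml:

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: johns-account
spec:
  space:
    templateInstances:      # instantiated in every new Space of this Account
    - spec:
        template: space-restrictions
  subjects:
  - kind: User
    name: john
    apiGroup: rbac.authorization.k8s.io
```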
Now, let's create a Space without specifying any templates and see how this Template will automatically be instantiated:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space-template-mandatory.yaml --as=john
Now, we can run the following command to see that the two resources (PodSecurityPolicy and LimitRange) defined in our Template space-restrictions have been created inside the Space/Namespace:
# Run this as cluster admin:
kubectl get podsecuritypolicy,limitrange -n johns-space-template-mandatory
Mandatory Templates are generally used to enforce security restrictions and isolate namespaces from each other while Optional Templates often provide a set of default applications that a user might want to choose from when creating a Space/Namespace (see example in 5.2).
To keep track of resources created from Templates, kiosk creates a so-called TemplateInstance for each Template that is being instantiated inside a Space/Namespace.
To view the TemplateInstances of the namespace johns-space-template-mandatory, run the following command:
# Run this as cluster admin:
kubectl get templateinstances -n johns-space-template-mandatory
TemplateInstances allow admins and users to see which Templates are being used within a Space/Namespace, and they make it possible to upgrade the resources created by a Template when there is a newer version of the Template (coming soon).
Generally, a TemplateInstance is created from a Template and will not be updated when the Template changes later on. To change this behavior, it is possible to set spec.sync: true in a TemplateInstance. Setting this option tells kiosk to keep the TemplateInstance in sync with the underlying Template using a three-way merge (similar to helm upgrade).
The following example creates an instance of the Helm chart Template redis which has been created above and defines that this TemplateInstance should be kept in sync with the underlying Template:
kubectl apply --as=john -n space-2 -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/template-instance-sync.yaml
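Compared to the plain instance, the synced variant only adds the spec.sync field (sketch, illustrative name):

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: TemplateInstance
metadata:
  name: redis-instance-sync   # illustrative name
  namespace: space-2
spec:
  template: redis
  sync: true    # keep this instance in sync with the Template (3-way merge)
```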
helm upgrade kiosk --repo https://charts.devspace.sh/ kiosk -n kiosk --atomic --reuse-values
Check the release notes for details on how to upgrade to a specific release.
Do not skip releases with release notes containing upgrade instructions!
helm delete kiosk -n kiosk
kiosk does not provide a built-in user management system.
To manage users in your cluster, you can either use vendor-neutral solutions such as dex or DevSpace Cloud or alternatively, if you are in a public cloud, you may be able to use provider-specific solutions such as AWS IAM for EKS or GCP IAM for GKE.
If you would like to use ServiceAccounts for a small and easy-to-set-up authentication and user management, you can use the following instructions to create new users / kube-configs.
Use bash to run the following commands.
USER_NAME="john"
kubectl -n kiosk create serviceaccount $USER_NAME
# If not already set, then:
USER_NAME="john"
KUBECONFIG_PATH="$HOME/.kube/config-kiosk"
kubectl config view --minify --raw >$KUBECONFIG_PATH
export KUBECONFIG=$KUBECONFIG_PATH
CURRENT_CONTEXT=$(kubectl config current-context)
kubectl config rename-context $CURRENT_CONTEXT kiosk-admin
CLUSTER_NAME=$(kubectl config view -o jsonpath="{.clusters[].name}")
ADMIN_USER=$(kubectl config view -o jsonpath="{.users[].name}")
SA_NAME=$(kubectl -n kiosk get serviceaccount $USER_NAME -o jsonpath="{.secrets[0].name}")
SA_TOKEN=$(kubectl -n kiosk get secret $SA_NAME -o jsonpath="{.data.token}" | base64 -d)
kubectl config set-credentials $USER_NAME --token=$SA_TOKEN
kubectl config set-context kiosk-user --cluster=$CLUSTER_NAME --user=$USER_NAME
kubectl config use-context kiosk-user
# Optional: delete admin context and user
kubectl config unset contexts.kiosk-admin
kubectl config unset users.$ADMIN_USER
export KUBECONFIG=""
# If not already set, then:
KUBECONFIG_PATH="$HOME/.kube/config-kiosk"
export KUBECONFIG=$KUBECONFIG_PATH
kubectl ...
export KUBECONFIG=""
kubectl ...
There are many ways to get involved:
For more detailed information, see our Contributing Guide.
This is a very new project, so we are actively looking for contributors and maintainers. Reach out if you are interested.
kiosk is an open-source project licensed under Apache-2.0 license. The project will be contributed to CNCF once it reaches the required level of popularity and maturity. The first version of kiosk was developed by DevSpace Technologies as core component for their DevSpace Cloud on-premise edition.