Closed · gcapizzi closed this issue 3 years ago
Using this document to capture details
We managed to parse most of the roles and their permissions from the cloud_controller_ng
docs sources using this script
Some details are outlined in the accompanying README.md
You need to be an admin, admin-read-only or space developer to access env vars for an app. We believe the plan is to use secrets to store the env vars, so these roles would need access to Kubernetes secrets to GET and LIST them. However, the secrets in a given workload namespace would not just be env vars: there would be other secrets, such as registry credentials, that should not be shared with a space developer. So a simple RBAC rule on secrets for these roles is not sufficient.
There is a role property, `resourceNames`, that restricts permissions to a list of named objects, so it is theoretically possible to give access only to app env vars. However, whenever an app env var secret is created or deleted, the role would have to be updated so that the name list matches the secrets currently associated with apps. This doesn't feel like a great thing to be doing: it could easily get out of sync, and the list could get large. It also does not appear possible to restrict the output of LIST using `resourceNames`; you either get all or none.
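As a concrete sketch, a namespaced `Role` using `resourceNames` might look like this (the namespace and secret names below are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-env-var-secrets
  namespace: my-space           # hypothetical space namespace
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["my-app-env"] # must be kept in sync as apps come and go
  verbs: ["get"]                # LIST output cannot be filtered by resourceNames
```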
OPA is not an option here, as it only acts on mutations (create, update, delete), given that it is implemented using validating or mutating webhooks.
Admin, Space Developer and Space Supporter (experimental) roles can set a current droplet on an app. In the cf-on-k8s world, this would mean setting the current app build image to one previously built by a Build object. The Space Supporter role does not, however, have permission to update an app, which entails modifying labels, annotations and/or lifecycle. This might be an oversight, or the Space Supporter role might not be required for cf-on-k8s, but if it is required as stated, then the current droplet property would have to be separated into another object if we restrict ourselves to the coarse built-in RBAC model.
Creating small association objects like this could become quite pervasive. It's not an idiomatic k8s thing to be doing, and could cause problems with performance and database consistency.
If we use custom hooks, or OPA, it is simple to restrict setting a particular property based on roles. So this might be an indication that OPA is required on top of basic RBAC.
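For illustration, such a hook would be registered with a `ValidatingWebhookConfiguration` that intercepts app updates; the webhook backend (OPA or a custom service, both hypothetical here, as are the CRD group and service names) would then reject disallowed field changes based on the requesting user's roles:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: app-field-policy
webhooks:
- name: app-field-policy.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: ["apps.cloudfoundry.org"]  # hypothetical CRD group
    apiVersions: ["v1alpha1"]
    operations: ["UPDATE"]
    resources: ["apps"]
  clientConfig:
    service:
      name: policy-webhook                # hypothetical policy service
      namespace: cf-system
      path: /validate-app
```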
List Apps retrieves all apps the user has access to. If the user has a global role like admin or admin-read-only, this would be apps from all orgs and spaces. Or if the user only has org or space roles, the list would be restricted by orgs and spaces.
With our assumption of org <-> cluster and space <-> namespace, this is achieved with RBAC rules on listing apps either cluster or namespace scoped.
For a kubectl user, this works well. They can either list apps globally, or in a particular namespace, or get a permission denied error. For the shim, it's more awkward. If the user only has namespaced permissions to list apps, a cluster-scoped list will fail, so the code must iterate through namespaces, performing the list in each one. This might be inefficient, and it also requires working out which namespaces the user can list apps in, to avoid trying all namespaces. That would imply access to role bindings, and knowledge of which role gives access to list apps. The latter is particularly uncomfortable, as it ties the shim code to the name of a role.
So this illustrates a downside of relying on k8s authorisation inside the shim code, and running API calls as the calling user.
Note that although the CLI restricts listing apps to the current org and space, the API lists apps across orgs and spaces, and the shim is acting as the API.
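One alternative to reading role bindings would be for the shim to ask the API server directly, as the calling user, whether listing is allowed in each candidate namespace, using a `SelfSubjectAccessReview` (the resource group and namespace below are illustrative):

```yaml
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    group: apps.cloudfoundry.org  # hypothetical CRD group
    resource: apps
    verb: list
    namespace: space-1            # repeated for each candidate namespace
```

This avoids tying the shim to the name of a role, though it still requires one check per candidate namespace.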
This seems to be an extensible way of enabling features on an app, but there are only two such features: ssh access and revisions. I would suggest making these two boolean fields on App instead.
In this case, the Space Supporter role cannot update ssh access; that would need to be an OPA rule. If Space Supporter is not supported, plain RBAC is fine.
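For illustration, the suggestion would make the two features plain fields on a hypothetical App resource (the group, version and field names here are made up):

```yaml
apiVersion: apps.cloudfoundry.org/v1alpha1
kind: App
metadata:
  name: my-app
  namespace: my-space
spec:
  enableSsh: true         # replaces the "ssh" feature entry
  enableRevisions: false  # replaces the "revisions" feature entry
```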
We are working on what basic RBAC would look like for the role, focussing on the CRDs present in the parallel exploration. See here
We're stopping after writing example RBAC roles for app, package, build, droplet, process, and task.
Overall, RBAC is a good fit, but there are several important issues:
- The `resourceNames` property on a role could allow access to a list of secret names. The difficulty is keeping this list in sync as apps and env var secrets are created and deleted.
- The `command` on a Task cannot be read by anyone apart from the Admin and Space Developer roles. Processes and Droplets also have certain 'redacted' fields. There is no way to do this with the k8s API without separating the fields which might be redacted into one or more distinct resources. As mentioned above, we think this would be a poor k8s design, making kubectl usage clunky (e.g. a Space Developer must create a Task object, then wait for a TaskCommand object to appear, and then update the command property on it, which is not possible in a declarative flow).

To summarise: see the comments in the rbac.yml for roles and resources with problems, in particular around the `resourceNames` role property.
It feels like with CF objects represented in k8s CRs we cannot hide parts of objects (such as task commands) from certain users. Can we drop this requirement for cf-on-k8s?
Aggregated cluster roles provide a way of dynamically aggregating cluster roles by selectors. This provides a nicer way to deal with selective access to secrets, for example secrets associated with app env vars.
For example, we could have an `env-var-secret-access` cluster role, which we could use to group individual cluster roles, one per env var secret. A space developer role binding in the appropriate namespace would then bind the user to the aggregating cluster role. This gives the desired effect of allowing access only to the env var secrets in that particular namespace.
E.g. aggregating cluster role:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: env-var-secret-access
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.example.com/aggregate-to-env-var-secret-access: "true"
rules: [] # The control plane automatically fills in the rules
```
Individual secret cluster role:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: access-secret-1
  labels:
    rbac.example.com/aggregate-to-env-var-secret-access: "true"
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["secret1"]
  verbs: ["get", "list", "watch"]
```
Space Developer user role binding (namespaced):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-space-dev
  namespace: test
subjects:
- kind: User
  name: oidc:alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: env-var-secret-access
```
Security Groups can be globally enabled or space-scoped. Globally enabled security groups can be seen by any space user. Space-scoped security groups can be seen by users with permissions on the space, including global roles.
So globally enabled security groups must be cluster-wide resources, while space-scoped security groups probably have to be namespaced. This means we need two separate CRDs for global and space-scoped security groups.
Similarly, it seems Service Brokers and their Service Offerings and Plans can be global or space-scoped, so they would need the same treatment.
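As a sketch, the global variant would be a cluster-scoped CRD, and the space-scoped variant would be the same definition with `scope: Namespaced` (the group and names here are hypothetical):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: globalsecuritygroups.networking.cloudfoundry.org
spec:
  group: networking.cloudfoundry.org
  scope: Cluster  # Namespaced for the space-scoped variant
  names:
    plural: globalsecuritygroups
    singular: globalsecuritygroup
    kind: GlobalSecurityGroup
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```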
Example rbac.yml
Background
Currently, CF on VMs relies on a set of roles for authorization, each role having specific permissions. Role permissions are hardcoded and information about which user has which role is stored in the Cloud Controller database.
The Kubernetes way of handling authorization is quite different: the most common method is Role-Based Access Control (RBAC). Permissions are expressed as the ability to perform standard operations (verbs, like `get`, `watch` or `list`) on resources. These permissions are stored in `Role`s or `ClusterRole`s and bound to users via `RoleBinding`s or `ClusterRoleBinding`s.

Questions
Assuming we map Org roles to `ClusterRole`s and Space roles to `Role`s, then: