kunmingg opened this issue 5 years ago
@kunmingg: The label(s) kind/feature
cannot be applied, because the repository doesn't have them.
Issue Label Bot is not confident enough to auto-label this issue. See dashboard for more details.
/cc @nrchakradhar
@kunmingg could you elaborate on what the goal of this issue is and what work needs to be completed?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
/remove-lifecycle stale
/lifecycle frozen
I was looking into this doc: https://docs.google.com/document/d/1C3J30zHDh_qxqbb-HAfmk6gs-s4QcF26nAQMo9AKb_4/edit. I'm not sure whether the groups defined in that document are actually followed in the 1.0.1 release. What I see working well in organizations is having a top-level group with three subgroups, as defined in the document, for each project team operating on an ML platform. These groups are defined at the Active Directory level, each project group gets its own namespace (profile), and all users who belong to the top-level group or a subgroup get privileges accordingly, as defined in the document. Passing the group information through JWT tokens can be controlled at the IAM policy level. Project-group namespace (profile) creation (automatic or manual) needs a little more thought.
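To make the per-project-group namespace idea concrete, a Kubeflow Profile along these lines could back each project group's namespace (just a sketch; the namespace name and owner email are hypothetical, and today's Profile CRD takes a single User owner, so group membership itself would still have to be mapped onto RoleBindings separately):

```yaml
apiVersion: kubeflow.org/v1
kind: Profile
metadata:
  # hypothetical namespace for one project group
  name: team-fraud-detection
spec:
  owner:
    kind: User
    # hypothetical admin of the project group
    name: team-admin@example.com
```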
I would also propose providing an option to enable or disable automatic profile creation for individual users who log into the Kubeflow central dashboard (1.0.1) without belonging to any group. In an enterprise setting, automatic individual profiles may not work if everyone who can SSO into the Kubeflow central dashboard gets a new namespace through profiles.
It's better to discuss these in a meeting setting than one-sided here.
Thanks @lalithvaka for your input on this issue.
We have to look for a solution to consume the 'group' claim in the OIDC spec for auth. This will require a few changes in applications defined in Kubeflow, which include:
Apart from these, we might have to adapt kubeflow-roles to include specific groups for enabling or disabling permissions.
At a minimum, we need group support for both Kubernetes RBAC and Istio RBAC.
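On the Kubernetes RBAC side, a rough sketch of what this could look like (the group, binding, and namespace names here are hypothetical) is binding a group asserted by the identity provider to the existing kubeflow-edit ClusterRole in a profile namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ml-platform-team-edit        # hypothetical binding name
  namespace: team-fraud-detection    # hypothetical profile namespace
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: ml-platform-team             # group name as asserted by the identity provider
roleRef:
  kind: ClusterRole
  name: kubeflow-edit
  apiGroup: rbac.authorization.k8s.io
```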
On GCP:
Issue-Label Bot is automatically applying the labels:
Label | Probability |
---|---|
area/centraldashboard | 0.95 |
I am also waiting for this "enable multi-tenancy working with groups" implementation. Is that expected in the next release of Kubeflow?
Any updates on this?
I'd like to have groups similar to what @lalithvaka described, where we have a couple of high-level groups and users assigned to one of them. This group information already exists in our OIDC provider and is managed there. Based on the latest Kubeflow, it looks like this would have to be duplicated within Kubeflow.
I've spent a few hours trying to get a sense of what is missing to allow group-based authorization using the groups claim in the OIDC token. After crafting some hard-coded Profiles, RoleBindings and AuthorizationPolicies, this is what I could find so far:
- A `kubeflow-groups` header is added to the requests, so AuthorizationPolicies should be able to use that header for authorization.
- The UI seems to add items to the namespace drop-down based on the RoleBinding `user` annotation (e.g. the namespaceAdmin RoleBinding), but I couldn't find the code line.
@maxdebayser I believe it is here. It appears the code traverses all role bindings in all namespaces looking for one where the subject is bound to a list of hardcoded clusterroles.
If a user has a matching rolebinding reference in a given namespace, then they have access to that namespace.
@lalithvaka Is there a reason for the constraint of supporting only a single subject? We have a system for manually importing user lists from our LDAP-ish system into RoleBindings, and a simple for loop here looks like it would cut down on the number of RoleBinding objects in Kubernetes we need to make this possible. It would also let us remove that annotation check, since multiple users could then be bound to the kubeflow-edit ClusterRole in the same RoleBinding.
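For illustration, a single RoleBinding with several User subjects bound to kubeflow-edit (a sketch; the binding name, namespace, and emails are made up) would look like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-members-edit            # hypothetical binding name
  namespace: team-fraud-detection    # hypothetical profile namespace
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: alice@example.com
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: bob@example.com
roleRef:
  kind: ClusterRole
  name: kubeflow-edit
  apiGroup: rbac.authorization.k8s.io
```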
Hmm, so basically whoever calls `GET /kfam/v1/bindings` would have to pass the content of the `kubeflow-groups` header in the query string, and then the code could look for `roleBinding.Annotations[GROUP]`.
The `kubeflow-user` header is read here and then propagated: https://github.com/kubeflow/kubeflow/blob/2fa0d3665234125aeb8cebe8fe44f0a5a50791c5/components/centraldashboard/app/attach_user_middleware.ts#L18
So it seems to me that Kubernetes RBAC already has a fairly functional and broadly adopted system for group-based permissions management, with Roles (i.e. group permission sets) and RoleBindings (group member lists). It's a tiny amount of code running in a CronJob to import these member lists from any given system (in our case Okta), and these systems are extremely varied. The big upside here is that we still only need a `kubeflow-user` header. For the more common identity providers, some such importers are already available.

OTOH, it's quite a bit of code, and in our case actually impossible because of the number of groups folks tend to be members of, to add all of the groups a person might be in into a single header entry; this despite only a few of those groups being relevant at all to a given Kubeflow cluster. We could solve this hackily by rigorously adhering to a naming convention for the groups pertinent to a given cluster, and then adding some Lua logic in an Istio filter to add only those groups to a `kubeflow-groups` header, but even this breaks down at some point because the number of groups our identity provider will attach to a bearer token is also capped.
Long story short, I think we should look at using native kubernetes RBAC as the source of truth for group permission sets and group member lists. Thoughts?
I think that makes sense. I was trying to avoid the IdP dependency, but using the groups claim might not scale to lots of groups, as you said. Changing the code to handle a new "group" RoleBinding annotation would be a pretty minimal change.
The CR for the importer would have to be added to each newly created namespace, which could probably be done in `sync.py`.
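To sketch the annotation idea (hypothetical; kfam does not read a group annotation today, and the annotation key, group name, and namespace below are made up for illustration):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ml-platform-team-edit       # hypothetical binding name
  namespace: team-fraud-detection   # hypothetical profile namespace
  annotations:
    role: edit
    group: ml-platform-team         # hypothetical "group" annotation kfam could match against the kubeflow-groups header
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: ml-platform-team
roleRef:
  kind: ClusterRole
  name: kubeflow-edit
  apiGroup: rbac.authorization.k8s.io
```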
But there is another hurdle which is the AuthorizationPolicy. In every new namespace an AuthorizationPolicy like the one below is generated:
```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  annotations:
    role: admin
    user: user@email.com
  name: user-edit-shared
  namespace: shared
spec:
  rules:
  - when:
    - key: request.headers[kubeflow-userid]
      values:
      - user@email.com
```
Multiple users could be added to the `values` field by the group importer, but we would have to make sure that field is not overwritten by some Kubeflow controller loop (if there is one).
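A rough sketch of what such a multi-user policy could look like (hypothetical policy name, namespace, and emails; an illustration, not an existing resource):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: group-edit-shared   # hypothetical importer-managed policy
  namespace: shared
spec:
  rules:
  - when:
    - key: request.headers[kubeflow-userid]
      values:
      - alice@example.com
      - bob@example.com
```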
There is an AuthorizationPolicy that is created by the kfam controller based on Profile CRD objects. But we can just use a different one so it doesn't get overwritten. That's actually exactly what our system is already doing. Near as I can tell, only istio cares about the auth policy and it merges them just fine.
@lalithvaka So, this is what I had in mind for namespace availability. The List method I linked to above would look like this:
```go
func (c *BindingClient) List(user string, namespaces []string, role string) (*BindingEntries, error) {
	entries := &BindingEntries{}
	for _, ns := range namespaces {
		roleBindings, err := c.roleBindingLister.RoleBindings(ns).List(labels.Everything())
		if err != nil {
			return nil, err
		}
		for _, roleBinding := range roleBindings {
			// Only consider RoleBindings that reference one of the known Kubeflow cluster roles.
			mappedName, validName := roleBindingNameMap[roleBinding.RoleRef.Name]
			if !validName {
				continue
			}
			if mappedName != role && roleBinding.RoleRef.Name != role { // either is considered a match
				continue
			}
			for _, sub := range roleBinding.Subjects {
				if sub.Kind != "User" || sub.APIGroup != "rbac.authorization.k8s.io" || sub.Name != user {
					continue
				}
				sub := sub // copy so the pointer below doesn't alias the shared loop variable (pre-Go 1.22 semantics)
				entries.Bindings = append(entries.Bindings, Binding{
					User:              &sub,
					ReferredNamespace: ns,
					RoleRef: &rbacv1.RoleRef{
						Kind: roleBinding.RoleRef.Kind,
						Name: roleBindingNameMap[roleBinding.RoleRef.Name],
					},
				})
			}
		}
	}
	return entries, nil
}
```
A little bit late to the party here, @cody-yancey thank you for raising this issue in the Notebook WG meeting!
Getting up to speed with this, so I'll post some more concrete next steps in the coming days. Some first thoughts I have on this:
Again, as a first step I would really want us to move away from KFAM and have a more K8s-native approach to granting access to a namespace.
@kimwnasptd about the second point of supporting groups in istio/AuthorizationPolicies:
We have a production Kubeflow environment that uses oauth2-proxy instead of oidc-authservice. oauth2-proxy integrates very well and easily with Dex and Istio. Together, this provides the ability to declare an AuthorizationPolicy similar to:
```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: audience
spec:
  action: ALLOW
  rules:
  - when:
    - key: request.auth.claims[iss]
      values: ["https://example.com/dex"]
    - key: request.auth.claims[groups]
      values:
      - groupname1
      - groupname2
```
I've spent some time working this out and I could provide some help in this area. It works very nicely in our instance.
It's also very nice on the user-header side, where we just changed the `kubeflow-userid` header to `x-auth-request-email` and got rid of the EnvoyFilters used to manage it. oauth2-proxy provides the header out of the box; you just have to allow it through Istio.
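As a rough sketch, the per-namespace user policy shown earlier could then match on the oauth2-proxy header instead (hypothetical name, namespace, and email):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: user-edit-shared
  namespace: shared
spec:
  rules:
  - when:
    - key: request.headers[x-auth-request-email]
      values:
      - user@example.com
```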
@kromanow94 Could you please share a reference document for this, if possible?
@ananth-sabhapathi please see:
Thanks much!
For those following up this issue, there are updates going on in https://github.com/kubeflow/manifests/issues/2910
/transfer dashboard
/kind feature
Exploring the necessary components to enable multi-tenancy working with groups.