I think this would be easy to implement with a mutating admission webhook. The new ProfileBinding kind would look something like:

    kind: ProfileBinding
    metadata:
      name: nginx-seccomp-binding
    spec:
      podSelector:
        app: nginx
      containerName: test-container # optional
      profile:
        type: SeccompProfile
        name: nginx-1.19.1
How I think it would work is: the webhook adds a SecurityContext.SeccompProfile to the pod spec or to the container definition within the pod, if the pod doesn't already have one. At the same time, the controller records a link to the pod in the binding, as well as a finalizer, in the same way that we currently do with profiles (#155), in order to prevent the binding from being deleted while it is in use.

Before diving into this, I question whether this feature is truly necessary and useful. The seccomp profile path is predictable based on its namespace, workload attribute and name. It's also visible in the Status of an existing profile. So I think it's pretty easy to create a new pod that uses an existing profile by looking up its path, and not too difficult to create a pod that uses a not-yet-existing profile by predicting what its path will be. What if we improved the ease of use by making the path more prominent, maybe by exposing it in the kubectl get seccompprofile -o wide output, and maybe trimming the /var/lib parts from it to make it more obvious what's supposed to go in SecurityContext.SeccompProfile.LocalhostProfile? Is there another reason to add a whole new type for this?
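For illustration, here is a rough sketch of what such a webhook mutation could end up setting on a matching pod. The Localhost seccomp type and the profile path follow the examples given later in this thread; all other names are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx  # matches the ProfileBinding's podSelector above
spec:
  containers:
  - name: test-container
    image: nginx:1.19.1
    securityContext:
      # Injected by the hypothetical binding webhook if no seccomp profile is set yet.
      seccompProfile:
        type: Localhost
        localhostProfile: operator/security-profiles-operator/default-profiles/nginx-1.19.1.json
```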
> Before diving into this, I question whether this feature is truly necessary and useful.

I agree, let's iterate on some thoughts about the overall solution approach (cc @hasheddan @rhafer @pjbgf).
> What if we improved the ease of use by making the path more prominent, maybe by exposing it in the kubectl get seccompprofile -o wide output, and maybe trimming the /var/lib parts from it to make it more obvious what's supposed to go in SecurityContext.SeccompProfile.LocalhostProfile?

Yes, I think this would be valuable. How would this work technically?

> Is there another reason to add a whole new type for this?
I guess let's focus on the main use case in making it easier and less error-prone to reference localhost profiles. It seems overly complicated to me to add the domain-specific localhost path, like operator/security-profiles-operator/default-profiles/nginx-1.19.1.json, to a workload. Having an easy copy-paste-like solution around it seems fine to me. Maybe we can provide a single-line kubectl patch command somehow? For example:

- User creates a new security profile
- User immediately gets feedback on how to patch their workloads based on a label selector

The patch might already restart the workload, which would be a win-win.
@saschagrunert commented on Dec 7, 2020, 1:02 AM PST:

>> What if we improved the ease of use by making the path more prominent, maybe by exposing it in the kubectl get seccompprofile -o wide output, and maybe trimming the /var/lib parts from it to make it more obvious what's supposed to go in SecurityContext.SeccompProfile.LocalhostProfile?
>
> Yes, I think this would be valuable. How would this work technically?
To expose the path in the kubectl -o wide format, it looks like we can just modify the CRD to add an additional printer column and set its priority to >1: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#priority

We could either choose to do that to the existing Path attribute and expose it as-is, or modify Path's meaning to only include the trimmed-down portion that a user would put in their pod spec, or create a new attribute (Location? Reference?) that has just that string.
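A minimal sketch of what that could look like on the CRD. The column name, the .status.path JSON path, and the priority value are assumptions based on the discussion above; the rest of the CRD is elided.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: seccompprofiles.security-profiles-operator.x-k8s.io  # illustrative
spec:
  # group, names, scope, and schema elided for brevity
  versions:
  - name: v1alpha1
    served: true
    storage: true
    additionalPrinterColumns:
    - name: LocalhostProfile
      type: string
      jsonPath: .status.path  # or a new, trimmed-down field such as .status.localhostProfile
      priority: 1             # non-zero priority: the column only shows up with -o wide
```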
> Having an easy copy-paste-like solution around it seems fine to me. Maybe we can provide a single-line kubectl patch command somehow? For example:
>
> - User creates a new security profile
> - User immediately gets feedback on how to patch their workloads based on a label selector
>
> The patch might already restart the workload, which would be a win-win.

Great idea :+1:
> We could either choose to do that to the existing Path attribute and expose it as-is, or modify Path's meaning to only include the trimmed-down portion that a user would put in their pod spec, or create a new attribute (Location? Reference?) that has just that string.

Sounds good :+1: We could reference the full YAML path to either the Pod's or the Container's LocalhostProfile. This way users directly know what we mean.
>> Having an easy copy-paste-like solution around it seems fine to me. Maybe we can provide a single-line kubectl patch command somehow? For example:
>>
>> - User creates a new security profile
>> - User immediately gets feedback on how to patch their workloads based on a label selector
>>
>> The patch might already restart the workload, which would be a win-win.
>
> Great idea

Do you think it makes sense to add that to the mentioned printer column? I'm not sure if that would be too verbose.
@saschagrunert commented on Dec 7, 2020, 11:14 AM PST:

>> We could either choose to do that to the existing Path attribute and expose it as-is, or modify Path's meaning to only include the trimmed-down portion that a user would put in their pod spec, or create a new attribute (Location? Reference?) that has just that string.
>
> Sounds good :+1: We could reference the full YAML path to either the Pod's or the Container's LocalhostProfile. This way users directly know what we mean.

Could you clarify what you mean by this: do you mean the name of the attribute should be this full YAML path, e.g. have the column name be "pod.spec.securityContext.seccompProfile.localhostProfile"?
>> Having an easy copy-paste-like solution around it seems fine to me. Maybe we can provide a single-line kubectl patch command somehow? For example:
>>
>> - User creates a new security profile
>> - User immediately gets feedback on how to patch their workloads based on a label selector
>>
>> The patch might already restart the workload, which would be a win-win.
>>
>> Great idea
>
> Do you think it makes sense to add that to the mentioned printer column? I'm not sure if that would be too verbose.
IMO that would be a bit too verbose. If I understand what you're envisioning, it would look like:

    $ kubectl get sp -o wide
    NAME            STATUS   AGE    KUBECTL_COMMAND
    profile-allow   Active   3d3h   kubectl --namespace default patch deployment test --patch '{"spec": {"template": {"spec": {"securityContext": {"seccompProfile": {"localhostProfile": "operator/namespace/workload/profile.json", "type": "Localhost" }}}}}}'

It's also impossible to make this generic to what the user wants to do, since the namespace, name, and resource type (deployment? replicaset? daemonset?) would depend on the user. An alternative could be to just include the --patch part of the command, like:

    $ kubectl get sp -o wide
    NAME            STATUS   AGE    PATCH_COMMAND
    profile-allow   Active   3d3h   --patch '{"spec": {"template": {"spec": {"securityContext": {"seccompProfile": {"localhostProfile": "operator/namespace/workload/profile.json", "type": "Localhost" }}}}}}'

I like that a little better, but it's still really verbose and doesn't account for whether the user wants to put it in the pod spec or the container spec. I think it would be sufficient to just put examples in the documentation.
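For documentation purposes, the container-level variant of the patch above could look something like the following. The deployment name, container name, and profile path are placeholders mirroring the pod-level example.

```shell
# Strategic merge patch: the containers list merges by container name, so this
# only touches "test-container" and leaves other containers untouched.
kubectl --namespace default patch deployment test --patch '
{
  "spec": {
    "template": {
      "spec": {
        "containers": [{
          "name": "test-container",
          "securityContext": {
            "seccompProfile": {
              "type": "Localhost",
              "localhostProfile": "operator/namespace/workload/profile.json"
            }
          }
        }]
      }
    }
  }
}'
```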
> Before diving into this, I question whether this feature is truly necessary and useful.

In my opinion, this is especially useful when auto-applying profiles to public workloads (e.g. nginx), which is where the operator can add a lot of value.
> I think this would be easy to implement with a mutating admission webhook. The new ProfileBinding kind […]

Brilliant idea. This could easily be used to keep in this GitHub repo not only the CRDs but also the ProfileBindings for main workloads, which I think would be quite useful for the community to distribute their security profiles.
> I guess let's focus on the main use case in making it easier and less error-prone to reference localhost profiles. It seems overly complicated to me to add the domain-specific localhost path, like operator/security-profiles-operator/default-profiles/nginx-1.19.1.json, to a workload. Having an easy copy-paste-like solution around it seems fine to me.

Yep, I like the idea of providing an easy way to "patch" workloads, and I also agree that the domain-specific path is not great. But wouldn't the ProfileBinding combined with the mutating admission webhook suffice here? As @cmurphy mentioned, I think that maybe just some examples in the documentation would be enough to plug the gap.
> Could you clarify what you mean by this: do you mean the name of the attribute should be this full YAML path, e.g. have the column name be "pod.spec.securityContext.seccompProfile.localhostProfile"?

Yes, this was the idea. But we could probably call it securityContext.seccompProfile.localhostProfile, since it can apply to all containers as well. Since the field explicitly scopes to seccomp profiles, we could also just call it securityContextLocalhostProfile or localhostProfile (with additional docs about the field)…
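If the column were named along those lines, the wide output might then look roughly like this (profile name and path are placeholders):

```console
$ kubectl get sp -o wide
NAME            STATUS   AGE    LOCALHOSTPROFILE
profile-allow   Active   3d3h   operator/default/custom-profiles/profile-allow.json
```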
> I like that a little better, but it's still really verbose and doesn't account for whether the user wants to put it in the pod spec or the container spec. I think it would be sufficient to just put examples in the documentation.

Yep, I agree, that seems way too verbose to me and docs should be enough.

Since @pjbgf provided some useful thoughts on the webhook, we may still consider going down that path?
> Yep, I like the idea of providing an easy way to "patch" workloads, and I also agree that the domain-specific path is not great. But wouldn't the ProfileBinding combined with the mutating admission webhook suffice here? As @cmurphy mentioned, I think that maybe just some examples in the documentation would be enough to plug the gap.

Hmm, I'm not sure I understand what you're suggesting @pjbgf - my suggestion about the patch examples in the documentation was meant to be instead of the new webhook and CRD, not in addition. IMO a new admission webhook (which is a potential attack surface) and a new CRD are a little heavyweight for this use case, whereas documented examples and better visibility of the profile path would be simple and cover most of the use case.
> Yes, this was the idea. But we could probably call it securityContext.seccompProfile.localhostProfile, since it can apply to all containers as well. Since the field explicitly scopes to seccomp profiles, we could also just call it securityContextLocalhostProfile or localhostProfile (with additional docs about the field)…

@saschagrunert makes sense :+1:
> Hmm, I'm not sure I understand what you're suggesting @pjbgf - my suggestion about the patch examples in the documentation was meant to be instead of the new webhook and CRD, not in addition. IMO a new admission webhook (which is a potential attack surface) and a new CRD are a little heavyweight for this use case, whereas documented examples and better visibility of the profile path would be simple and cover most of the use case.

@cmurphy to be completely honest, the first time I re-read my comments they made no sense to me either. :sweat_smile: I agree that creating a new CRD and webhook extends the attack surface. It could potentially be an optional component or sit behind a feature gate, so users have the choice of whether or not to use it. But as I mentioned in the meeting today, I do think it would add great value to be able to automatically set security profiles for public workloads (e.g. kube dashboard, kube-proxy, etc.).
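One possible shape for keeping the webhook opt-in, sketched under the assumption that it is delivered as a separate MutatingWebhookConfiguration that only fires for namespaces explicitly labeled for it. All names, labels, and the service reference are illustrative, not the operator's actual configuration.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: spo-binding-webhook  # hypothetical name
webhooks:
- name: binding.spo.x-k8s.io
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore        # don't block pod creation if the webhook is unavailable
  namespaceSelector:           # opt-in per namespace via a label
    matchLabels:
      spo.x-k8s.io/enable-binding: "true"
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations: ["CREATE"]
  clientConfig:
    service:
      namespace: security-profiles-operator
      name: webhook-service
      path: /mutate-v1-pod
```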
What would you like to be added:

The target is to find a way to make applying profiles easier, rather than having to specify the LocalhostPath in the security context.

Why is this needed:

Users may not know where the profile is actually saved on disk. We could implement a new resource like a ProfileBinding which uses a selector to mutate workloads and set the right path to the profile in its annotation/securityContext.

It is questionable how to handle deletions of those bindings. The annotations/fields for seccomp and AppArmor behave differently with respect to their mutability.
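For reference, a sketch of the two mechanisms the last sentence refers to, as they looked at the time of this issue: seccomp is a structured securityContext field (since Kubernetes 1.19), while AppArmor is still configured through a per-container annotation. Profile names and paths below are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
  annotations:
    # AppArmor: beta annotation, keyed by container name
    container.apparmor.security.beta.kubernetes.io/test-container: localhost/my-apparmor-profile
spec:
  containers:
  - name: test-container
    image: nginx:1.19.1
    securityContext:
      # seccomp: first-class securityContext field
      seccompProfile:
        type: Localhost
        localhostProfile: operator/my-namespace/default-profiles/my-seccomp-profile.json
```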