rgbriggs opened 6 years ago
Posted v3 kernel patchset upstream: https://www.redhat.com/archives/linux-audit/2018-June/msg00048.html https://lkml.org/lkml/2018/6/6/609
Posted v4 kernel patchset upstream: https://www.redhat.com/archives/linux-audit/2018-July/msg00178.html https://lkml.org/lkml/2018/7/31/855
I have a question regarding the container id assignment mechanism. Currently the patch in the v4 posted series loosely defines a container by virtue of the fact that a nonce is assigned to it from user space. This is nice for a few reasons, primarily because it excuses the kernel from having to have any definition of what a container is (ostensibly we consider a container to be a unique collection of namespaces, and possibly cgroups, but the kernel wants no knowledge of that). It seems this implementation of container assignment suffers from a few shortcomings however:
1) There appears to be no mechanism that prevents a container from modifying its own id (presuming CAP_SYS_ADMIN is not removed from its capability set, which I think doesn't occur for trusted containers)
2) There appears to be no mechanism for preventing a container from changing namespaces once an id is set for it, meaning that the correlation between the id and whatever you want to define in userspace as being a 'container' is lost
3) There is no current mechanism that prevents multiple unique 'containers' from sharing an id
All of these problems are of course fixable in the current implementation, but for the sake of argument I'd like to propose an alternate solution that may (I've not 100% thought it out yet) reduce the complexity of the code, make the semantics of controlling the container id from userspace clearer, and solve the above problems.
To start, I'd like to define (from a userspace perspective strictly) what we see as a container:
A container is defined by user space as a process that has a specific collection of (namespaces and cgroups) AND does NOT have the ability to enter new namespaces and cgroups
I would propose that we implement the following in the kernel to enforce that policy:
1) Create a new capability, CAP_AUDIT_SETNS, which gates the ability to successfully call the setns system call. This capability is inherited by its children. If CAP_AUDIT_SETNS is re-granted to a process, its contid (see 2, below) is reset to AUDIT_CID_UNSET.
2) Create a new field in the audit struct per task, contid (which exists in this patchset already). This field is assigned a nonce (likely using the ida_simple_get api) if the CAP_AUDIT_SETNS capability is dropped, which would also generate an audit log message (similar to the one generated in audit_set_contid); see the sketch after this list.
3) If a process calls fork/clone with any of the CLONE_NEW* flags set, the CAP_AUDIT_SETNS capability is restored to the child process, and its contid value is reset to AUDIT_CID_UNSET (allowing for a process in a container to start a new container, but not to change its own 'container nature')
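To make item 2 concrete, here is a rough kernel-side sketch of what I have in mind. This is not from any posted patchset: the function name audit_assign_contid_on_capdrop(), the audit_log_contid_set() helper, and the tsk->audit->contid field layout are all placeholders of mine; only DEFINE_IDA()/ida_simple_get() are the real kernel API I'm suggesting.

```c
/*
 * Hypothetical sketch only -- NOT in any posted patchset.  Illustrates
 * item 2 above: on the drop of the proposed CAP_AUDIT_SETNS capability,
 * allocate a unique nonce with ida_simple_get() and record it as the
 * task's contid.  audit_log_contid_set() is a made-up helper standing in
 * for the record that audit_set_contid() currently emits.
 */
#include <linux/idr.h>
#include <linux/sched.h>

static DEFINE_IDA(audit_contid_ida);

static int audit_assign_contid_on_capdrop(struct task_struct *tsk)
{
	int id = ida_simple_get(&audit_contid_ida, 1, 0, GFP_KERNEL);

	if (id < 0)
		return id;		/* -ENOMEM, etc. */

	tsk->audit->contid = (u64)id;	/* field layout assumed, not verified */
	audit_log_contid_set(tsk);	/* hypothetical: log like audit_set_contid() */
	return 0;
}
```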
I think this approach may have a few advantages:
a) It provides an established mechanism (the capabilities subsystem) to provide a gated point at which the above container definition becomes locked from the perspective of the initial process inside the container. While the parent may re-grant the privilege, it creates an environment whereby dropping the capability both locks the nature of what the container is (a unique set of namespaces and cgroups) and provides a unique id to the container for audit purposes
b) It removes the need to manage container ids from userspace, reducing the risk of errors in assigning duplicate identifiers, and reduces code size by removing the need to create a new proc file and validate the information passed through it.
c) It reduces complexity in the kernel code. The kernel can guarantee unique identifiers by using the dropping of the gating CAP_AUDIT_SETNS capability as the moment to generate the id.
This also (may) smooth upstream acceptance, because from a kernel standpoint, all we're doing is adding a capability to an established interface, and generating a nonce for the purposes of audit on the dropping of that capability. From user space it's nice because we can consider the dropping of the ability to enter a new namespace as the border between a process being outside of, and inside of, a container.
We may have already gone too far down the path of the existing implementation to consider a change of this magnitude, but I wanted to bring this up before we were locked into a design.
There appears to be no mechanism that prevents a container from modifying its own id (presuming CAP_SYS_ADMIN is not removed from its capability set, which I think doesn't occur for trusted containers)
I'm replying strictly from memory here, so I might have some of the minor details wrong, but managing the audit container ID should require CAP_AUDIT_ADMIN. The idea being that only container orchestrator processes would be granted this capability, not the individual containers themselves.
It gets slightly more confusing if you want to allow nested container orchestrators, but you're already dealing with other problems if you are going this route.
There appears to be no mechanism for preventing a container from changing namespaces once an id is set for it, meaning that the correlation between the id and whatever you want to define in userspace as being a 'container' is lost.
Once again, managing an audit container ID is gated by CAP_AUDIT_ADMIN so it is unlikely to be an issue. It is also worth noting that once the process spawns any children, or additional threads, you can't change the audit container ID.
There is no current mechanism that prevents multiple unique 'containers' from sharing an id.
Not in the kernel, that is correct. Like many things, this is something that is left to the container orchestrator.
One of the design constraints, if not the most important design constraint, was to avoid defining "container" in the context of the kernel with the audit container ID work. We defer all logic for setting, and managing the audit container ID to the userspace container orchestrator. In this first round of patches the kernel's only role here is to report the audit container ID as part of the audit event stream, and ensure the audit container ID is inherited properly for newly created threads/processes.
Later the kernel will add some intelligence for routing audit records based on the audit container ID, and allow multiple audit daemons to capture specific audit event streams, but even that will be carefully done so as to not define "container" in the kernel.
I'm replying strictly from memory here, so I might have some of the minor details wrong, but managing the audit container ID should require CAP_AUDIT_ADMIN. The idea being that only container orchestrator processes would be granted this capability, not the individual containers themselves.
Right, I think it's CAP_AUDIT_CONTROL, but no matter, and you are absolutely right: the container id is unwriteable by any process that doesn't have that capability. That said, I was less worried about a contained process changing its ID, and more worried about a contained process changing properties that an orchestrator might associate with it. i.e. a contained process may have its container id be fixed, but it could easily still call setns on itself, and enter the namespace of another process, breaking whatever association the orchestrator might have assumed would be established. My question was really an attempt to enforce that implied mapping by creating a capability control that could block entry to other processes' namespaces in such a way that the orchestrator could preserve its notion of what a container was, without imbuing the kernel with any knowledge of containers (and hopefully at the same time, creating a trigger mechanism that the kernel audit code could use to auto-generate said ids). If you think that's too much information in the kernel, that's fine, but I wanted to ask the question.
It gets slightly more confusing if you want to allow nested container orchestrators, but you're already dealing with other problems if you are going this route.
Yeah, I'm not super worried about that (though bifurcating capability control between setns and unshare might be useful here, at least for the purposes of this conversation)
Once again, managing an audit container ID is gated by CAP_AUDIT_ADMIN so it is unlikely to be an issue. It is also worth noting that once the process spawns any children, or additional threads, you can't change the audit container ID.
Yes, but it's not changing the audit container id that I'm asking about; it's changing the namespaces of a given set of processes with an immutable container id that I'm concerned with. Maybe it doesn't matter for the purposes of audit, but it seems like it should. As an example, Process A is spawned by an orchestrator, and assigned net namespace 1, and container id 10. If Process A then forks a child with CLONE_NEWNET set, creating process B, and it (process A) then calls setns(, ), we have a situation in which process A has entered a new net namespace, but kept its old container id, all without the orchestrator's knowledge. My question is, does the orchestrator care? I'm assuming here that the entire purpose of creating a container id is to have a simple handle to refer to a unique set of namespaces and cgroups that the orchestrator can track, and allowing this change breaks the implied mapping of that handle to those namespaces. If it doesn't matter, then please let me know, and I can drop this entirely; it just seems like it should.
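To make the scenario concrete, here is a small user space sketch of my own (not from the patchset) that reproduces it with fork()+unshare() rather than clone(CLONE_NEWNET); it needs CAP_SYS_ADMIN to succeed.

```c
/* A/B scenario sketch: B unshares into a new net namespace, then A joins
 * B's namespace via setns(), keeping whatever audit container id it
 * already had.  Needs CAP_SYS_ADMIN to succeed. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	char path[64];
	pid_t b = fork();

	if (b == 0) {				/* process B */
		if (unshare(CLONE_NEWNET))	/* B now has its own net namespace */
			perror("unshare");
		sleep(30);			/* keep the namespace alive */
		exit(0);
	}

	sleep(1);				/* crude: wait for B's unshare() */
	snprintf(path, sizeof(path), "/proc/%d/ns/net", b);
	int fd = open(path, O_RDONLY);
	if (fd < 0 || setns(fd, CLONE_NEWNET))	/* A enters B's net namespace */
		perror("setns");
	else
		printf("A joined B's net namespace; its audit container id is unchanged\n");
	return 0;
}
```

After the setns() call, A is in B's net namespace while its container id (10 in the example) is unchanged, which is exactly the association I'm worried about.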
On 2018-12-21 16:25, Neil Horman wrote: ...
It seems this implementation of container assignment suffers from a few shortcomings however: 1) There appears to be no mechanism that prevents a container from modifying its own id (presuming CAP_SYS_ADMIN is not removed from its capability set, which I think doesn't occur for trusted containers)
This was included in earlier versions, preventing a task from setting its own audit container ID (contid), but it was decided this was too restrictive, and it was desirable to allow a container orchestrator to set its own contid.
There were also other restrictions earlier that prevented a child's contid from being set if its parent's contid was different, and a flag that indicated such inheritance, but those were similarly removed as too restrictive, leaving that management up to the orchestrator.
2) There appears to be no mechanism for preventing a container from changing namespaces once an id is set for it, meaning that the correlation between the id and whatever you want to define in userspace as being a 'container' is lost
Since the container is an arbitrary collection of namespaces, cgroups and seccomp, and there is no universally agreed-upon definition, it was decided that the actual namespace membership wasn't in fact relevant. Any of that process's children will inherit its parent's contid, and any namespaces created would automatically become part of that container.
There is another issue open to track namespaces in audit. It was originally thought we wanted to track container activity by using a set of namespaces, but it became evident that this was complex, required too much network and disk bandwidth, and wasn't even reliable and complete. There is still value in tracking those namespaces, but it isn't going to solve the primary problem we are trying to solve. (see https://github.com/linux-audit/audit-kernel/issues/32)
3) There is no current mechanism that prevents multiple unique 'containers' from sharing an id
This problem was also solved in a previous bit of code, but it was decided this was an orchestrator management issue.
All of these problems are of course fixable in the current implementation, but for the sake of argument, I'd like to propose an alternate solution that may (I've not 100% thought it out yet) reduce the complexity of the code, make the semantics of the control of the container id from userspace more clear, and solve the above problems
To start, I'd like to define (from a userspace perspective strictly) what we see as a container:
A container is defined by user space as a process that has a specific collection of (namespaces and cgroups) AND does NOT have the ability to enter new namespaces and cgroups
This second clause we had discussed and decided was too restrictive. Of course we want to restrict a process from moving itself to another container's namespace set, but this can already be done using namespace management tools. We saw no reason to restrict it from creating new namespaces and using them, and their children would all inherit their contid.
I would propose that we implement the following in the kernel to enforce that policy: 1) Create a new capability, CAP_AUDIT_SETNS, which gates the ability to successfully call the setns system call. This capability is inherited by its children. If CAP_AUDIT_SETNS is re-granted to a process, its contid (see 2, below) is reset to AUDIT_CID_UNSET.
I had already suggested using a new capability to gate the ability to set the contid, but we had received an objection that creating a new capability was unnecessary since it could be covered with CAP_AUDIT_CONTROL. This suggestion would allow a process that was previously confined to a container to essentially break out of it, which defeats the purpose.
2) Create a new field in the audit struct per task, contid (which exists in this patchset already). This field is assigned a nonce (likely using the ida_simple_get api) if the CAP_AUDIT_SETNS capability is dropped, which would also generate an audit log message (similar to the one generated in audit_set_contid)
Would this field replace it, or are you suggesting adding a field of a slightly different name?
An earlier proposal had used a kernel-assigned container serial number to ensure each new container had a unique ID, but this was rejected partly due to the need for the orchestrator to read back that new ID to learn what it was, partly because the orchestrator lost the ability to use IDs that made sense to it, and partly because of the inability for the orchestrator to add a new process to an existing container.
3) If a process calls fork/clone with any of the CLONE_NEW* flags set, the CAP_AUDIT_SETNS capability is restored to the child process, and its contid value is reset to AUDIT_CID_UNSET (allowing for a process in a container to start a new container, but not to change its own 'container nature')
I don't think we want to allow a contained process to break out of its container, even with a new capability.
I'll have to reflect on this idea/approach to understand its goal and see if it solves a challenge we currently have...
I think this approach may have a few advantages: a) It provides an established mechanism (the capabilities subsystem) to provide a gated point at which the above container definition becomes locked from the perspective of the initial process inside the container. While the parent may re-grant the privilege, it creates an environment whereby dropping the capability both locks the nature of what the container is (a unique set of namespaces and cgroups) and provides a unique id to the container for audit purposes
As indicated above, we think we don't want to prevent a task in a container from creating a new namespace that would inherit its creator's contid.
b) It removes the need to manage container ids from userspace, reducing the risk of errors in assigning duplicate identifiers, and reduces code size by removing the need to create a new proc file and validate the information passed through it.
We had decided that we wanted to delegate that responsibility to userspace intentionally. How would you propose discovering the newly created contid if it were assigned from the kernel?
c) It reduces complexity in the kernel code. The kernel can guarantee unique identifiers by using the dropping of the gating CAP_AUDIT_SETNS capability as the moment to generate the id.
This also (may) smooth upstream acceptance, because from a kernel standpoint, all we're doing is adding a capability to an established interface, and generating a nonce for the purposes of audit on the dropping of that capability. From user space it's nice because we can consider the dropping of the ability to enter a new namespace as the border between a process being outside of, and inside of, a container.
Interesting... More reflection required...
We may have already gone too far down the path of the existing implementation to consider a change of this magnitude, but I wanted to bring this up before we were locked into a design.
I think we still have that flexibility.
On 2018-12-24 04:23, Neil Horman wrote:
Right, I think it's CAP_AUDIT_CONTROL, but no matter, and you are absolutely right: the container id is unwriteable by any process that doesn't have that capability. That said, I was less worried about a contained process changing its ID, and more worried about a contained process changing properties that an orchestrator might associate with it. i.e. a contained process may have its container id be fixed, but it could easily still call setns on itself, and enter the namespace of another process, breaking whatever association the orchestrator might have assumed would be established. My question was really an attempt to enforce that implied mapping by creating a capability control that could block entry to other processes' namespaces in such a way that the orchestrator could preserve its notion of what a container was, without imbuing the kernel with any knowledge of containers (and hopefully at the same time, creating a trigger mechanism that the kernel audit code could use to auto-generate said ids). If you think that's too much information in the kernel, that's fine, but I wanted to ask the question.
It gets slightly more confusing if you want to allow nested container orchestrators, but you're already dealing with other problems if you are going this route.
Yeah, I'm not super worried about that (though bifurcating capability control between setns and unshare might be useful here, at least for the purposes of this conversation)
Once again, managing an audit container ID is gated by CAP_AUDIT_ADMIN so it is unlikely to be an issue. It is also worth noting that once the process spawns any children, or additional threads, you can't change the audit container ID.
Yes, but it's not changing the audit container id that I'm asking about; it's changing the namespaces of a given set of processes with an immutable container id that I'm concerned with. Maybe it doesn't matter for the purposes of audit, but it seems like it should. As an example, Process A is spawned by an orchestrator, and assigned net namespace 1, and container id 10. If Process A then forks a child with CLONE_NEWNET set, creating process B, and it (process A) then calls setns(, ), we have a situation in which process A has entered a new net namespace, but kept its old container id, all without the orchestrator's knowledge. My question is, does the orchestrator care? I'm assuming here that the entire purpose of creating a container id is to have a simple handle to refer to a unique set of namespaces and cgroups that the orchestrator can track, and allowing this change breaks the implied mapping of that handle to those namespaces. If it doesn't matter, then please let me know, and I can drop this entirely; it just seems like it should.
The orchestrator should not care about a process creating a new namespace, and if it does, it should remove the capability that allows it to do so (CAP_SYS_ADMIN). (That raises the question about creating a new capability for managing namespaces since the capability that currently gates that action is a bit overloaded.)
I had previously thought this through and there was something else preventing a process from setting its own namespace to cross into another container's space, but I'm not remembering it now... It is certainly possible for multiple containers to share a namespace, which is addressed towards the end of the v4 patchset.
@rgbriggs @pcmoore Forgive me for consolidating your above responses, but the conversation is getting lengthy and I'm having trouble keeping up with all the comments. To abbreviate your thoughts on my proposed design changes: @rgbriggs, please feel free to reflect and comment on them as you see fit, but my goal with them was really twofold:
1) To understand what the functional goal was in this patch set from a userspace semantics standpoint (i.e. what does a container id mean to an orchestrator)
2) To suggest some improvements to the implementation of those semantics, should my assumptions about (1) be correct
If you think there are improvements to be made with my suggestions/thoughts, great. If not, that's also fine.
I think, based on what you have both said, this is my understanding of the user space semantics, as you see them:
a) A container id is a write once nonce, set by an orchestrator on an initial process in a container (for some arbitrary definition of the term container), and inherited by its children. Once set, it is immutable.
b) A container id is assigned to a process and its children, but has no fixed correlation to the same set of namespaces and cgroups. If an orchestrator wishes to make the set of processes with a given container id have a fixed set of namespaces and cgroups, it (the orchestrator) should drop the lead process's CAP_SYS_ADMIN capability prior to it forking any children
c) The uniqueness of a container is managed in userspace. It is the responsibility of an orchestrator to ensure that all containers in a system (for any definition of container it wishes to enforce) have a unique id, or that multiple containers sharing an id do so according to a sane policy.
Do you both agree with points (a),(b), and (c)? If not, please correct me. If you do agree, then the comments below become valid:
1) Regarding point (a), it makes sense to me, more or less. My goal with my alternate proposal was to take the generation of a container id out of the hands of userspace so as to ease the mechanics of generating said nonce (doing it in the kernel allows for uniqueness very easily, but requires embedding policy in the kernel to trigger its generation based on a set of events, which is tantamount to the kernel enforcing what a container is). I would like to point out that what you are describing with this nonce also sounds very similar to a session id to me (i.e. a process that calls setsid() to start a new session could be considered a container in the same way that your container id would denote it, and potentially be usable without any kernel changes). Just some food for thought.
2) Regarding point (b), I'm fine with that. Userspace can very easily drop CAP_SYS_ADMIN to prevent the unsharing of namespaces within a process tree. That said, while the fork system call gates the unsharing of namespaces with that capability, the unshare and setns system calls do not appear to, so there is, I think, some additional work required here to enforce this capability as it pertains to namespaces. As a philosophy however, I'm definitely on board with the idea of using this capability to gate namespace creation and assignment.
3) Regarding point (c), this actually worries me a lot. While I understand the desire to manage container id assignment in user space, it relies on the assumption that there is a single orchestrator running in userspace at one time. Any single orchestrator is capable of ensuring each container receives a unique id, but the interface as designed makes no allowance for the parallel execution of two orchestrators. It would be simple to obfuscate the audit logs by simply having two copies of openshift running. Any sufficiently privileged process can write the container id of any process, and duplicate an existing container id, leading to that field in the audit log becoming useless or worse, intentionally misleading. I think some rework is called for there.
On 2018-12-25 17:10, Neil Horman wrote:
a) A container id is a write once nonce, set by an orchestrator on an initial process in a container (for some arbitrary definition of the term container), and inherited by its children. Once set, it is immutable.
Correct. My understanding is that an orchestrator can inject commands into a container (usually for config) and so would need to run a process and "attach" it to an existing container. It is quite likely I've misunderstood and it is somehow communicating with an existing process in that container to get that information across.
b) A container id is assigned to a process and its children, but has no fixed correlation to the same set of namespaces and cgroups. If an orchestrator wishes to make the set of processes with a given container id have a fixed set of namespaces and cgroups, it (the orchestrator) should drop the lead process's CAP_SYS_ADMIN capability prior to it forking any children
I believe so.
c) The uniqueness of a container is managed in userspace. It is the responsibility of an orchestrator to ensure that all containers in a system (for any definition of container it wishes to enforce) have a unique id, or that multiple containers sharing an id do so according to a sane policy.
Yes.
1) Regarding point (a), it makes sense to me, more or less. My goal with my alternate proposal was to take the generation of a container id out of the hands of userspace so as to ease the mechanics of generating said nonce (doing it in the kernel allows for uniqueness very easily, but requires embedding policy in the kernel to trigger its generation based on a set of events, which is tantamount to the kernel enforcing what a container is). I would like to point out that what you are describing with this nonce also sounds very similar to a session id to me (i.e. a process that calls setsid() to start a new session could be considered a container in the same way that your container id would denote it, and potentially be usable without any kernel changes). Just some food for thought.
This embedding of the container definition policy enforcement in the kernel was the exact objection of Casey Schaufler.
I will have to look at setsid more closely. I assumed you were talking about the audit sessionid which has been raised during the design proposals along with loginuid, but they aren't quite the same.
2) Regarding point (b), I'm fine with that. Userspace can very easily drop CAP_SYS_ADMIN to prevent the unsharing of namespaces within a process tree. That said, while the fork system call gates the unsharing of namespaces with that capability, the unshare and setns system calls do not appear to, so there is, I think, some additional work required here to enforce this capability as it pertains to namespaces. As a philosophy however, I'm definitely on board with the idea of using this capability to gate namespace creation and assignment.
I believe they are all covered. clone(2) checks CAP_SYS_ADMIN if any CLONE_NEW* flags are present, setns(2) does in each of the ns->ops->install() calls, and unshare(2) checks in unshare_nsproxy_namespaces().
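A quick way to see the gate from user space (my own sketch, assuming the caller lacks CAP_SYS_ADMIN in its current user namespace) is that a plain unshare(CLONE_NEWNET) fails with EPERM:

```c
/* Check of the CAP_SYS_ADMIN gate on namespace creation: without the
 * capability, unshare(CLONE_NEWNET) is expected to fail with EPERM. */
#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	if (unshare(CLONE_NEWNET))
		printf("unshare(CLONE_NEWNET) failed: %s\n", strerror(errno));
	else
		printf("new net namespace created (caller had CAP_SYS_ADMIN)\n");
	return 0;
}
```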
3) Regarding point (c), this actually worries me a lot. While I understand the desire to manage container id assignment in user space, it relies on the assumption that there is a single orchestrator running in userspace at one time. Any single orchestrator is capable of ensuring each container receives a unique id, but the interface as designed makes no allowance for the parallel execution of two orchestrators. It would be simple to obfuscate the audit logs by simply having two copies of openshift running. Any sufficiently privileged process can write the container id of any process, and duplicate an existing container id, leading to that field in the audit log becoming useless or worse, intentionally misleading. I think some rework is called for there.
This would be an issue with parallel or nested orchestrators alike. Parallel orchestrators on one machine had not been considered. This was the reason for my preference of a serial contid generated in the kernel or pseudo-random UUID contid generated by the orchestrator that would be checked for uniqueness upon set. However, my understanding is that would prevent the orchestrator from injecting commands into a container it previously spawned. We had considered allowing an orchestrator to set the contid only of its own descendants.
@rgbriggs Hey, thanks for the response. Answers to your thoughts:
I will have to look at setsid more closely. I assumed you were talking about the audit sessionid which has been raised during the design proposals along with loginuid, but they aren't quite the same.
No, setsid() is the system call that assigns a unique session id to a process group leader in the namespace of the process. If called prior to entering any new namespaces, it is unique within the process namespace of the orchestrator, and as such, could be used as an audit container id that is guaranteed to be unique for the lifetime of the container. Using it might also be nice because it uses existing infrastructure to assign a unique id to a process group, which, it seems, based on your prior answers, is more or less what you are considering a container. As before, just some food for thought.
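For illustration, a minimal example of what I mean (my own sketch, nothing to do with the patchset): the child of a fork() is not a process group leader, so setsid() succeeds and returns a fresh session id.

```c
/* setsid() demonstration: the forked child starts a new session and
 * getsid() reports the new session id (equal to the child's pid),
 * unique within the child's pid namespace. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	pid_t child = fork();

	if (child == 0) {
		pid_t sid = setsid();	/* new session; sid equals the child's pid */
		printf("child pid %d, new session id %d\n", getpid(), sid);
		_exit(0);
	}
	waitpid(child, NULL, 0);
	printf("parent's session id is still %d\n", getsid(0));
	return 0;
}
```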
I believe they are all covered. clone(2) checks CAP_SYS_ADMIN if any CLONE_NEW* flags are present, setns(2) does in each of the ns->ops->install() calls, and unshare(2) checks in unshare_nsproxy_namespaces()
Yep, you're right, I hadn't dug deeply enough, apologies.
This would be an issue with parallel or nested orchestrators alike. Parallel orchestrators on one machine had not been considered. This was the reason for my preference of a serial contid generated in the kernel or pseudo-random UUID contid generated by the orchestrator that would be checked for uniqueness upon set. However, my understanding is that would prevent the orchestrator from injecting commands into a container it previously spawned. We had considered allowing an orchestrator to set the contid only of its own descendants.
I'm not sure I follow the reasoning above. A serial or random UUID I think works just as well as my setsid() suggestion above (arguably better), especially if it allows for a uniqueness guarantee. Is the concern that if the kernel generates a random id, the orchestrator won't know what the id is, thereby preventing a mapping of the id in the audit log to the process set? If so, that's an easy fix: your write-only proc file can become a read-only proc file that exports the random value to the orchestrator. Or is there something more going on here?
On 2018-12-27 11:44, Neil Horman wrote:
Is the concern that if the kernel generates a random id, the orchestrator won't know what the id is, thereby preventing a mapping of the id in the audit log to the process set? If so, that's an easy fix: your write-only proc file can become a read-only proc file that exports the random value to the orchestrator.
The "audit: read container ID of a process" patch does that. It was added as a debug feature, but is being considered more seriously for inclusion due to having added CAP_AUDIT_CONTROL to restrict its use to try to reduce abuse.
I would strongly agree with that. Even if the kernel is not responsible for computation of a unique id, it should be able to validate the uniqueness of an id to ensure the integrity of the audit log in the presence of multiple orchestrators. And allowing that container id to be read back is essential in the event that an orchestrator restarts with containers outstanding, so that the process->container id map can be rebuilt.
Test case v1 PR: https://github.com/linux-audit/audit-testsuite/pull/83
Was the CONTAINER_ID patch released in RHEL? What do I need to do on RHEL to produce an audit record with CONTAINER_ID?
On 2019-12-22 17:43, secrnd wrote:
Was the CONTAINER_ID patch released in RHEL? What do I need to do on RHEL to produce an audit record with CONTAINER_ID?
No, because it hasn't been merged upstream yet. What you could do would be to provide upstream review.
Hi @secrnd, distro specific questions should be directed towards the distros themselves. This GitHub organization/repo is for the upstream development of the Linux audit subsystem, it is not a Red Hat support channel.
V8 post: https://lkml.org/lkml/2019/12/31/229 https://lore.kernel.org/lkml/cover.1577736799.git.rgb@redhat.com/T/#t https://www.redhat.com/archives/linux-audit/2019-December/msg00049.html latest testsuite PR: https://github.com/linux-audit/audit-testsuite/pull/91 code is also at git://toccata2.tricolour.ca/linux-2.6-rgb.git ghak90-audit-containerID.v8
2020-12-21 post v10 kernel https://www.redhat.com/archives/linux-audit/2020-December/msg00047.html https://lkml.org/lkml/2020/12/21/338 post v10 user https://www.redhat.com/archives/linux-audit/2020-December/msg00059.html https://lkml.org/lkml/2020/12/21/361 The upstream kernel audit maintainer quickly pointed out that the ACKs on the first patch were questionable; I acknowledged they were out of date, which triggered another version.
Split this off from https://github.com/linux-audit/audit-kernel/issues/32, leaving that issue for addressing namespace identifiers in audit records, should they be deemed necessary.
Implement an audit container identifier.
Add the ability to identify a task's assigned container using an audit container identifier. The registration process involves writing a u64 to the file audit_containerid in the /proc filesystem under the PID of the target container task. This will result in a CONTAINER_ID record to log the event. Subsequent audit events that involve this task will have an auxiliary record CONTAINER to identify the container involved.
Depends: https://github.com/linux-audit/audit-userspace/issues/51
See: https://github.com/linux-audit/audit-kernel/wiki/RFE-Audit-Container-ID
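For illustration, a minimal sketch of that registration step (the helper name is made up; the caller is assumed to be an orchestrator holding the required capability, CAP_AUDIT_CONTROL per the discussion above):

```c
/* Sketch: register a u64 audit container ID for a target task by writing
 * it to /proc/<pid>/audit_containerid, as described in this issue.  A
 * CONTAINER_ID record should be emitted on success. */
#include <stdio.h>
#include <sys/types.h>

static int set_contid(pid_t pid, unsigned long long contid)
{
	char path[64];
	FILE *f;
	int rc;

	snprintf(path, sizeof(path), "/proc/%d/audit_containerid", pid);
	f = fopen(path, "w");
	if (!f)
		return -1;			/* missing patch or missing capability */
	rc = (fprintf(f, "%llu", contid) > 0) ? 0 : -1;
	fclose(f);
	return rc;
}
```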
History: