canonical / lxd

Powerful system container and virtual machine manager
https://canonical.com/lxd
GNU Affero General Public License v3.0

apparmor="DENIED" operation="mount" info="failed flags match" name="/run/" pid=22586 comm="mount" flags="rw, nosuid, nodev, remount" #9977

Closed · jdstrand closed this issue 2 years ago

jdstrand commented 2 years ago

Required information

Issue description

I stopped some containers and noticed this in the logs:

audit: type=1400 audit(1646058774.323:266): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-foo_</var/snap/lxd/common/lxd>" name="/run/" pid=22586 comm="mount" flags="rw, nosuid, nodev, remount"
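For reference, error=-13 is EACCES, and the profile named in the denial (lxd-foo_</var/snap/lxd/common/lxd>) is the per-container AppArmor profile that LXD generates. A quick way to confirm the container is confined by that profile, assuming the apparmor userspace tools are installed on the host:

# "foo" is the container name from the log line above
$ sudo aa-status | grep lxd-foo
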
stgraber commented 2 years ago

@jdstrand what are you running in that container?

jdstrand commented 2 years ago

> @jdstrand what are you running in that container?

Two different Ubuntu 20.04 (focal) containers:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:    20.04
Codename:   focal
stgraber commented 2 years ago

Interesting. I tried 20.04 here on a 22.04 host and didn't hit it, so it likely needs some specific combination of host and container.

jdstrand commented 2 years ago

I tried again just now, on an up-to-date 18.04 host:

$ cat /proc/version_signature 
Ubuntu 4.15.0-169.177-generic 4.15.18
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.6 LTS
Release:    18.04
Codename:   bionic
$ lxc version
Client version: 4.0.9
Server version: 4.0.9

SSH into the (up-to-date) container and shut it down:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:    20.04
Codename:   focal
$ sudo shutdown -h now

Then on the host, out pops:

Mar  1 08:21:20 <host> kernel: [74640.020501] audit: type=1400 audit(1646144480.291:300): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-<container>_</var/snap/lxd/common/lxd>" name="/run/" pid=29676 comm="mount" flags="rw, nosuid, nodev, remount"
...
Mar  1 08:21:22 <host> kernel: [74642.307476] audit: type=1400 audit(1646144482.575:301): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-<container>_</var/snap/lxd/common/lxd>" pid=29770 comm="apparmor_parser"
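A handy way to catch these denials live is to follow the kernel log on the host while the container shuts down (dmesg --follow works as well):

$ sudo journalctl -k -f | grep 'apparmor="DENIED"'
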
stgraber commented 2 years ago

So, in this case, a 20.04 container on an 18.04 host with a non-HWE kernel seems to cause systemd in the container to do a rw remount rather than a ro remount.

What storage driver are you using?
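To illustrate the distinction above, here is a sketch of the two remount variants, runnable inside the container; the read-only form is the usual shutdown path, while the read-write form matches the flags in the denial:

# Usual shutdown path: remount /run read-only
$ sudo mount -o remount,ro /run
# What the denial shows instead: a read-write remount
$ sudo mount -o remount,rw,nosuid,nodev /run
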

stgraber commented 2 years ago

(This should be fixed in any case with the PR I sent, but just interesting to know what's triggering it)
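To check what the generated profile allows, one can inspect its mount rules on the host. The path below is an assumption based on the LXD_DIR embedded in the profile name; substitute the real container name for foo:

$ sudo grep mount /var/snap/lxd/common/lxd/security/apparmor/profiles/lxd-foo
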

jdstrand commented 2 years ago

> So, in this case, a 20.04 container on an 18.04 host with a non-HWE kernel seems to cause systemd in the container to do a rw remount rather than a ro remount.
>
> What storage driver are you using?

  storage: dir
  storage_version: "1"
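For anyone checking their own setup, that snippet comes from the server info, and the configured pools can also be listed directly (assuming the lxc client points at the local daemon):

$ lxc info | grep -A1 'storage:'
$ lxc storage list
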
jdstrand commented 2 years ago

> (This should be fixed in any case with the PR I sent, but just interesting to know what's triggering it)

Curious if this will make it back to 4.0/stable?

stgraber commented 2 years ago

Yeah, it will. 4.0.10 will be our last LTS point release that includes bugfixes, as it will come out after the 5.0 LTS; after that, the 4.0 branch will go to security updates only.
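
For reference, a snap-installed LXD shows which track it follows, and it can be pinned to the 4.0 LTS series to pick up 4.0.10 when it lands:

$ snap info lxd | grep tracking
$ sudo snap refresh lxd --channel=4.0/stable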