can you make the image in question available?
No.
@baude Any idea why ports could be assigned to a second "pause" container instead of the intended one?
How is the pod created? Can you provide the command that was used to launch the pod?
Also, `podman inspect` output for both pod and container would be appreciated.
@markstos when using pods, all of the ports are assigned to the infra container. That is normal. Then each subsequent container in the pod joins the infra containers namespace. That is one of our definitions of a pod. As @mheon asked, can you provide the pod command used?
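For example, this is easy to see with a throwaway pod (a quick sketch; the image and port here are arbitrary, not your setup):

```console
$ podman pod create --name demo -p 8080:80
$ podman run -d --pod demo nginx:alpine
$ podman ps --pod
# The 0.0.0.0:8080->80/tcp mapping shows up on the "<podid>-infra" (pause)
# container; the nginx container lists no ports of its own, because it
# shares the infra container's network namespace.
```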
I used a `docker-compose.yml` file like this:
```yaml
version: "3.8"
services:
  devenv:
    image: devenv-img
    build:
      context: ./docker/ubuntu-18.04
      args:
        GITHUB_USERS: "markstos"
    container_name: devenv
    security_opt:
      - seccomp:unconfined
    # Expose port 2222 so you can ssh -p 2222 root@localhost
    ports:
      - "127.0.0.1:2222:22"
      - "127.0.0.1:3000:3000"
    tmpfs:
      - /tmp
      - /run
      - /run/lock
    volumes:
      - "/sys/fs/cgroup:/sys/fs/cgroup:ro"
      - "./:/home/amigo/unity"
```
`podman-compose` was used, but had to be patched first: https://github.com/containers/podman-compose/pull/200/commits/af832769a78fa906c34fff9960b938ef6453f63e

I ran `podman-compose up -d` using podman version 2.0.0. These are the commands it generated:
```console
podman pod create --name=unity --share net -p 127.0.0.1:3000:3000 -p 127.0.0.1:2222:22
f7829db54fc270e903fa55be97ae192d131c89a3c476ef0220a3942c8e1192fa
0
podman run --name=devenv -d --pod=unity --security-opt seccomp=unconfined --label io.podman.compose.config-hash=123 --label io.podman.compose.project=unity --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=devenv --tmpfs /tmp --tmpfs /run --tmpfs /run/lock -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /home/mark/git/unity/./:/home/amigo/unity --add-host devenv:127.0.0.1 --add-host devenv:127.0.0.1 devenv-img
50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9
```
Here's the `podman inspect devenv` output for the container:

```json
[
{
"Id": "50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9",
"Created": "2020-06-23T15:52:29.053978355-04:00",
"Path": "/usr/bin/fish",
"Args": [
"-c",
"exec /sbin/init --log-target=journal 3>&1"
],
"State": {
"OciVersion": "1.0.2-dev",
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2457442,
"ConmonPid": 2457430,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-06-23T15:52:32.468351379-04:00",
"FinishedAt": "0001-01-01T00:00:00Z",
"Healthcheck": {
"Status": "",
"FailingStreak": 0,
"Log": null
}
},
"Image": "471497bb87d25cf7d9a2df9acf516901e38c34d93732b628a42ce3e2a2fc5099",
"ImageName": "localhost/devenv-img:latest",
"Rootfs": "",
"Pod": "f7829db54fc270e903fa55be97ae192d131c89a3c476ef0220a3942c8e1192fa",
"ResolvConfPath": "/run/user/1000/containers/vfs-containers/4054570f5694e73f1297c76e4d59ec482b5e03cf006bc5ebfe63fe44362a6235/userdata/resolv.conf",
"HostnamePath": "/run/user/1000/containers/vfs-containers/50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9/userdata/hostname",
"HostsPath": "/run/user/1000/containers/vfs-containers/4054570f5694e73f1297c76e4d59ec482b5e03cf006bc5ebfe63fe44362a6235/userdata/hosts",
"StaticDir": "/home/mark/.local/share/containers/storage/vfs-containers/50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9/userdata",
"OCIConfigPath": "/home/mark/.local/share/containers/storage/vfs-containers/50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9/userdata/config.json",
"OCIRuntime": "runc",
"LogPath": "/home/mark/.local/share/containers/storage/vfs-containers/50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9/userdata/ctr.log",
"LogTag": "",
"ConmonPidFile": "/run/user/1000/containers/vfs-containers/50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9/userdata/conmon.pid",
"Name": "devenv",
"RestartCount": 0,
"Driver": "vfs",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"EffectiveCaps": [
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"BoundingCaps": [
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"ExecIDs": [],
"GraphDriver": {
"Name": "vfs",
"Data": null
},
"Mounts": [
{
"Type": "bind",
"Name": "",
"Source": "/sys/fs/cgroup",
"Destination": "/sys/fs/cgroup",
"Driver": "",
"Mode": "",
"Options": [
"noexec",
"nosuid",
"nodev",
"rbind"
],
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Name": "",
"Source": "/home/mark/Documents/RideAmigos/git/unity",
"Destination": "/home/amigo/unity",
"Driver": "",
"Mode": "",
"Options": [
"rbind"
],
"RW": true,
"Propagation": "rprivate"
}
],
"Dependencies": [
"4054570f5694e73f1297c76e4d59ec482b5e03cf006bc5ebfe63fe44362a6235"
],
"NetworkSettings": {
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": [],
"SandboxKey": ""
},
"ExitCommand": [
"/usr/bin/podman",
"--root",
"/home/mark/.local/share/containers/storage",
"--runroot",
"/run/user/1000/containers",
"--log-level",
"error",
"--cgroup-manager",
"cgroupfs",
"--tmpdir",
"/run/user/1000/libpod/tmp",
"--runtime",
"runc",
"--storage-driver",
"vfs",
"--events-backend",
"file",
"container",
"cleanup",
"50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9"
],
"Namespace": "",
"IsInfra": false,
"Config": {
"Hostname": "50edda8bf329",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:~/.yarn/bin",
"TERM=xterm",
"container=podman",
"YARN_VERSION=1.10.1",
"MONGO_VERSION=4.2.8",
"NODE_VERSION=12.15.0",
"LANG=C.UTF-8",
"MONGO_MAJOR=4.2",
"GPG_KEYS=E162F504A20CDF15827F718D4B7C549A058F8B6B",
"HOME=/root",
"NPM_CONFIG_LOGLEVEL=info",
"HOSTNAME=50edda8bf329"
],
"Cmd": [
"-c",
"exec /sbin/init --log-target=journal 3>&1"
],
"Image": "localhost/devenv-img:latest",
"Volumes": null,
"WorkingDir": "/unity",
"Entrypoint": "/usr/bin/fish",
"OnBuild": null,
"Labels": {
"com.docker.compose.container-number": "1",
"com.docker.compose.service": "devenv",
"io.podman.compose.config-hash": "123",
"io.podman.compose.project": "unity",
"io.podman.compose.version": "0.0.1",
"maintainer": "mark@rideamigos.com"
},
"Annotations": {
"io.container.manager": "libpod",
"io.kubernetes.cri-o.ContainerType": "container",
"io.kubernetes.cri-o.Created": "2020-06-23T15:52:29.053978355-04:00",
"io.kubernetes.cri-o.SandboxID": "unity",
"io.kubernetes.cri-o.TTY": "false",
"io.podman.annotations.autoremove": "FALSE",
"io.podman.annotations.init": "FALSE",
"io.podman.annotations.privileged": "FALSE",
"io.podman.annotations.publish-all": "FALSE",
"io.podman.annotations.seccomp": "unconfined",
"org.opencontainers.image.stopSignal": "37"
},
"StopSignal": 37,
"CreateCommand": [
"podman",
"run",
"--name=devenv",
"-d",
"--pod=unity",
"--security-opt",
"seccomp=unconfined",
"--label",
"io.podman.compose.config-hash=123",
"--label",
"io.podman.compose.project=unity",
"--label",
"io.podman.compose.version=0.0.1",
"--label",
"com.docker.compose.container-number=1",
"--label",
"com.docker.compose.service=devenv",
"--tmpfs",
"/tmp",
"--tmpfs",
"/run",
"--tmpfs",
"/run/lock",
"-v",
"/sys/fs/cgroup:/sys/fs/cgroup:ro",
"-v",
"/home/mark/Documents/RideAmigos/git/unity/./:/home/amigo/unity",
"--add-host",
"devenv:127.0.0.1",
"--add-host",
"devenv:127.0.0.1",
"devenv-img"
]
},
"HostConfig": {
"Binds": [
"/sys/fs/cgroup:/sys/fs/cgroup:ro,rprivate,noexec,nosuid,nodev,rbind",
"/home/mark/Documents/RideAmigos/git/unity:/home/amigo/unity:rw,rprivate,rbind"
],
"CgroupMode": "host",
"ContainerIDFile": "",
"LogConfig": {
"Type": "k8s-file",
"Config": null
},
"NetworkMode": "container:4054570f5694e73f1297c76e4d59ec482b5e03cf006bc5ebfe63fe44362a6235",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": [],
"CapDrop": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": [
"devenv:127.0.0.1",
"devenv:127.0.0.1"
],
"GroupAdd": [],
"IpcMode": "private",
"Cgroup": "",
"Cgroups": "default",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "private",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined"
],
"Tmpfs": {
"/run": "rw,rprivate,nosuid,nodev,tmpcopyup",
"/run/lock": "rw,rprivate,nosuid,nodev,tmpcopyup",
"/tmp": "rw,rprivate,nosuid,nodev,tmpcopyup"
},
"UTSMode": "private",
"UsernsMode": "",
"ShmSize": 65536000,
"Runtime": "oci",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "/libpod_parent/f7829db54fc270e903fa55be97ae192d131c89a3c476ef0220a3942c8e1192fa",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": 0,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
}
}
]
```
I don't see an option to run `podman inspect` on pods.
`podman pod inspect`
any chance we can sync up on irc? freenode.net #podman
btw, a couple of simple things we should have asked; apologies if i missed the information.
> all of the ports are assigned to the infra container.
Did I miss this in the docs? It's not intuitive to have port mappings appear on a container other than the one I created. I also wasn't thrilled to see the "pause" container come from a third-party service on the internet that I had no intention of pulling content from.
> can you see the ssh process running with `ps`
No. I presume that means I happened to break my own container about the time I also upgraded `podman`. I'm trying to get the container running under Docker now as a second point of reference.
Network mode is set to another container, which I'm assuming is the infra container (I don't see the ID in question in your first `podman ps`, so perhaps you recreated). Container config on the whole seems fine, so I no longer believe this is a network issue, but is probably related to the SSH daemon itself.
What init are you using in the container, systemd or something else?
@baude One obvious thing: `podman ps` isn't displaying ports correctly.
1.9:

```console
b4b47beefd3d  registry.fedoraproject.org/fedora:latest  bash  1 second ago    Up 1 second ago           0.0.0.0:2222->22/tcp  serene_tu
182529b785b3  registry.fedoraproject.org/fedora:latest  bash  15 seconds ago  Exited (0) 9 seconds ago  0.0.0.0:2222->22/tcp  pensive_chaum
64d111e06042  k8s.gcr.io/pause:3.2                            35 seconds ago  Up 15 seconds ago         0.0.0.0:2222->22/tcp  46ce3d0db44c-infra
```

2.0:

```console
182529b785b3  registry.fedoraproject.org/fedora:latest  bash  20 seconds ago  Exited (0) 13 seconds ago                        pensive_chaum
3f4e33ba8a41  registry.fedoraproject.org/fedora:latest  bash  5 days ago      Exited (0) 5 days ago                            testctr1
64d111e06042  k8s.gcr.io/pause:3.2                            39 seconds ago  Up 19 seconds ago          0.0.0.0:2222->22/tcp  46ce3d0db44c-infra
```
Hm. It's also ordering containers incorrectly... I'd expect the sort to be by time of creation, not by ID.
I'm using `systemd`. I was ssh'ing in fine before the upgrade. But I also have been tweaking the configuration all day, so it could be something on my end.
I built a test setup as close to yours as I could given the provided information (pod with port 2222 forwarded, container in that pod with systemd as init + sshd, added a user, SSH'd in from another machine to the public port, all rootless) and everything worked locally, so I think this is either environment, or some detail of the pod that is not clear from what is given here.
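For comparison, the shape of that test was something like this (a sketch; `some-systemd-sshd-image` and `testuser` are placeholders, not the exact names used):

```console
$ podman pod create --name sshtest -p 2222:22
$ podman run -d --pod sshtest some-systemd-sshd-image
# from another machine:
$ ssh -p 2222 testuser@<host-ip>
```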
I'm on the Kubernetes Slack server now. I forgot my IRC password.
@mheon Thanks for the attention. I'll test more with Docker as a control group reference and see if I can pinpoint some bug on my end that I introduced.
It booted fine with `docker-compose up -d` but not `podman-compose up -d`.
The plot thickens.
I'll see if I can put together a more useful case for you to reproduce from.
I've temporarily posted my Dockerfile here:
https://gist.github.com/markstos/9f7b982bc73106e4bb5a73e5524a3ec6
Once you've grabbed it, I'm going to take down the Gist.
I believe the last two things I was changing before it broke were setting `fish_user_paths` and looping over users to add their SSH keys to `authorized_keys` -- both happen in the last 20 lines of the file.
Grabbed, thanks. It's a little late in the day here, but I'll pick this up tomorrow and see if I can chase something down.
Might be a compose-specific bug, or might be a result of upgrading an existing 1.9 system to 2.0
I've reduced the test case a bit. Here's a script I successfully used to launch the container with 1.9 that fails with 2.0:
```bash
#!/bin/bash
podman run --detach \
  --name devenv \
  --security-opt "seccomp=unconfined" \
  --tmpfs /tmp \
  --tmpfs /run \
  --tmpfs /run/lock \
  --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
  --volume '../../:/home/amigo/unity' \
  --publish "127.0.0.1:2222:22" \
  --publish "127.0.0.1:3000:3000" \
  devenv-img
```
The result is the same-- it starts without apparent error, but I can't SSH in. This eliminates anything to do with pods.
Using `ps` I can confirm that there's an `init` process running under the expected user account, but no `sshd` process.

I'm going to try rolling back recent changes to my Dockerfile, assuming that my changes broke it, not `podman`.
I'd recommend checking the journal within the container to see why sshd is failing. Also, checking if port forwarding works at all would be helpful - if you use 8080:80 with a simple nginx container, can you access it?
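For instance, a throwaway check along these lines (the image choice is arbitrary):

```console
$ podman run -d --name portcheck -p 8080:80 nginx:alpine
$ curl -s http://127.0.0.1:8080/ | head -n 3   # should print nginx's default page
$ podman rm -f portcheck
```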
Partial fix for the `podman ps` issues I noticed in #6761
@mheon how can I check the journal in the container if I can't get into it?
I tried this to narrow down the issue: I rewrote my `start` command to give me an interactive shell instead of starting systemd. Then within the shell I started `sshd` manually with `sshd -D` -- that's how systemd would start it. Then I tried to SSH in, and that worked. I double-checked that systemd is set to start SSH at boot. So something changed which resulted in sshd not running when booted with systemd.
I don't think port forwarding is the issue, since `ps` shows no `sshd` process running.
@markstos `podman exec -t -i $CONTAINERNAME journalctl`?
one idea that might pay off would be to run the container (even in the pod) manually with `-it /bin/sh` and then run the sshd binary by itself. this should let you see if the binary actually runs, and then you can check the "wiring". btw, is anything being puked out in the container logs?
@mheon The command worked, but it found no logs.
@baude I did that (noted in a comment from about 30 minutes ago). sshd runs in that context. `podman container logs devenv` shows nothing.
I found the root cause by running with systemd but also with `-ti` so I could watch it boot. I don't know what this means, though:
```console
Welcome to Ubuntu 18.04.4 LTS!
Set hostname to <26058b6f356f>.
Failed to read AF_UNIX datagram queue length, ignoring: No such file or directory
Failed to create /user.slice/user-1000.slice/session-2.scope/init.scope control group: Permission denied
Failed to allocate manager object: Permission denied
[!!!!!!] Failed to allocate manager object, freezing.
Freezing execution.
```
I'm stepping AFK now, but am in the Kubernetes Slack server if that's helpful.
There's a little more logging before that final error message:
```console
systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
Detected virtualization container-other.
Detected architecture x86-64.
```
Hm. Can you try removing `--volume /sys/fs/cgroup:/sys/fs/cgroup:ro` and adding `--systemd=always`?
This will cause Podman to automatically prepare the container for having systemd run in it, including configuring an appropriate cgroup mount.
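i.e. something like this (a sketch based on your earlier script):

```console
$ podman run --detach --name devenv \
    --systemd=always \
    --security-opt "seccomp=unconfined" \
    --publish "127.0.0.1:2222:22" \
    --publish "127.0.0.1:3000:3000" \
    devenv-img
# --systemd=always also sets up tmpfs mounts on /run and /tmp itself, so the
# explicit --tmpfs flags from the original script should be unnecessary
```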
@mheon I tried that, but got the same result. I found a related Red Hat bug about it:
People also ran into the same issue in the past few months after upgrading LXC: https://bugs.funtoo.org/browse/FL-6897
Unprivileged systemd containers quit working for them too.
This issue also sounds related, and was fixed only for the root case, not the rootless case.
I tried to generate a reduced test case with a container that just contained systemd and sshd, but that triggered a different failure:
```console
$ podman run --systemd always --privileged -d -p "127.0.0.1:2222:22" minimum2scp/systemd-stretch
Trying to pull docker.io/minimum2scp/systemd-stretch...
Getting image source signatures
Copying blob eec13681aaa4 done
Copying blob 2217437ef5a2 done
Copying blob 82ed86786e13 done
Copying blob 063d2793dea0 done
Copying blob 11a85ad34c0b done
Copying config f03c1e5ac4 done
Writing manifest to image destination
Storing signatures
Error: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"tmpfs\\\" to rootfs \\\"/home/mark/.local/share/containers/storage/vfs/dir/1c2d9d6c99338794c37601792cfe73a34ade17655c87a9ab8f6b8f2c65605ad7\\\" at \\\"/tmp/runctop733130215/runctmpdir915618330\\\" caused \\\"tmpcopyup: failed to copy /home/mark/.local/share/containers/storage/vfs/dir/1c2d9d6c99338794c37601792cfe73a34ade17655c87a9ab8f6b8f2c65605ad7/run to /tmp/runctop733130215/runctmpdir915618330: lchown /tmp/runctop733130215/runctmpdir915618330/initctl: no such file or directory\\\"\"": OCI runtime command not found error
```
A variation using a similar image produced the same failure mode:
```console
$ podman run -d -p "127.0.0.1:2222:22" minimum2scp/systemd
Trying to pull docker.io/minimum2scp/systemd...
Getting image source signatures
Copying blob d4aaedabb7de done
Copying blob 2b2c197bb397 done
Copying blob c1e7846c2b6e done
Copying blob e51a3c06332d done
Copying blob 938abdf43fa0 done
Copying config af8b425bf0 done
Writing manifest to image destination
Storing signatures
Error: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"tmpfs\\\" to rootfs \\\"/home/mark/.local/share/containers/storage/vfs/dir/ec4f769dd31bda883f7271f7cc68ae37484e36ec29a19881c972b1c8c6fc35f1\\\" at \\\"/tmp/runctop953882458/runctmpdir205823217\\\" caused \\\"tmpcopyup: failed to copy /home/mark/.local/share/containers/storage/vfs/dir/ec4f769dd31bda883f7271f7cc68ae37484e36ec29a19881c972b1c8c6fc35f1/run to /tmp/runctop953882458/runctmpdir205823217: lchown /tmp/runctop953882458/runctmpdir205823217/initctl: no such file or directory\\\"\"": OCI runtime command not found error
```
Great, I have a one-line reduced test for you that fails in the same way:

```console
$ podman run -d -p "127.0.0.1:2222:22" solita/ubuntu-systemd-ssh
```

After running this, I can't SSH to the container and `ps` shows no `sshd` process running. I'm going to debug a bit more now.
Yep, there you go, this is my issue in a nutshell:
```console
$ podman run --systemd=always -it -p "127.0.0.1:2222:22" solita/ubuntu-systemd-ssh
systemd 229 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN)
Detected virtualization container-other.
Detected architecture x86-64.
Welcome to Ubuntu 16.04.5 LTS!
Set hostname to <81f350354616>.
Initializing machine ID from D-Bus machine ID.
Failed to read AF_UNIX datagram queue length, ignoring: No such file or directory
Failed to install release agent, ignoring: Permission denied
Failed to create /user.slice/user-1000.slice/session-2.scope/init.scope control group: Permission denied
Failed to allocate manager object: Permission denied
[!!!!!!] Failed to allocate manager object, freezing.
Freezing execution.
```
Does anyone have a copy of podman 1.9 handy to confirm if the reduced test case above worked before 2.0?
Is the host on Fedora with cgroup V2 or V1? Ubuntu V1?
Does everything work if you run as root?
The issue might be with cgroup V1 and systemd not being allowed to write to it.
The host is Ubuntu 20.04.
@rhatdan, first: it fails as root on Ubuntu 20.04 as well, but with a different error:

```console
$ sudo podman run --systemd=always -it -p "127.0.0.1:2222:22" solita/ubuntu-systemd-ssh
Error: AppArmor profile "container-default" specified but not loaded
```
The system supports cgroupsv2:
```console
$ grep cgroup /proc/filesystems
nodev	cgroup
nodev	cgroup2
```
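(Note: that only shows what the kernel supports; what's actually mounted at `/sys/fs/cgroup` is what matters. A quick way to check:)

```console
$ stat -fc %T /sys/fs/cgroup
# "cgroup2fs" means the unified v2 hierarchy; "tmpfs" usually means a
# v1/hybrid layout
```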
@rhatdan Does this work on your system?
```console
$ podman run --systemd=always -it -p "127.0.0.1:2222:22" solita/ubuntu-systemd-ssh
```
I think it's cgroups related:
```console
12:cpuset:/
11:pids:/user.slice/user-1000.slice/session-18506.scope
10:rdma:/
9:perf_event:/
8:blkio:/user.slice
7:net_cls,net_prio:/
6:cpu,cpuacct:/user.slice
5:hugetlb:/
4:devices:/user.slice
3:memory:/user.slice
2:freezer:/
1:name=systemd:/user.slice/user-1000.slice/session-18506.scope
0::/user.slice/user-1000.slice/session-18506.scope
```
The user slice is persisting from the host... in the container, shouldn't systemd's cgroup be `/`?
Maybe rootless can't change that.
**Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)**

/kind bug

**Description**
I was repeatedly building working containers with podman this morning when my OS (Ubuntu 20.04) notified me that podman 2.0 was available and I elected to install it.
Shortly afterward, I can no longer SSH to a newly built and launched container. I see this as output to `podman container list -a`:

This is frustrating: I don't see any references to a container named "pause", yet one is running and listening on the ports my container had published, while my container isn't listening on any ports at all.
I read the `podman` 2.0 release notes and don't see any notes about a related breaking change.

I did search the project for references to "infra containers" because I sometimes see that term mentioned in error messages. I find references to "infra containers" in the code, but I can't find references in the documentation.
They seem related to this issue, and it would be great if there were more accessible user documentation about "infra containers".
**Steps to reproduce the issue:**
**Describe the results you received:**

```console
Initializing machine ID from random generator.
Failed to create /user.slice/user-1000.slice/session-8.scope/init.scope control group: Permission denied
Failed to allocate manager object: Permission denied
[!!!!!!] Failed to allocate manager object.
```
**Describe the results you expected:**
For this test, the container should boot to the point where this line appears:
**Additional information you deem important (e.g. issue happens only occasionally):**

**Output of `podman version`:**

**Output of `podman info --debug`:**

**Package info (e.g. output of `rpm -q podman` or `apt list podman`):**

**Additional environment details (AWS, VirtualBox, physical, etc.):**