felipehw opened this issue 3 years ago
What's the output from:
$ podman start --attach rhel-toolbox-8.2
$ podman start --attach rhel-toolbox-8.2
level=debug msg="Running as real user ID 0"
level=debug msg="Resolved absolute path to the executable as /usr/bin/toolbox"
level=debug msg="TOOLBOX_PATH is /usr/bin/toolbox"
level=debug msg="Creating /run/.toolboxenv"
level=debug msg="Monitoring host"
level=debug msg="Path /run/host/etc exists"
level=debug msg="Resolved /etc/localtime to /run/host/usr/share/zoneinfo/America/Sao_Paulo"
level=debug msg="Binding /etc/machine-id to /run/host/etc/machine-id"
level=debug msg="Creating /run/systemd/journal"
level=debug msg="Binding /run/systemd/journal to /run/host/run/systemd/journal"
level=debug msg="Creating /run/udev/data"
level=debug msg="Binding /run/udev/data to /run/host/run/udev/data"
level=debug msg="Creating /tmp"
level=debug msg="Binding /tmp to /run/host/tmp"
level=debug msg="Creating /var/lib/flatpak"
level=debug msg="Binding /var/lib/flatpak to /run/host/var/lib/flatpak"
level=debug msg="Creating /var/lib/systemd/coredump"
level=debug msg="Binding /var/lib/systemd/coredump to /run/host/var/lib/systemd/coredump"
level=debug msg="Creating /var/log/journal"
level=debug msg="Binding /var/log/journal to /run/host/var/log/journal"
level=debug msg="Creating /var/mnt"
level=debug msg="Binding /var/mnt to /run/host/var/mnt"
mount: /var/mnt: wrong fs type, bad option, bad superblock on /run/host/var/mnt, missing codepage or helper program, or other error.
Error: failed to bind /var/mnt to /run/host/var/mnt
$
These are the important lines:
level=debug msg="Binding /var/mnt to /run/host/var/mnt"
mount: /var/mnt: wrong fs type, bad option, bad superblock on /run/host/var/mnt, missing codepage or helper program, or other error.
Error: failed to bind /var/mnt to /run/host/var/mnt
Looks like there's a /var/mnt on your host, but there's something funky going on with it. What do you have there?
$ ls -l /
total 28
lrwxrwxrwx. 6 root root 8 abr 28 2020 home -> var/home
lrwxrwxrwx. 6 root root 7 abr 28 2020 mnt -> var/mnt
drwxr-xr-x. 24 root root 4096 fev 8 13:00 var
$ ls -l /var/
total 96
drwxr-xr-x. 4 root root 4096 abr 28 2020 home
drwxr-xr-x. 4 root root 4096 abr 29 2020 mnt
$ ls -l /var/mnt/
total 8
drwxr-xr-x. 2 root root 4096 abr 29 2020 backup
drwxr-xr-x. 5 root root 4096 abr 29 2020 home
$
$ cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Apr 28 18:47:33 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/fedora-root / ext4 defaults,x-systemd.device-timeout=0 1 1
UUID=f50db559-4e70-473b-9d3b-735ae227a73d /boot ext4 defaults 1 2
/dev/mapper/fedora-home /home ext4 defaults,x-systemd.device-timeout=0 1 2
/dev/disk/by-uuid/45a9b36c-d29c-434c-a966-2c1a21a3d171 /mnt/home ext4 defaults,x-gvfs-show,x-systemd.device-timeout=40 1 2
/dev/disk/by-id/usb-Seagate_Expansion_NAA2ZSY9-0:0-part1 /mnt/backup auto nosuid,nodev,nofail,noauto,x-gvfs-show 0 0
$
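As the comment block in that fstab notes, systemd generates mount units from these entries. A quick way to inspect the unit behind the /mnt/backup entry above (a hypothetical check, not from the original report):
$ systemd-escape --path --suffix=mount /mnt/backup
mnt-backup.mount
$ systemctl status mnt-backup.mount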
I had trouble running rhel-toolbox-8.3 on my SB33, which I reported in https://bugzilla.redhat.com/show_bug.cgi?id=1942576, but right now it is working for me, after I ran podman start --attach rhel-toolbox-8.3.
$ toolbox enter rhel-toolbox-8.3
Error: invalid entry point PID of container rhel-toolbox-8.3
$ podman start --attach rhel-toolbox-8.3
level=debug msg="Running as real user ID 0"
level=debug msg="Resolved absolute path to the executable as /usr/bin/toolbox"
level=debug msg="TOOLBOX_PATH is /usr/bin/toolbox"
level=debug msg="Creating /run/.toolboxenv"
level=debug msg="Monitoring host"
level=debug msg="Path /run/host/etc exists"
level=debug msg="Resolved /etc/localtime to /run/host/usr/share/zoneinfo/America/Sao_Paulo"
level=debug msg="Binding /etc/machine-id to /run/host/etc/machine-id"
level=debug msg="Creating /run/systemd/journal"
level=debug msg="Binding /run/systemd/journal to /run/host/run/systemd/journal"
level=debug msg="Creating /run/udev/data"
level=debug msg="Binding /run/udev/data to /run/host/run/udev/data"
level=debug msg="Creating /tmp"
level=debug msg="Binding /tmp to /run/host/tmp"
level=debug msg="Creating /var/lib/flatpak"
level=debug msg="Binding /var/lib/flatpak to /run/host/var/lib/flatpak"
level=debug msg="Creating /var/lib/systemd/coredump"
level=debug msg="Binding /var/lib/systemd/coredump to /run/host/var/lib/systemd/coredump"
level=debug msg="Creating /var/log/journal"
level=debug msg="Binding /var/log/journal to /run/host/var/log/journal"
level=debug msg="Creating /var/mnt"
level=debug msg="Binding /var/mnt to /run/host/var/mnt"
mount: /var/mnt: wrong fs type, bad option, bad superblock on /run/host/var/mnt, missing codepage or helper program, or other error.
Error: failed to bind /var/mnt to /run/host/var/mnt
@felipehw do you have something on your system at /mnt or /var/mnt?
@juhp
I see:
/dev/disk/by-id/usb-Seagate_Expansion_NAA2ZSY9-0:0-part1 /mnt/backup auto nosuid,nodev,nofail,noauto,x-gvfs-show 0 0
@juhp But is there a problem with this setup? I thought /mnt was the right place to put my fixed mount points.
I don't know, but maybe one (or more) of your mount options could be causing a problem :man_shrugging: This is what I use:
UUID=911852e5-45ad-4cfb-80a3-e8c4a5fac450 /var/mnt/extreme ext4 defaults,noauto,user
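With noauto and user in that entry, the filesystem is skipped at boot and an ordinary user can mount it on demand by naming the mount point. A hypothetical session using that /var/mnt/extreme entry:
$ mount /var/mnt/extreme     # no root needed, thanks to the 'user' option
$ umount /var/mnt/extreme    # the same user may unmount it again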
(my comment was in reply to Rishi's question to you).
Interesting.
@felipehw how did you mount those locations? Did you manually set those up? Or did you just plug the disk in, and let GNOME handle it? I am guessing that there was some manual configuration involved, because usually GNOME mounts removable devices under /media or /run/media.
I also see that you have a /mnt/home.
Toolbox bind mounts locations like /mnt, /var/mnt, /media and /run/media with the rslave (apologies for the terminology) flag. It means that if a device gets mounted inside one of those locations on the host, then it gets propagated inside the bind mount as well. In other words, devices mounted on the host show up inside the container.
To be honest, I am not sure I tested your particular use-case. I remember testing new mount points showing up on the host after the container was started.
@debarshiray
I've been a (more or less newbie) Linux user since the 2000s, so I have setups that I've been reproducing for many years. In my mind, if I want a stable and predictable mount point, I need to declare it in /etc/fstab, and the right place to create these static mount points is /mnt. Is this still correct?
I have two static mount points. One is a big partition with my user content (/mnt/home), always mounted. The other is a partition on a removable drive for backups. I chose /mnt/backup so I never need to reconfigure the software I use for backups, even if I swap the backup drive for a newer one (the auto-mount features choose their own paths, and I prefer that nothing automatic changes this drive's mount point).
I thought this was a "standard" setup...
Actually, I realised I am hitting the same problem, I think. If I try to start/enter my RHEL toolbox with an external drive mounted below /mnt, it fails to start up. But I can start it up by unmounting first (and then remounting it). This doesn't happen with a fedora-toolbox.
I can't enter any container that uses an image other than fedora-toolbox-34. I consider it the same bug, because the verbose output contains these suspicious lines:
DEBU Image: 'fedora-toolbox:34'
DEBU Release: '34'
The same lines appear in the initial message on this issue (but with 33 instead of 34). In my case I tried images from several different distros, including Fedora 33 based images, but only containers using Fedora 34 images seem to work. I'd say that toolbox expects to find a Fedora 34 (or 33) based container regardless of the image actually being in use, and that's the cause of this bug.
I have (had) a very similar issue that manifested itself in the exact same manner. At first the enter would fail with Error: failed to initialize container <NAME>, and on subsequent runs it would give Error: invalid entry point PID of container <NAME>.
The issue happened only on some older Ubuntu containers (18.04) but worked fine on Ubuntu 20.04 and Fedora containers. The culprit was an NFS mount I had in a subdirectory of /mnt. But this is not important.
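(For anyone suspecting something similar, a quick hypothetical way to spot mounts lurking below /mnt on the host:)
$ grep ' /mnt' /proc/mounts    # the second field is the mount point; matches /mnt and anything below it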
What I would like to discuss is the fact that running with high verbosity (toolbox -vv) did not really help with debugging. You have to podman start -ai your container to see the actual logs from the container.
The issue is that a successful invocation of toolbox -vv will show the container init logs via conmon, but a failed invocation will show nothing (while the error is in that output).
This is an issue I find very important.
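Until that is improved, the container's output can also be pulled up after a failed start without re-running it; a hypothetical session, assuming a container named my-toolbox:
$ podman logs my-toolbox              # entry point output from previous runs
$ podman start --attach my-toolbox    # re-run the entry point and stream its output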
Yes, I am seeing mount fstype errors with various images recently, including centos:
$ podman start -ai centos-8
:
level=debug msg="Creating directory /var/mnt"
level=debug msg="Binding /var/mnt to /run/host/var/mnt"
mount: /var/mnt: wrong fs type, bad option, bad superblock on /run/host/var/mnt, missing codepage or helper program, or other error.
Error: failed to bind /var/mnt to /run/host/var/mnt
I have the same problem if I run a rhel-packager container on a Silverblue system. Silverblue does have a /var/mnt which is not a symlink, but /mnt is already a symlink:
$ podman start --attach rhel-packager
[...]
level=debug msg="Preparing to redirect /mnt to /var/mnt"
level=debug msg="/var/mnt isn't a symbolic link"
Error: failed to redirect /mnt to /var/mnt: remove /mnt: directory not empty
$ stat /mnt
File: /mnt -> var/mnt
Size: 7 Blocks: 8 IO Block: 4096 symbolic link
Device: 0,37 Inode: 297 Links: 5
Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Context: system_u:object_r:mnt_t:s0
Access: 2023-05-17 10:08:01.304448725 +0300
Modify: 2023-03-09 23:29:03.834764314 +0200
Change: 2023-05-16 18:07:04.777748494 +0300
Birth: 2023-03-09 23:29:03.834764314 +0200
$ stat /var/mnt
File: /var/mnt
Size: 0 Blocks: 0 IO Block: 4096 directory
Device: 0,42 Inode: 272 Links: 1
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Context: system_u:object_r:mnt_t:s0
Access: 2023-05-16 19:03:33.743595860 +0300
Modify: 2023-03-09 23:32:27.341205891 +0200
Change: 2023-05-11 07:01:50.611932070 +0300
Birth: 2023-03-09 23:32:27.341205891 +0200
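For reference, a quick hypothetical check to tell the two layouts apart on any given system:
$ test -L /mnt && readlink /mnt                            # prints the target if /mnt is a symlink
var/mnt
$ test -L /var/mnt || echo '/var/mnt is a real directory'
/var/mnt is a real directory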
Seems like an old issue here, but I'm still running into it with these simple commands on Fedora 40:
❯ podman pull atlassian/default-image:4
...
Copying config cd33188942 done |
...
❯ podman tag cd33188942 my-image
❯ toolbox create -i my-image
❯ toolbox enter
Error: invalid entry point PID of container my-image
❯ toolbox --version
toolbox version 0.0.99.5
This happens every single time I reboot my system, and I have to rebuild all of my containers. Luckily I only have two and I have scripts to build them.
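For what it's worth, a minimal sketch of such a rebuild script, assuming the container and image names from the session above (not the reporter's actual script):
#!/bin/sh
set -e
# Recreate the toolbox container from the locally tagged image.
toolbox rm --force my-image || true    # drop the stale container; ignore if absent
toolbox create --image my-image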
Describe the bug
I tried to create a rhel container using the new --distro flag in a Silverblue environment, but without success.

Steps how to reproduce the behaviour

Output of toolbox --version (v0.0.90+)

Toolbox package info (rpm -q toolbox)

Output of podman version

Podman package info (rpm -q podman)

Info about your OS
Fedora Silverblue 33.20210208.0 (2021-02-08T00:58:46Z)

Additional context