Closed Neamar closed 1 year ago
Are you using a named volume or a path passed in as a volume?
Even ordinary filesystems have a reserved space which only root can write to. E.g:
$ sudo dumpe2fs -h /dev/mapper/fedora_localhost--live-home
dumpe2fs 1.46.5 (30-Dec-2021)
Filesystem volume name: <none>
Last mounted on: /home
Filesystem UUID: a1c0426a-64a8-4252-801b-be0e904e1cdd
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 26058752
Block count: 104230912
Reserved block count: 5211545
...
Note the reserved block count here: those blocks are free, but only root may consume them.
Is this the case for the filesystem that gets passed into the container? Is it out of its non-reserved space?
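One way to check this from a shell (a sketch: `/` stands in for whatever mount actually backs the volume; substitute your own path): GNU stat's `%f` reports all free blocks including the root-only reserve, while `%a` reports only the blocks available to unprivileged users, so the gap between the two is the reserved space.

```shell
# Assumption: MNT should point at the mount backing /var/lib/mysql.
MNT=/
# %f = free blocks (including the root-only reserve)
# %a = free blocks available to unprivileged users
free=$(stat -f -c %f "$MNT")
avail=$(stat -f -c %a "$MNT")
echo "free=$free avail=$avail reserved_gap=$((free - avail))"
# avail == 0 while free > 0 is exactly the "root can write, mysql cannot" state.
```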
The good news is you can change this reserved space with filesystem tools (tune2fs -r for extX; other filesystems have their own equivalents), especially if it's a volume root doesn't write to.
$ sudo tune2fs -r 0 /dev/mapper/fedora_localhost--live-home
tune2fs 1.46.5 (30-Dec-2021)
Setting reserved blocks count to 0
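For reference, the reserve shown in the dumpe2fs output above works out to the ext4 default of 5% of the filesystem; tune2fs -m <pct> sets the reserve as a percentage instead of an absolute block count. A quick arithmetic check using the quoted counts:

```shell
# Block counts copied from the dumpe2fs output above; yours will differ.
block_count=104230912
reserved_blocks=5211545
awk -v r="$reserved_blocks" -v t="$block_count" \
    'BEGIN { printf "reserved: %.1f%% of the filesystem\n", 100 * r / t }'
# → reserved: 5.0% of the filesystem
```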
So there is nothing explicit in the container that controls space. There may be constraints from the container runtime.
And from your logs here, we'll need to fix the SEGV that occurs under this condition. We should clean up more gracefully than that.
You're correct, this did fix my issue. I wasn't aware of this mechanism, thanks for the help!
Since you mentioned you want to do some cleanup, I'm not closing -- feel free to close if you'd rather track that somewhere else, but from my point of view this is all sorted out, thanks!
Releasing reserved space has saved me so many times in DBA work, whether to get to a clean state before migrating or to buy a bit more uptime. Glad to share that.
I tried to reproduce this error during crash recovery and was unable to do so on the latest 10.5. So closing.
At some point, I ran out of disk space, which led to a MariaDB 10.5 crash. I cleaned up some files, and tried restarting the container. I got the following error messages:
The error message does say "Probably out of disk space". So I ran docker exec -it container bash, and from /var/lib/mysql I ran fallocate -l 12M ibtmp1. This worked fine, since I do have disk space. However, if I do su mysql and then run fallocate -l 12M ibtmp1:

So... it seems the mysql user is out of space, but root is not. Am I missing something? Is there some kind of permission that would prevent mysql from actually writing to this file?
(/var/lib/mysql is mounted as a volume, but I don't know if that's relevant)
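For anyone hitting the same symptom: GNU df's "Avail" column already subtracts the root-only reserve, which is why root's fallocate can succeed while the mysql user's fails. A sketch of the check (run against `/` here; inside the container you would point it at /var/lib/mysql):

```shell
# "Avail" excludes the reserved blocks, which only root may consume.
df -B1 --output=size,avail / | tail -1
# If Avail shows 0 while the filesystem still has free blocks,
# unprivileged writes (like mysql's) will fail with ENOSPC.
```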
tl;dr:
Full logs: