The /tmp file system in the global zone of a Gimlet is provided by tmpfs(4FS), which uses swap for backing store; i.e., the combination of physical anonymous memory and any attached swap files.
In its default configuration, one can relatively easily consume all available physical memory and all of the available swap file space just by writing to /tmp. In practice this means we start paging large quantities of data out to the swap zvol backed by the internal M.2 device, even though that data cannot be recovered after a reboot. If we're hoping to use the M.2 devices for (even temporary) large file storage, we should likely do that explicitly with a ZFS dataset.
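If we go that route, a minimal sketch of what an explicit, quota-bounded dataset might look like is below; the pool name, dataset name, quota, and mountpoint are all illustrative assumptions, not decided values:

    # Hypothetical: carve out an explicitly sized ZFS dataset for large
    # temporary files rather than letting them land in swap-backed /tmp.
    # The pool/dataset names, quota, and mountpoint are placeholders.
    zfs create -o quota=32g -o mountpoint=/scratch rpool/scratch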
Fortunately, we can pick a maximum size for each tmpfs mount in the system, using the size mount option as per mount_tmpfs(8). This can be done in at least two ways:
Remounting an existing tmpfs with a new size option. This essentially just updates the tm_anonmax member of the struct tmount, which is a quota of sorts. It won't reduce the size of any existing file system usage, but if the ark is built before the rain that's fine.
Adding the size option to the vfstab(5) entry that mounts the global zone /tmp file system. This file is shipped in the ramdisk image, so if we decide on a static size we could update the file and it would just be set at initial mount time.
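For illustration, hedged sketches of both approaches follow; the sizes are placeholders rather than chosen values, and the vfstab line assumes the stock swap-backed /tmp entry shipped in the ramdisk:

    # (1) Remount the existing global-zone /tmp with an explicit cap, per
    #     mount_tmpfs(8); as noted above this only adjusts the quota and
    #     does not shrink anything already written. The size is illustrative.
    mount -F tmpfs -o remount,size=2g swap /tmp

    # (2) Or carry a cap in the ramdisk's /etc/vfstab so it takes effect at
    #     the initial mount (again, the size here is just a placeholder):
    #device   device   mount  FS     fsck  mount    mount
    #to mount to fsck  point  type   pass  at boot  options
    swap      -        /tmp   tmpfs  -     yes      size=2g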
Another thing to investigate is what to do with other tmpfs mounts; e.g., /var/run is ostensibly mounted explicitly by /lib/svc/method/fs-minimal as part of boot and does not appear to be in vfstab. /etc/svc/volatile is, at least, an implementation detail of SMF and is unlikely to be a place where random processes or operators create a bunch of surprise files.
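As a starting point for that investigation, the tmpfs mounts on a running system can be enumerated (the output will of course vary per machine):

    # Show all mounted tmpfs file systems, their options, and current usage;
    # presumably none of them carry a size= cap today.
    mount -v | grep tmpfs
    df -h -F tmpfs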
Note also that every zone has at least one tmpfs file system, and that those are presumably also unconstrained in size today. In those cases, even if we were to change the size option, that would not constrain the amount of tmpfs used by the zone in aggregate; the zone can mount more tmpfs file systems or change the size option on an existing mount. The zone.max-swap resource control (see resource_controls(7)) constrains all uses of swap, including reservations by processes. While we should be setting this on each zone (to some well-selected value this margin cannot contain), it's possible we actually want a new, separate zone.max-tmpfs-size resource control to limit tmpfs specifically to a potentially smaller value.
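For the existing control, a rough zonecfg(8) sketch of setting zone.max-swap on a zone is below; the zone name and limit are placeholders, and on a real system we would presumably apply this from whatever machinery creates the zone rather than by hand:

    # Illustrative only: cap total swap reservations (which include tmpfs
    # usage) for a hypothetical zone at 1 GiB, denying anything beyond that.
    zonecfg -z examplezone
    zonecfg:examplezone> add rctl
    zonecfg:examplezone:rctl> set name=zone.max-swap
    zonecfg:examplezone:rctl> add value (priv=privileged,limit=1073741824,action=deny)
    zonecfg:examplezone:rctl> end
    zonecfg:examplezone> commit
    zonecfg:examplezone> exit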