Open aqilbeig opened 7 months ago
Can you upload the full journal contents? sudo journalctl -b0
Can you share your ignition file as well?
Are you blocking modprobe somehow? This line from dmesg suggests something is wrong with module loading in general.
[ 4.447521] request_module fs-squashfs succeeded, but still no fs?
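For context, and assuming the root cause that surfaces later in this thread: an `install squashfs /bin/true` override tells modprobe to run `/bin/true` instead of inserting the module, so the kernel's request_module call reports success while no squashfs driver is actually registered. A minimal illustration of why the caller still sees success:

```shell
# "install <module> <command>" makes modprobe run <command> instead of
# loading the module; /bin/true does nothing and exits 0, so the load
# request "succeeds" without registering any filesystem.
/bin/true
echo "exit status seen by the caller: $?"   # prints 0
```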
Output of cat /etc/modprobe.d/blacklist.conf:
blacklist cramfs # CIS v2.0.0 1.1.1.1
blacklist freevxfs # CIS v2.0.0 1.1.1.2
blacklist jffs2 # CIS v2.0.0 1.1.1.3
blacklist hfs # CIS v2.0.0 1.1.1.4
blacklist hfsplus # CIS v2.0.0 1.1.1.5
# Docker and Containerd are now sysext images built with squashfs
# blacklist squashfs # CIS v2.0.0 1.1.1.6
blacklist udf # CIS v2.0.0 1.1.1.7
blacklist vfat # CIS v2.0.0 1.1.1.8
blacklist usb-storage # CIS v2.0.0 1.1.23
blacklist dccp # CIS v2.0.0 3.4.1
blacklist sctp # CIS v2.0.0 3.4.2
blacklist rds # CIS v2.0.0 3.4.3
blacklist tipc # CIS v2.0.0 3.4.4
Please remove these lines:
blacklist squashfs # CIS v2.0.0 1.1.1.6
blacklist vfat # CIS v2.0.0 1.1.1.8
And check that you don't also have an entry like this:
install squashfs /bin/true
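The removal requested above can be sketched like this (a hypothetical one-liner, shown against a scratch copy so it is safe to dry-run; on the node you would edit /etc/modprobe.d/blacklist.conf as root and then reboot):

```shell
# Sketch only: work on a scratch copy first; the real file is
# /etc/modprobe.d/blacklist.conf and needs root to edit.
cat > /tmp/blacklist.conf <<'EOF'
blacklist hfsplus # CIS v2.0.0 1.1.1.5
blacklist squashfs # CIS v2.0.0 1.1.1.6
blacklist udf # CIS v2.0.0 1.1.1.7
blacklist vfat # CIS v2.0.0 1.1.1.8
EOF
# Drop only the squashfs and vfat entries; everything else stays.
sed -i -e '/^blacklist squashfs/d' -e '/^blacklist vfat/d' /tmp/blacklist.conf
cat /tmp/blacklist.conf
```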
cpt-master-ethos11thrashor1-890 ~ # cat /etc/modprobe.d/squashfs.conf
install squashfs /bin/true
Do we have to remove it from here as well ^^
@jepio thanks a lot for the quick replies.
Yes definitely. These modifications are directly responsible for the errors you are seeing. Also remove anything that says this:
install vfat /bin/true
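To catch any remaining overrides in one pass, a recursive grep over the usual modprobe configuration directories should work (the paths below are the common defaults; adjust for your image):

```shell
# List every line still referencing squashfs or vfat under the standard
# modprobe.d locations; an empty result means the overrides are gone.
grep -rn -E 'squashfs|vfat' \
    /etc/modprobe.d /run/modprobe.d /usr/lib/modprobe.d 2>/dev/null || true
```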
May I ask why you have these config files?
This is because of the CIS standards we are following: CIS 1.1.1.6, "Ensure mounting of squashfs filesystems is disabled".
Can you share more? How could I validate myself what change this CIS standard is requesting? And are all of these changes manually applied by you or is some tool generating the configs?
Please be careful with this kind of hardening approach, there may be more things here that subtly break your system.
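For anyone who lands here via the same CIS item: as far as I can tell from the benchmark text (paraphrased here, not the official audit script), the audit for 1.1.1.6 boils down to two checks, roughly:

```shell
# 1) A dry-run load should resolve to the /bin/true override rather
#    than the real module (requires modprobe; harmless if absent).
modprobe -n -v squashfs 2>/dev/null || true
# 2) The module must not be currently loaded.
if lsmod 2>/dev/null | grep -q '^squashfs'; then
    echo "squashfs is loaded (audit fails)"
else
    echo "squashfs not loaded"
fi
```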
Description
We are migrating our k8s workers to Flatcar 3815.2.0; however, we found that the boot.mount service fails if the VM gets rebooted:
Impact
This is impacting other services like systemd-boot-update or systemd-sysext, which also fail, in turn marking the node NotReady after reboot.
Flatcar version information:
Environment and steps to reproduce
Expected behavior
boot.mount should be running after restart
Additional information