MichaIng / DietPi

Lightweight justice for your single-board computer!
https://dietpi.com/
GNU General Public License v2.0

RPi | Support for + provide F2FS images #606

Open trajano opened 7 years ago

trajano commented 7 years ago

Is it possible to set it up so the boot disk is formatted as F2FS rather than ext4?

https://en.wikipedia.org/wiki/F2FS

I am checking this out to see if I can just convert my existing one. http://whitehorseplanet.org/gate/topics/documentation/public/howto_ext4_to_f2fs_root_partition_raspi.html

rekcodocker commented 1 year ago

Update: Tried to flash the card and then, without booting it, resize the F2FS partition, all on my laptop.

First tried with the KDE partition editor. It resizes the partition but not the filesystem. Same symptom: the partition grows to 14.7GB, but the filesystem does not resize. The filesystem now lives on a 14.7GB partition but still thinks it is 890MB in size.

Tried a cleanly flashed SD card again. Tried resize.f2fs from the command line. No error, but no resize either: it reports the filesystem is 890MB in size (which is correct, as I had not yet enlarged the partition).

I am at a loss; I don't know how to resize this filesystem.

MichaIng commented 1 year ago

Does it not resize automatically on first boot? The code for that is in place.

After resizing the partition, you need to inform the kernel about it:

partprobe /dev/mmcblk0
partx -u /dev/mmcblk0

Two methods, just to be sure.

rekcodocker commented 1 year ago

It should have... but it didn't. The partition was resized but the filesystem was not. I also could not resize the filesystem on my laptop. See previous post for the output of the resize.f2fs command.

Workaround for now:
I rsynced all the files from the filesystem, deleted the fs, created a new f2fs and rsynced all the files back to it.
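
Roughly, in shell terms, what I did on my laptop (a minimal sketch; the SD card showed up as /dev/sdb with the rootfs on partition 2, and the mount point and backup path are just examples):

# copy everything off the old filesystem
mkdir -p /mnt/sd /tmp/rootfs-backup
mount /dev/sdb2 /mnt/sd
rsync -aHAX /mnt/sd/ /tmp/rootfs-backup/
umount /mnt/sd

# recreate F2FS, now spanning the full partition
mkfs.f2fs -f /dev/sdb2

# copy everything back
mount /dev/sdb2 /mnt/sd
rsync -aHAX /tmp/rootfs-backup/ /mnt/sd/
umount /mnt/sd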

It is up and running on F2FS. I have installed docker-compose and Node-RED, and I will continue testing.

rekcodocker commented 1 year ago

Ok, tried again. Clean SD card, 16GB.

After initial login, it updates and installs minimal software. Then I get to the command prompt.

I see this: df -h indicates the root filesystem on /dev/root is 890MB in size with 139MB available, i.e. 85% used.

parted shows the partition information: the partition is 15.8GB and the filesystem is f2fs.

Started dietpi-drive_manager and ran filesystem repair (requires reboot) -> no change.

Started dietpi-drive_manager and ran filesystem resize (requires reboot) -> no change.

Rebooted again. -> no change.

Partition = 15.8GB, so it was resized. Filesystem = 890MB, so it was not resized.

It is not a failure to re-read the partition info; the state persists after a reboot.

Did you see that I got the same numbers when I inserted the SD card into my laptop? I also tried resize.f2fs there, and it did not grow the filesystem into the resized partition. If it doesn't work on my laptop, maybe the same error happens in DietPi.

So it is not about re-reading the partition table: the resize of the partition works, the resize of the filesystem does not. If it were only a re-read problem, I could insert the SD card into my laptop and see a 15GB filesystem; instead I see 890MB.

MichaIng commented 1 year ago

fsck cannot resize a filesystem, that is expected. So resize.f2fs failed. Can you show the output of:

cat /var/tmp/dietpi/logs/fs_partition_resize.log
resize.f2fs "$G_ROOTFS_DEV"
mount -o remount,ro /
resize.f2fs "$G_ROOTFS_DEV"
mount -o remount,rw /

rekcodocker commented 1 year ago

I don't think I can run these on the Pi: I can't resize an active partition, and I can't unmount the partition and then run resize.f2fs, because the tool lives on the very partition I just unmounted.

But I got this output when I tried to resize it on my laptop:


sudo resize.f2fs /dev/sdb2 
Info: [/dev/sdb2] Disk Model: STORAGE DEVICE  
Info: Segments per section = 1
Info: Sections per zone = 1
Info: sector size = 512
Info: total sectors = 30845952 (15061 MB)
Info: MKFS version
  "Linux version 5.15.0-1031-azure (buildd@lcy02-amd64-010) (gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #38-Ubuntu SMP Mon Jan 9 12:49:59 UTC 2023"
Info: FSCK version
  from "Linux version 5.15.0-1031-azure (buildd@lcy02-amd64-010) (gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #38-Ubuntu SMP Mon Jan 9 12:49:59 UTC 2023"
    to "Linux version 5.15.0-58-generic (buildd@lcy02-amd64-101) (gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023"
Info: superblock features = 0 : 
Info: superblock encrypt level = 0, salt = 00000000000000000000000000000000
Info: total FS sectors = 1826816 (892 MB)
Info: CKPT version = 223718b2
Info: Duplicate valid checkpoint to mirror position 1024 -> 512
Info: Write valid nat_bits in checkpoint
        Error: Device size is not sufficient for F2FS volume, more segment needed =12534

Can you replicate this? I got this three times on two separate SD cards (same type). Or does this work for you?

MichaIng commented 1 year ago

Um: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1002034. It looks like this was solved in Bookworm but never for Bullseye 😞.

I'll generate a Bullseye image.

rekcodocker commented 1 year ago

Nicely found, well done!

In the meantime, my manually resized image is running happily right now.

Impressed that you could generate a working F2FS image so quickly, truly impressed. I take it you have this automated because it was so quick. I have several Raspberries running on an F2FS image which I had to convert from ext4 first, a tedious process.

Is it an option to include this in the builds and also the download options? (with the tag 'testing' attached?)

Is it an option (and this is a long-standing wish of mine; I was never able to set it up myself) to have zstd compression enabled on F2FS to improve performance of the SD card?

MichaIng commented 1 year ago

I take it you have this automated because it was so quick.

Yes, exactly. We now have build scripts and trigger and run them via GitHub Actions. General support for F2FS, as said, was added a while ago already. Not sure why I forgot to mention it here 😅.

Is it an option to include this in the builds and also the download options?

Yes, I can add this to the overall build list we trigger after a DietPi release. We can add them to the download page as well. But I am not sure how to deal with the F2FS bug: there are no backports, and sadly the package from Bookworm depends on glibc 2.34, hence cannot be installed on Bullseye. That expansion needs to be done after flashing, from a remote system with sufficiently recent f2fs-tools, doesn't sound so great. We could, however, provide Bookworm images only. As F2FS is not well tested anyway, having it combined with Debian testing should be fine 😄? Bookworm has reached its first freeze stages and will be released this August, so it is mostly fine; it runs in production on my home server and on the dietpi.com server as well.

Is it an option (and this is a long-standing wish of mine; I was never able to set it up myself) to have zstd compression enabled on F2FS to improve performance of the SD card?

This is a mount option, I guess? Does it automatically compress existing data when set, or only newly written data? I have generally wanted to add custom mount option support to dietpi-drive_manager, but a larger task is needed for that script first: not overwriting the whole /etc/fstab, but only the individual entries that you edit. But

to have zstd compression enabled on F2FS to improve performance of the SD card?

This would reduce performance on the SD card, not improve it, since compression/decompression needs to be done on every write/read action, or do I misunderstand something?

rekcodocker commented 1 year ago

If you add it, it should definitely come with a warning label and a 'how to work around the issue' note. It is really unusable otherwise, as people will run out of disk space almost immediately and start filing bugs.

However, this may help: the bug seems to have appeared in 2021, since the mentions of it on different forums date from around then.

Here is an interesting remark: https://bugs.archlinux.org/task/71801 says the error occurs in f2fs-tools 1.14.0-2 and that downgrading to 1.14.0-1 avoids it. If that works, you could pin the package at version 1.14.0-1 for the time being and return to the latest once the issue is fixed.

Compression: It compresses only new data, so you have to do this when you create the image to reap the full benefits.
I have tried this since f2fs-tools 1.14 came out supporting on-the-fly compression. My problem was that I could not get it to boot. If you are interested in looking into it, I took clues from here: https://wiki.archlinux.org/title/F2FS . It states that you need to use a specific option when creating the filesystem as well as when you mount it. Apparently at creation time you can state -O extra_attr,compression,compress_algorithm=zstd and when you mount it: -o compress_algorithm=zstd:6,compress_chksum

Note the inconsistencies here: when creating, you use a capital 'O', the options are preceded by 'extra_attr' and they include 'compression' as well as 'compress_algorithm'. The mount options are slightly different.
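
A sketch of the two steps under those assumptions (untested by me in this exact combination; the mkfs feature list below is the conservative subset I am confident about, the device and mount point are examples, and the mount options need a kernel recent enough for F2FS zstd compression):

# at creation time: enable the compression feature (capital -O, requires extra_attr)
mkfs.f2fs -f -O extra_attr,compression /dev/sdb2

# at mount time: activate zstd compression for newly written files
mount -t f2fs -o compress_algorithm=zstd:6,compress_chksum /dev/sdb2 /mnt/sd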

MichaIng commented 1 year ago

I saw the Arch bug report. However, note that 1.14.0-2 on Arch isn't equal to 1.14.0-2 on Debian: these are packaging suffixes. f2fs-tools was at version 1.14.0, and Arch as well as Debian released a second package based on the same source, but with changes to build flags, distribution/package files or such. And I guess the bug was fixed upstream with v1.15.0 and Arch backported the fix into their 1.14.0 package, adding another packaging suffix increment. So as long as Debian doesn't do the same, we won't see this fix in Bullseye.

-O/-o is btw expected. These are two different things: -O selects filesystem features at creation time (mkfs.f2fs), while -o sets mount options.

It will take another iteration to enable these features for our images, or rather for dietpi-drive_manager in the first place.

rekcodocker commented 1 year ago

Cool, and here I was thinking I was telling you something you didn't know... I will keep using the F2FS image that I got working; it will be in a 24/7 setting running a Zigbee controller, Mosquitto, Node-RED and a UniFi controller in Docker. Possibly it will also run a WireGuard server. For a Raspberry Pi 3, that is busy enough.

I like the lean DietPi image, the impressive toolset and the software set. I also love that it has been tailored to the Raspberry Pi, that is: slimmed down, with zram swap and RAM logs; I don't understand why Raspberry Pi OS does not provide this out of the box. I'll try out some of the software packages, very interesting.

For now, thank you very much for your quick answers and the image.

rekcodocker commented 1 year ago

Update: it's 22-2 and all is well. I have a DietPi based on F2FS which is running Node-RED, Zigbee2MQTT, the Mosquitto MQTT broker and the Pi-hole DNS blocker.

Some remarks not specifically related to F2FS:

MichaIng commented 1 year ago

Normally the zram-configfile is in /etc/default.

We do not use zram-tools, which is what this config is for.

PiHole as installed by DietPi writes too much to SD for my taste

Still less than vanilla Pi-hole, isn't it?

This can be configured to write infrequently

How can this be configured?

tmpfs - in my opinion it's perfectly fine to lose this data at reboot.

By default, Pi-hole keeps 1 year of logs. We also think that such long-term logs are rarely used and do not justify maintaining a single GiB-sized database file, which also makes every lookup take very long (on slow hardware). The 48h are a compromise so that users are still assured to see the logs from the last days whenever they access the Pi-hole dashboard. Instead of moving them to tmpfs, you should rather disable the long-term query log completely if you don't need it. The last 24h for the dashboard gauges are kept in a different in-RAM store anyway, if I remember right.
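
If I remember the FTL config keys correctly, disabling it is a one-liner in /etc/pihole/pihole-FTL.conf; there is also a key to merely reduce the write frequency (the values below are examples):

# /etc/pihole/pihole-FTL.conf
MAXDBDAYS=0      # disable the long-term query database entirely
# or keep it, but flush it to disk only once per hour instead of every minute:
# DBINTERVAL=60.0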

mosquitto out of the box could be configured without logging by default

But it logs to the journal (RAM) only, doesn't it? No logging at all isn't good when issues appear, but is it possible to reduce the log level to warnings or errors and hide the common communication logs?

rekcodocker commented 1 year ago

Sorry for the late response.

In /etc/pihole/pihole-FTL.conf you can add these lines:

DBFILE=/tmp/pihole-FTL.db
LOGFILE=/tmp/pihole-FTL.log

This makes the FTL database survive a restart, though not a reboot (I find this no problem at all: it just stores data for 2 days to display on the dashboard, nothing that needs to survive a reboot IMHO).

MichaIng commented 1 year ago

The 24h short-term logs do also survive a service restart, don't they? If those 48h logs do not survive a reboot or crash because they are moved into a tmpfs as well, doesn't it make more sense to disable the "long-term" logging completely and just use the dedicated 24h logs?

I thought you meant that there is a way to reduce the frequency at which long-term logs are actually written to the database on disk, like a larger Pi-hole-side write buffer on top of the smaller native filesystem write buffer. However, this really only matters if you have a spinning disk and have all other frequent writes to it reduced as well, so that it can go into idle/spindown mode. On an SD card, the native filesystem/disk write buffer should be sufficient to minimise write overhead when data packets are smaller than filesystem blocks or disk sectors.

However, we are actually completely off-topic, as this issue is about F2FS images. The only problem left there is the filesystem expansion on first boot.

rekcodocker commented 1 year ago

We are going off topic (perhaps open a separate issue), but you are of course right. Everything is configurable.

But when I looked into that, I could find no interval at which I would want a statistics-only database to be written to my precious SD card.

It's nice to have statistics, but they are reliable enough when they're on tmpfs.

In case of a - very uncommon - reboot, I would start collecting stats immediately and have full statistics again after 24h. And that is only in the case of a reboot. That is simply no problem to me.

rekcodocker commented 1 year ago

Update: My test system has an uptime of 98 days today. It runs Zigbee2MQTT, Node-RED and Pi-hole.

A dump of dmesg shows no mention of any f2fs issue (except for when it was mounted at boot).

Shut it off and ran an fsck.f2fs. No errors or warnings.

Rebooted - no errors/warnings in dmesg.

TL;DR: July 16 and all is well.

MichaIng commented 1 year ago

Thanks. I really only need to find time to implement F2FS filesystem expansion on first boot.
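
In essence it should boil down to the steps discussed in this thread, something like the following untested sketch (assuming an SD card at /dev/mmcblk0 with the rootfs on partition 2):

# grow the root partition to the end of the SD card (keep start, maximise size)
echo ',+' | sfdisk --no-reread -N2 /dev/mmcblk0

# inform the kernel about the changed partition table
partprobe /dev/mmcblk0
partx -u /dev/mmcblk0

# remount the rootfs read-only for the resize, as suggested earlier,
# then grow the filesystem and remount read-write
mount -o remount,ro /
resize.f2fs /dev/mmcblk0p2
mount -o remount,rw /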

rekcodocker commented 1 year ago

... and then find out if inline compression can be used.

I have not gotten that to work - F2FS needs to be created with that option as well as mounted with it; preferably during the installation.

It's not necessary for this current issue, but it would allow us to cash in on a triple bonus:

MichaIng commented 1 year ago

But mostly: It would improve read/write performance on the SD card and Pi-performance overall.

You are aware that compression and decompression cost CPU time and hence decrease performance? If the bottleneck is the SD card R/W speed, then you may be right, at least for newer RPi models. However, on newer RPi models, one can and should use a USB stick/drive to overcome the performance and lifetime issues of SD cards in general 😉. So yeah, it is good to have compression as an option, probably a pre-compressed image, but not as the only or default option.

eadmaster commented 1 year ago

it is good to have compression as an option, probably a pre-compressed image, but not as the only or default option.

Rather than providing alternative images, which may confuse some users, I think the best solution could be a script that converts the root filesystem from ext4 to F2FS (either live, or statically before flashing if that is not possible).

MichaIng commented 1 year ago

Rather than providing alternative images, which may confuse some users, I think the best solution could be a script that converts the root filesystem from ext4 to F2FS (either live, or statically before flashing if that is not possible).

I don't think that is possible either way: one cannot shrink a mounted filesystem, and one cannot convert the filesystem the system is currently running from.

Given all the hassle and risk, I think it is much better to provide additional images. But surely they should not all be listed on the main download page, as there are already too many RPi images. My idea is to have an extra section in our docs about alternative images, with a link to the raw download directory from there.

eadmaster commented 1 year ago

what about this procedure?

  1. resize the current ext4 partition to the minimum size
  2. create a new f2fs partition in the remaining space
  3. copy all the files from the ext4 rootfs to the new f2fs partition
  4. set the f2fs partition as bootable
  5. reboot and reformat the ext4 partition (could be reused for user data)

I've also found this tool that may help: https://github.com/cosmos72/fstransform

MichaIng commented 1 year ago

About the tool, as I expected:

Transforming the current root directory does not work. For that, you should boot from a different installation (for example a live CD, DVD or USB).

On your idea before you edited the post: depending on how the data is physically located on the disk, shrinking the filesystem may not be possible, or not possible to a sufficient degree, independent of the disk size. Doing it with the help of a second drive is already possible via dietpi-drive_manager on the RPi. The bootable flag is btw not required nowadays, especially not for the root filesystem on the RPi, which boots from the /boot FAT filesystem.

eadmaster commented 1 year ago

On your idea before you edited the post: depending on how the data is physically located on the disk, shrinking the filesystem may not be possible, or not possible to a sufficient degree, independent of the disk size.

If the available free space is not sufficient, the procedure will simply fail at step 3 and restore the partitions to how they were before step 1.

Rootfs takes less than 1GB in my NanoPi R1 image, so I expect even a 2GB microSD should be sufficient.

MichaIng commented 1 year ago

If the available free space is not sufficient, the procedure will simply fail at step 3 and restore the partitions to how they were before step 1.

At least that part is simple via resize2fs. No need to touch the partition: just resize2fs -M /path/to/device, check whether the size has been reduced sufficiently, else max out again via resize2fs /path/to/device. However, this fails for the same reason at the very first step: resize2fs can expand mounted filesystems, but it cannot shrink them. And you cannot unmount the root filesystem.
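
On a second system with the filesystem unmounted, that check could look like this (device name is an example):

# resize2fs insists on a freshly checked filesystem
e2fsck -f /dev/sdb2
# shrink to the minimum possible size
resize2fs -M /dev/sdb2
# inspect the result (block count times block size gives the new size)
dumpe2fs -h /dev/sdb2 | grep -E 'Block count|Block size'
# if it did not shrink enough, grow it back to fill the partition
resize2fs /dev/sdb2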

Rootfs takes less than 1GB in my NanoPi R1 image, so I expect even a 2GB microSD should be sufficient.

If we provided a tool, admins could use it after having installed several pieces of software, so we cannot count on that.

However, as said, as far as I currently see, it cannot be done, for two similar reasons: one cannot shrink mounted filesystems with resize2fs, and one cannot convert the root filesystem with fstransform. So the easiest is probably to use fstransform (or any other method) on a second Linux system. But instead of downloading an image, attaching it to another system, transforming it and attaching it to the final system, it is much simpler and safer to just download an alternative image which is formatted correctly OOTB.
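
For completeness, on such a second system the invocation would presumably be just the following (device name is an example, and whether fstransform accepts f2fs as a target filesystem is something I have not verified):

fstransform /dev/sdb2 f2fs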

rekcodocker commented 1 year ago

it is good to have compression as an option, probably a pre-compressed image, but not as the only or default option.

Rather than providing alternative images, which may confuse some users, I think the best solution could be a script that converts the root filesystem from ext4 to F2FS (either live, or statically before flashing if that is not possible).

and @MichaIng:

I have tried this. There exists an ext4-to-F2FS conversion script for the Raspberry Pi. It makes a copy of the filesystem, then boots from it, creates a new F2FS filesystem, and copies the contents of the old filesystem back onto the new F2FS one. It uses an external USB drive for temporary storage. It is clever, but also time-consuming, and it requires several reboots during the conversion.

I would not recommend it as a supported procedure. Main reasons:

rekcodocker commented 1 year ago

But mostly: It would improve read/write performance on the SD card and Pi-performance overall.

You are aware that compression and decompression cost CPU time and hence decrease performance? (...)

It'd be interesting to test. On a single-core Raspberry Pi, read speed is, say, about 15-30 megabytes per second, and write speed is worse. And LZ4 and zstd are fast. On multi-core systems, one core could do the reading while others decompress; I expect it to be an all-round improvement.

Wear & tear still stands. I did not know that the advice was to use USB; not sure if I will do that. A USB drive pokes out, so I am always inclined to use the tiny microSD card in its safe little slot.