xcvista opened this issue 4 years ago
On Tue, Aug 04, 2020 at 12:43:52AM -0700, Max Chan wrote:
> Is it possible to mount an APFS Fusion Drive using this yet? You may want to look into how `bcache` implemented the binding of two devices into one for its cache system.
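For context, this is roughly how `bcache` binds a fast and a slow device into one block device. The device names below are hypothetical placeholders; the exact UUID comes from the `make-bcache` output on your system.

```shell
# Hypothetical devices: /dev/sdb = SSD (cache), /dev/sdc = HDD (backing).
# bcache binds the two into a single combined device, /dev/bcache0.

make-bcache -B /dev/sdc    # format the slow device as the backing device
make-bcache -C /dev/sdb    # format the fast device as the cache device

# Attach the cache set to the backing device
# (use the cache-set UUID printed by make-bcache):
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# The combined device is then formatted and mounted like any other:
mkfs.ext4 /dev/bcache0
mount /dev/bcache0 /mnt
```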
I'm afraid not. I do intend to support it eventually, but it's only one of many missing features and I've been too busy lately, so it could take a while.
If you need this urgently, you should know that the fuse driver seems to support it already: https://github.com/sgan81/apfs-fuse
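A rough sketch of using that fuse driver, assuming a hypothetical device `/dev/sda2` holding the APFS container (build `apfs-fuse` from the linked repository first):

```shell
# Mount the APFS container read-only via FUSE; each APFS volume
# inside the container appears as a subdirectory of the mount point.
mkdir -p /mnt/apfs
apfs-fuse /dev/sda2 /mnt/apfs

ls /mnt/apfs             # list the volumes in the container

fusermount -u /mnt/apfs  # unmount when done
```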
> p.s. This is AFAIK the only Linux kernel filesystem module that supports easy, automatic tiered storage, and it also has good feature parity with `btrfs` (too bad `btrfs` lacks tiering; it even struggles on top of `bcache`). When this matures to the point of supporting both read/write access and Fusion Drives, I would seriously consider migrating my Linux boxes to it, as all of them have hybrid SSD and HDD setups, and `bcache` is wasteful for medium-capacity SSDs.
I'm glad to hear that you are interested in the project, but I don't think this will be practical any time soon. Much of the write support is reverse engineered and filesystems are always delicate, so it could take years before this module is reliable enough to be used for a root filesystem, if that ever happens.
Continuing on what eafer said, it's not really "safe" to write to a proprietary filesystem. That's why NTFS support on Linux is so limited: we've had over two decades to reverse engineer it and have pretty good support, but I haven't seen a single driver/module/program that mounts NTFS with reliable read-write support, at least on Arch Linux. I believe there is one for some other distro, but even that is limited. We simply don't know the EXACT way Windows does it, so it's hard to know whether a write even works correctly, and a bad one could corrupt the whole drive.
Now apply that to APFS. It has existed for less than five years, and it's the filesystem of a much less-used system. Because it's so young and has received so little attention, projects like this are still SUPER experimental.
So, TL;DR: reverse engineering something like a filesystem is really hard, and dangerous too. Since it's not the exact intended way to do things, writing to it is risky. Yeah.
Not to contradict you, but there's a commercial product that provides native-speed read-write support for NTFS, HFS+ and now APFS, written by Paragon. I've been using their free Linux version of the NTFS+HFS kmod for years now and never had any real issues with it (in terms of filesystem corruption). Their HFS+ implementation doesn't support compression, I don't know about the APFS driver.
I've never heard of tiered filesystems, isn't that something you could do with ZFS? ZFSonLinux is stable and widely used. It can definitely be set up to combine the strengths of SSDs and HDDs.
I've never heard of that. I've got an APFS filesystem available; if you could provide a link, that'd be cool.
Here you go:
> I've never heard of tiered filesystems, isn't that something you could do with ZFS? ZFSonLinux is stable and widely used. It can definitely be set up to combine the strengths of SSDs and HDDs.
@RJVB ZFS requires a lot of configuration, and is not bootable AFAIK even with an ext4 `/boot` partition. APFS tiering is pretty much painless and configuration-free.
On Friday August 07 2020 20:09:30 Max Chan wrote:
> @RJVB ZFS requires a lot of configuration
Only as much as you need. Maybe ask around a bit on #zfsonlinux? Very active community and lots of very knowledgeable people.
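For example, an SSD-plus-HDD ZFS setup takes only a few commands. This is a hedged sketch with hypothetical device names, and note that ZFS caching (L2ARC/SLOG) is not tiering in the APFS sense, since data is not migrated between tiers:

```shell
# Hypothetical devices: /dev/sda = HDD, /dev/nvme0n1p1 and p2 = SSD partitions.

zpool create tank /dev/sda              # HDD-backed pool
zpool add tank cache /dev/nvme0n1p1     # SSD partition as L2ARC read cache
zpool add tank log /dev/nvme0n1p2       # SSD partition as SLOG for sync writes

zpool status tank                       # verify the cache and log vdevs
```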
> , and is not bootable AFAIK even with an ext4 `/boot` partition.
Well, I must be living in an illusion for 5 years now :)
```
> df -hT
Filesystem   Type      Size  Used Avail Use% Mounted on
udev         devtmpfs  3.9G   12K  3.9G   1% /dev
tmpfs        tmpfs     790M  1.4M  789M   1% /run
bolaLNX      zfs       100G   48G   52G  48% /
none         tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
bolaLNX/tmp  zfs        52G  384K   52G   1% /tmp
none         tmpfs     5.0M     0  5.0M   0% /run/lock
none         tmpfs     3.9G  149M  3.8G   4% /run/shm
none         tmpfs     100M   24K  100M   1% /run/user
<snip>
```