RJVB / afsctool

This is a version of "brkirch"'s afsctool utility that allows end-users to leverage HFS+ compression.
https://brkirch.wordpress.com/afsctool
GNU General Public License v3.0

Failed to build on M1 - Monterey 12.0.1 with Xcode 13.1 #47

Closed hstriepe closed 2 years ago

hstriepe commented 2 years ago

Build on Intel with Mojave worked fine. The referenced make command is not quite correct: it has to be run from the root directory.

But when I tried to build on an M1 with Monterey 12.0.1 and Xcode 13.1, it failed. CMake was happy with the prerequisites. Brew installs ARM64 packages in a different location. See the attached file for the build errors; they are not related to PKGBUILD.

I presume this is currently not set up for universal anyway.

Build log.txt

Originally posted by @hstriepe in https://github.com/RJVB/afsctool/issues/46#issuecomment-977014935

RJVB commented 2 years ago

I recall seeing those errors about ::isinf etc. and finding a fix before, just long enough ago to have forgotten about it.

Brew install locations on ARM64 are different (sure... why am I not surprised) but from what I understand they're picked up by CMake, so I don't have to update my build instructions for that. I do not understand why my instructions for using an in-tree build directory didn't work for you. They should; this approach has always been the preferred way to build using CMake.
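
For reference, the in-tree build I mean is just the standard CMake sequence run from a build directory inside the checkout, roughly like this (a sketch; see the README for the exact options):

    cd afsctool
    mkdir -p build && cd build
    cmake -DCMAKE_BUILD_TYPE=Release ..
    make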

hstriepe commented 2 years ago

The reason why Homebrew installs into /opt/homebrew on M1 is that the code is ARM64. This way you can have a Rosetta 2 version in /usr/local/, which I do using an ibrew alias. In-tree did not work on either system, but doing it from the root directory was OK on Intel; I got a good build. Since I presume it would still be x86_64 on M1, it does not make a difference.

RJVB commented 2 years ago

See https://github.com/RJVB/afsctool/issues/37#issuecomment-754994785 :

Once I finally RTFM and exported PKG_CONFIG_PATH=/usr/local/opt/zlib/lib/pkgconfig, my "no member named 'xxx' in the global namespace" errors were gone!

You do need to set PKG_CONFIG_PATH. If HB/arm64 installs to /opt/homebrew, you'd probably have to do export PKG_CONFIG_PATH=/opt/homebrew/opt/zlib/lib/pkgconfig.

(you'll need to verify if that's indeed the proper path, and not e.g. /opt/homebrew/zlib/lib/pkgconfig - the double /opt seems a bit unexpected to me.)
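
Concretely, something like this before re-running cmake should do (a sketch; on a stock arm64 Homebrew, brew --prefix zlib should print the right prefix to use):

    # ask Homebrew where it keeps zlib and point pkg-config at it
    export PKG_CONFIG_PATH="$(brew --prefix zlib)/lib/pkgconfig"
    # or hard-coded, if /opt/homebrew/opt/zlib is indeed where it lives:
    # export PKG_CONFIG_PATH=/opt/homebrew/opt/zlib/lib/pkgconfig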

Please report back if this fixes your build error.

hstriepe commented 2 years ago

I am not sure what changed, but I went OCD and made another attempt that worked.

PKG_CONFIG_PATH is now part of my zsh configuration files. I deleted the repo and started over. Everything worked as described, including building from the build directory, and the result is an ARM64 binary! Errors along the way might have screwed with the repo.

Thank you for your help, this is great.

You might want to add the ARM64 instructions into the README.md and point out that it builds natively.

Question: is this meta tag on a file or directory basis? Would a tagged directory compress all additions or do I have to periodically iterate over changed files?

I suspect it is the latter, and I might set up a daemon task driven by a configuration list of directories when I have time.

My log is attached FYI: afsctool_build_log.txt

RJVB commented 2 years ago

PKG_CONFIG_PATH is now part of my zsh configuration files.

Is that a HomeBrew-related path that would work for others too (with a stock HB install)?

You might want to add the ARM64 instructions into the README.md and point out that it builds natively.

Why do you think I asked about what value to set PKG_CONFIG_PATH to? ;)

Question: is this meta tag on a file or directory basis? Would a tagged directory compress all additions or do I have to periodically iterate over changed files?

File. HFS compression doesn't work like NTFS or even ZFS compression; it's a post-hoc operation except for a handful of applications that know how to write compressed files.

However, you can just feed the directory to afsctool and it will compress only the files it can compress. Just be aware that files are replaced, so doing this often isn't necessarily kind to your SSD or to the fragmentation level on your HDD. (The latter should be alleviated a little by the -S option, which causes the files to be compressed in smallest-to-largest order, increasing the chance that small bits of free space are filled first and freeing up space that can then be used by the larger files.)
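
Concretely, something like this is enough (a sketch; check afsctool's usage output for the exact flags your build supports):

    # compress everything compressible under a directory, smallest files first
    afsctool -c -S /path/to/directory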

hstriepe commented 2 years ago

Is that a HomeBrew-related path that would work for others too (with a stock HB install)?

Yes, it's the default install location for Homebrew on ARM. Brew installs its files in /opt/homebrew on Apple Silicon. Their installs are NOT universal.

I ended up creating a universal afsctool using lipo. Makes it easier to handle across the multiple systems I use. I think a periodic daemon checking a list of directories should do the trick.
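
For anyone else doing the same, the glue step is roughly this (the per-arch binary names are just placeholders for whatever the two builds produced):

    # combine the two single-architecture builds into one universal binary
    lipo -create afsctool.x86_64 afsctool.arm64 -output afsctool
    # sanity check: should list both architectures
    lipo -info afsctool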

Does afsctool recurse through a list of directories from the top down or does it stay at one level? This would be just the thing for repos. Do you know whether APFS groups small file "fragments" into one block or is there a minimum storage allocation? This would impact a repo with hundreds of small header files. I have googled but ran out of time to find the real answer.

BTW, LateNiteSoft used to have a preference pane called Clusters that has been discontinued. It had a very nice UX for this sort of thing. We should ask them to open-source it; your code could be plugged in.

LateNiteSoft Clusters

gingerbeardman commented 2 years ago

I added my own small script as a daemon that runs afsctool on new files added to my /Applications and other directories. It works well, and I've been running it for years now.
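
The idea is simple; a minimal sketch of that kind of job might look like this (the directories, schedule, and -mtime window are placeholders, not my actual script):

    #!/bin/sh
    # compress files added or changed in the last day; run daily via launchd or cron
    for dir in /Applications "$HOME/Projects"; do
        find "$dir" -type f -mtime -1 -print0 | xargs -0 afsctool -c
    done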

A GUI/prefpane would be nice for greater adoption, though support issues would of course also increase. I guess that's why LateNiteSoft charged for Clusters (I used to use it as a paid customer), and when they introduced a bug that resulted in data loss they closed up shop.

hstriepe commented 2 years ago

Based on otool, the binary is not dynamically linked against any Brew libs. But building universal out of the gate would be a PITA given the way Homebrew is set up.

RJVB commented 2 years ago

?? If your build went as it should, you should have a binary that links to libraries from HB, if that's what you asked for (in the latest version that would probably just be libz.dylib).

I have no particular interest in building universal but you're right that it'd be complicated to set up at the CMake level for HomeBrew. You'd need an approach like MacPorts's muniversal. Fortunately this project builds only a single executable that's of general interest, so running lipo by hand (or writing a build script) shouldn't be a big deal.
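
Such a script could boil down to two per-architecture CMake builds glued together at the end, roughly (an untested sketch; the location of the executable inside each build dir is an assumption, and each configure step would also need PKG_CONFIG_PATH pointing at the matching Homebrew prefix):

    # build each architecture in its own build directory
    cmake -S . -B build-x86_64 -DCMAKE_OSX_ARCHITECTURES=x86_64
    cmake --build build-x86_64
    cmake -S . -B build-arm64 -DCMAKE_OSX_ARCHITECTURES=arm64
    cmake --build build-arm64
    # merge the two executables into a universal binary
    lipo -create build-x86_64/afsctool build-arm64/afsctool -output afsctool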

hstriepe commented 2 years ago

?? If your build went as it should, you should have a binary that links to libraries from HB, if that's what you asked for (in the latest version that would probably just be libz.dylib).

It looks like it is statically linked against Homebrew's *.a libs, and dynamically only against the system. You probably already know Mach-O binaries carry relative or absolute library paths that can be displayed.

harald@triton bin % otool -L afsctool
afsctool:
    /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.11)
    /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices (compatibility version 1.0.0, current version 1122.11.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1292.60.1)
    /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 1770.255.0)

Otherwise gluing a universal binary would be pointless, since it would require the Homebrew dylibs it links against to be present in their respective locations. Of course, you can patch the name references to be relative to the executable and bundle the dylibs, provided they in turn are not linked against other non-system libs.
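
A sketch of that patching with install_name_tool (the Homebrew path and library name are assumptions; check the real load commands with otool -L first):

    # copy the dylib next to the executable, then rewrite its load command
    cp /opt/homebrew/opt/zlib/lib/libz.1.dylib .
    install_name_tool -change /opt/homebrew/opt/zlib/lib/libz.1.dylib @executable_path/libz.1.dylib afsctool
    # the modified binary then needs to be re-signed (ad hoc) on Apple Silicon
    codesign --force -s - afsctool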

RJVB commented 2 years ago

Otherwise gluing a universal binary would be pointless, since it would require the Homebrew dylibs it links against to be present in their respective locations.

A priori no, and yes.

A UB is just an archive that contains binaries for a number of supported architectures from which the dynamic loader chooses the appropriate binary. I wouldn't be surprised if you could create something that has completely different binaries for each architecture.