agowa opened this issue 6 years ago
Hello @agowa338, thanks for coming here. Can you make a working proof-of-concept?
@agowa338 here is some useful doc https://www.systutorials.com/5217/how-to-statically-link-c-and-c-programs-on-linux-with-gcc/
Recommend using musl libc instead of glibc, for various reasons like license (MIT vs. LGPL) and binary size (527k vs. 8M).
musl doesn't implement the entire feature set of glibc.
Since AppImage has no plan to support it
What feature have we stated we will not support? This issue is about statically linked binaries. Since this is something done at build time, and the tools that make AppImages are used with already built binaries, I don't understand what your criticism here is.
Still don't fully understand. Just trying to clarify things here so future readers know what we all mean.
What you mean is the AppImage runtime. It is linked dynamically to glibc because it has to be linked to FUSE dynamically to be compatible with the system FUSE implementation.
If we ever get rid of FUSE, we can make a completely static runtime. But right now, I don't think this is going to work out. But you're happily invited to make a case study and prove me wrong!
The next AppImage type should fix this issue.
Good feature. Since AppImage has no plan to support it
Actually we are very interested in supporting this, as it would allow us to close https://github.com/AppImage/AppImageKit/issues/1015 - correct? Let's collaborate :+1:
The next AppImage type should fix this issue.
Shall we state "get rid of FUSE" as a goal?
can't run without glibc such as Alpine
Do you think we can change it just so much that it can at least work on Alpine when libc6-compat is installed there? Then it would not even have to be fully static. See https://github.com/AppImage/AppImageKit/issues/1015
A completely statically linked app would also allow creating Docker images without a userland, e.g. only the statically linked app without any Linux userland surrounding it...
That's not only smaller, but also decreases the attack surface.
@TheAssassin would that be something that you think would be doable if we would rewrite the runtime in, say, Rust? Wouldn't the runtime be rather large then because it would have to statically link libfuse? (How large would it become?)
Or should we try to get rid of FUSE altogether for the future type 3 AppImages?
You don't need the entire libfuse; you just need a few bits. I've read a bit into fuse-rs; it doesn't seem that complex to me.
The size is secondary; we can save bloat elsewhere (e.g., by using musl libc properly thinned down to the essential bits, etc.).
Getting rid of FUSE would be awesome, but I have doubts it's all that easy.
Is there any limitation in runtime size?
No. No hard limitation. (We should try to make it as small and efficient as possible.)
-----------------------
statically linked loader (ELF)
-----------------------
squashfs image
-----------------------
should be sufficient since we can calculate the length of an ELF (and we are already doing it).
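For illustration, here is a minimal C sketch of that length calculation, assuming a 64-bit ELF whose section header table is the last thing in the file (a common heuristic; this is not the exact code the existing tools use):

```c
/* Minimal sketch: compute the size of a 64-bit ELF file from its header,
 * assuming the section header table is the last thing in the file.
 * The appended squashfs image would then start at exactly this offset. */
#include <elf.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    Elf64_Ehdr ehdr;
    if (!f || fread(&ehdr, sizeof ehdr, 1, f) != 1) {
        perror(argv[1]);
        return 1;
    }
    fclose(f);
    unsigned long long end = ehdr.e_shoff +
        (unsigned long long)ehdr.e_shnum * ehdr.e_shentsize;
    printf("ELF ends (squashfs starts) at offset %llu\n", end);
    return 0;
}
```

The runtime can then hand this offset to the squashfs mounting code so the filesystem is read starting right after the loader.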
By the way, here is a bare-bones static AppImage type 2 runtime written in Go:
https://github.com/orivej/static-appimage
This runtime uses zip rather than squashfs. It has the added benefit that any existing unzip tool should be able to extract it. (Maybe such AppImages should be named .AppImage.zip to make this more obvious.)
Don't use it for production yet since it may be lacking more advanced features like update information, embedded digital signatures, and such. But it shows that it is doable to make a static AppImage runtime using FUSE.
Go? How large is the final static runtime?
Depends on the architecture, around 2 MB: https://github.com/kost/static-appimage/releases
When you run upx -9 on it, you can bring it to under 1 MB.
What that other project is doing is really different from what we are doing. It's hardly comparable to our runtime. Any sort of size estimation based on that is too imprecise to tell us anything useful. The fact that their runtime is already way larger than ours doesn't really aid your point.
A fully statically linked FUSEless runtime would be great. But I don't see how this can be realized while keeping all the features and characteristics of the existing runtime.
Writing a runtime in Go is also pretty much a bad idea. It adds way too many uncontrollable dependencies. It's a huge mess. Our runtime is embedded in every AppImage. It needs to be absolutely bulletproof license-wise. Ideally, it's licensed as permissively as possible, as legally we cannot even safely assume that the resulting AppImage is not considered a derivative work of the runtime. This question hasn't been fully answered for the existing runtime. (Generally, any upcoming AppImage type needs to put a way higher effort into licensing questions.)
Or maybe a completely different approach can be taken:
Provide a modified version of glibc and musl libc that has the appimageRuntime embedded into it, by modifying the functions _start, dlopen and (optionally) open.
_start is modified so that the embedded appimageRuntime can parse the command-line arguments and set up the environment (unzip the files to a tmpfs, etc.).
dlopen and ld.so are modified so that dynamic libraries are searched for in the unzipped environment first.
open can be modified, if the program is closed source, so that reads of resources from absolute paths such as /usr are redirected to the unzipped environment.
If the program cannot be compiled against this libc, or is a shell script, then a more traditional approach can be used:
Add a header to the program that contains a runtime which decompresses the environment, including the modified libc, to a tmpfs and sets the environment variables LD_PRELOAD and LD_LIBRARY_PATH, so that the program/shell uses the modified libc and the bundled dynamic libraries.
The libc can then have its open function modified so that resources are loaded from the unzipped environment.
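For illustration, here is a minimal C sketch of the kind of open() redirection proposed above. It is not part of any existing libc or AppImage code; the APPIMAGE_PREFIX environment variable and the example path are made up for this sketch, and a real modified libc would hook its own open() internally rather than expose a separate function:

```c
/* Sketch of the proposed open() redirection: rewrite absolute /usr paths
 * into the unzipped AppImage prefix before doing the real open. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

static int redirected_open(const char *path, int flags, mode_t mode)
{
    const char *prefix = getenv("APPIMAGE_PREFIX"); /* e.g. /tmp/.mount_xyz (made up) */
    if (prefix && strncmp(path, "/usr/", 5) == 0) {
        char buf[4096];
        snprintf(buf, sizeof buf, "%s%s", prefix, path);
        if (access(buf, F_OK) == 0) /* only redirect if it exists in the prefix */
            return (int)syscall(SYS_openat, AT_FDCWD, buf, flags, mode);
    }
    return (int)syscall(SYS_openat, AT_FDCWD, path, flags, mode);
}

int main(void)
{
    /* Example: with APPIMAGE_PREFIX set, this open of a hypothetical
     * /usr/share/foo.txt is transparently served from the prefix. */
    int fd = redirected_open("/usr/share/foo.txt", O_RDONLY, 0);
    if (fd >= 0)
        close(fd);
    return 0;
}
```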
Edit:
I found that the interpreter and rpath of an ELF can be changed with NixOS/patchelf, so there is no need to use LD_PRELOAD and LD_LIBRARY_PATH for closed-source software unless it forbids any modification to the binary.
unzip the files
Unzip which files? AppImages are mounted, not extracted. This gives them their speed.
I was suggesting throwing away FUSE and using a compressed tar instead.
Extracting a compressed tar won't be a lot slower than squashfuse, while FUSE adds overhead to the application.
Every read/mmap of the executable or of resources bundled with the AppImage needs to go through FUSE, which requires the process to wait for at least two context switches instead of just one.
@probonopd I've done a naive benchmark comparing the squashfuse used by AppImage with tmpfs, using nvim.appimage:
[nobodyxu@gentoo:/tmp]$ time tar cf squashfuse .mount_nvim.aDB5FO5/
real 0m0.166s
user 0m0.004s
sys 0m0.038s
[nobodyxu@gentoo:/tmp]$
[nobodyxu@gentoo:/tmp]$ time cp -r .mount_nvim.aDB5FO5/ copied_tmp
real 0m0.040s
user 0m0.004s
sys 0m0.029s
[nobodyxu@gentoo:/tmp]$ time tar cf tmp copied_tmp/
real 0m0.023s
user 0m0.004s
sys 0m0.019s
[nobodyxu@gentoo:/tmp]$ time cp -r copied_tmp/ copied_tmp2/
real 0m0.025s
user 0m0.004s
sys 0m0.021s
.mount_nvim.aDB5FO5 is where the nvim.appimage is mounted. I found that by looking into /proc/<pid>/.
You can see that operations performed on tmpfs are much faster than on squashfuse.
Edit:
The benchmark above tests the cold run.
The warm run is much faster, but still slower than tmpfs:
[nobodyxu@gentoo:/tmp]$ tar cf squashfuse .mount_nvim.ax0xNEd/
[nobodyxu@gentoo:/tmp]$ rm squashfuse
[nobodyxu@gentoo:/tmp]$ time tar cf squashfuse .mount_nvim.ax0xNEd/
real 0m0.035s
user 0m0.005s
sys 0m0.023s
[nobodyxu@gentoo:/tmp]$ time tar cf squashfuse .mount_nvim.ax0xNEd/
real 0m0.034s
user 0m0.012s
sys 0m0.016s
If I understand it right, it looks like https://github.com/eth-cscs/spack-batteries-included provides a solution for this. Should we backport these changes into the AppImage runtime?
Differences and improvements over the AppImage runtime:
- spack.x uses zstd for faster decompression
- spack.x itself is an entirely static binary
- spack.x does not need to dlopen libfuse.so
Reference: https://github.com/AppImage/AppImageKit/issues/1120#issuecomment-1060331710 cc @haampie
For those interested in running AppImages in musl containers like me (namely, those based on Alpine), a solution that works today is to extract the AppImage to the container filesystem while building it (for example, with a COPY instruction in the Dockerfile, after running ./Whatever.AppImage --appimage-extract).
If the AppImage was generated with a tool like appimage-builder, which bundles every dependency into the AppImage (including the glibc used by the payload), the resulting AppRun should work flawlessly on pretty much anything you throw at it.
The idea stated above is also applicable in any scenario in which it is feasible to extract the AppImage on a glibc system before running it on a (possibly musl) system.
Turns out that @haampie has implemented basically everything we always wanted:
- uses zstd for faster decompression
- is an entirely static binary
- does not need to dlopen libfuse.so; hence works on Ubuntu 22.04 which no longer ships libfuse2
Which also means:
I have tested it successfully:
Here it is: https://github.com/AppImage/AppImageKit/releases/tag/static
Seems to solve
The only question is: How can we build the static runtime without needing Spack, containers and all of that. Ideally on Alpine Linux with musl libc.
I've experimented a bit with static runtimes built in Alpine Linux with musl libc. https://github.com/probonopd/static-tools/releases
Only half the size!
Proof of concept on Ubuntu 22.04 which no longer ships libfuse2:
mkdir -p hello.AppDir/
# sudo apt install hello
cp $(which hello) hello.AppDir/AppRun
# sudo apt install squashfs-tools
chmod +x hello.AppDir/AppRun
mksquashfs hello.AppDir hello.squashfs -comp zstd
cp '/home/ubuntu/Downloads/runtime-fuse2-x86_64' hello.AppImage
cat hello.squashfs >> hello.AppImage
chmod +x hello.AppImage
./hello.AppImage
Hello, world!
ubuntu@ubuntu:~$ ls -lh hello.AppImage
-rwxrwxr-x 1 ubuntu ubuntu 570K May 2 18:40 hello.AppImage
LibreOffice proof of concept on Ubuntu 22.04 which no longer ships libfuse2:
wget -c -q https://libreoffice.soluzioniopen.com/stable/fresh/LibreOffice-fresh.basic-x86_64.AppImage
ls -lh LibreOffice-fresh.basic-x86_64.AppImage
# 259M
chmod +x LibreOffice-fresh.basic-x86_64.AppImage
#####################
./LibreOffice-fresh.basic-x86_64.AppImage
dlopen(): error loading libfuse.so.2
AppImages require FUSE to run.
You might still be able to extract the contents of this AppImage
if you run it with the --appimage-extract option.
See https://github.com/AppImage/AppImageKit/wiki/FUSE
for more information
#####################
./LibreOffice-fresh.basic-x86_64.AppImage --appimage-extract
mksquashfs squashfs-root/ LibreOffice-fresh.basic-x86_64.squashfs -comp zstd
cp '/home/ubuntu/Downloads/runtime-fuse2-x86_64' LibreOffice-fresh.basic.AppImage
cat LibreOffice-fresh.basic-x86_64.squashfs >> LibreOffice-fresh.basic.AppImage
ls -lh LibreOffice-fresh.basic.AppImage
# 242M
./LibreOffice-fresh.basic.AppImage
# WORKS :-)
This is experimental. Not the real deal. Don't use it in production just yet. Still need to sort some things out. But a promising start.
This is really exciting! Will libappimage have to be updated to read from them?
We are still using squashfs, so as long as the squashfs in libappimage supports zstandard, it should work.
It further turns out that in the process of making the runtime static, @haampie unfortunately removed functionality from it that now needs to be added back. I have started this work over at https://github.com/probonopd/static-tools/pull/23.
Experimental static AppImages at https://github.com/probonopd/go-appimage.
I don't think using Go for this is a good idea; Go binaries tend to link in almost the entire Go runtime, making the binaries huge in comparison to C with musl libc.
Ah yes, I could have left a note that I removed / changed a few features :grimacing:.
In my experience older versions of alpine produce smaller binaries: https://twitter.com/stabbbles/status/1491806077939171339 and what I never tried is clang's -Oz, which may or may not help.
Note: even when you link libfuse statically, it will still execute the fusermount/fusermount3 executable (and it has to, because it's a SUID binary that makes FUSE work). That means the executable name is hard-coded, and for libfuse3 it defaults to fusermount3, which may not be available on ancient Linux distros: libfuse doesn't create a fusermount -> fusermount3 symlink in its install step, so if distros have that symlink, it's something non-standard.
That's a really good point, and points towards a problem: "works for me".
I guess the option to just dlopen either library (with a preference to version 3) is still in the game. It provides a smaller binary, and uses what's provided by the system (i.e., the chance for it to fail is really small).
I guess the option to just dlopen either library (with a preference to version 3) is still in the game.
Not when you statically link, at least on musl. musl doesn't support combining static linking and dynamic linking via dlopen.
Do you know how well that musl glibc compat shim (edit: gcompat) works nowadays? Maybe that's an option then. IMO linking statically is not needed if that works reasonably well.
The gcompat shim is a totally unreliable hack job, and it's not a standard musl component. Static linking is the way to go.
What's your proposal to solve the issue found by @haampie? Patch libfuse3 so it will look for all kinds of fusermount binaries?
I just ran into this exact issue today as someone reported one of my projects wasn't working on their distro that only had fuse2 installed, despite everything (fuse3) being statically linked.
Maybe statically linking fuse2 is better? Modern distros with fuse3 still contain the old fusermount binary, right?
@mgord9518 please try https://github.com/probonopd/go-appimage/releases/tag/continuous, they are statically linked with libfuse2.
libfuse doesn't create a symlink fusermount -> fusermount3 in their install step, so if distro's have that symlink, it's something non-standard.
That's a pity! I checked: in Ubuntu this symlink is there, but who guarantees us that it is there on all Linux distributions and will be there forever?
When I asked upstream for fusermount compatibility guarantees, I doubt that even my question was understood.
Relevant discussions:
What's your proposal to solve the issue found by @haampie? Patch libfuse3 so it will look for all kinds of fusermount binaries?
That's what I would do: Search for libfuseN (with N not being hardcoded).
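For illustration, a minimal C sketch of such a non-hardcoded search on the dlopen side; the candidate soname list is only an example, and this is not the actual runtime code:

```c
/* Sketch: look for a FUSE library under several sonames, preferring
 * libfuse3 and falling back to libfuse2, instead of hard-coding one. */
#include <dlfcn.h>
#include <stdio.h>

static void *load_libfuse(void)
{
    const char *candidates[] = { "libfuse3.so.3", "libfuse.so.2", NULL };
    for (int i = 0; candidates[i] != NULL; i++) {
        void *handle = dlopen(candidates[i], RTLD_NOW | RTLD_GLOBAL);
        if (handle != NULL) {
            fprintf(stderr, "using %s\n", candidates[i]);
            return handle;
        }
    }
    return NULL;
}

int main(void)
{
    if (load_libfuse() == NULL) {
        fprintf(stderr, "dlopen(): error loading libfuse\n");
        return 1;
    }
    return 0;
}
```

(On older glibc this needs -ldl at link time.)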
@probonopd sorry I should've clarified, the issue wasn't about the runtime, it was because my project relies on a packaged binary of squashfuse which I have been compiling statically.
Note: even when you link libfuse statically, it will still execute the fusermount/fusermount3 executable (and it has to, because it's a SUID binary that makes fuse work).
Are you sure the suid/root permissions part is required for the limited functionality AppImages use? If I wanted to make this work without any dependency on host-provided infrastructure, I'd lift the code of the fusermount binary into libfuse so there's no need to run any external binaries.
I was surprised to find this out, too, but it seems FUSE needs a setuid binary, fusermount, to allow non-root users to mount things.
I was surprised to find this out, too, but it seems FUSE needs a setuid binary, fusermount, to allow non-root users to mount things.
You can do a quick test of whether it's needed for the AppImage functionality subset: chmod -s fusermount; run the AppImage; chmod +s fusermount.
It's required, even for recent linux kernels.
Maybe a simple patch is to fall back to execvp(fusermount, ...) when fusermount3 isn't in the PATH?
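For illustration, a minimal C sketch of that name fallback. The real libfuse/squashfuse code forks and hands fusermount a socket, so this only shows the PATH-lookup fallback idea; the --version invocation is just an example:

```c
/* Sketch: try fusermount3 first and fall back to fusermount if it is
 * not found in PATH. execvp() only returns on error; ENOENT means the
 * binary does not exist anywhere in PATH. */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

static void exec_fusermount(char *const args[])
{
    execvp("fusermount3", args);
    if (errno == ENOENT)
        execvp("fusermount", args);
    perror("exec fusermount3/fusermount");
}

int main(void)
{
    char *args[] = { "fusermount", "--version", NULL };
    exec_fusermount(args);
    return 1; /* only reached if both exec attempts failed */
}
```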
Testing the experimental static AppImage runtime on Ubuntu 22.04 LTS Live ISO, I wanted to turn an AppImage that needs libfuse 2 into one that doesn't, and a surprise happened.
First, let's try the original AppImage that needs libfuse2:
./pcloud
dlopen(): error loading libfuse.so.2
AppImages require FUSE to run.
You might still be able to extract the contents of this AppImage
if you run it with the --appimage-extract option.
See https://github.com/AppImage/AppImageKit/wiki/FUSE
for more information
So, let's convert this into an AppImage that doesn't need libfuse2 in the system anymore thanks to using the experimental static runtime as follows:
# Extract the AppImage
./pcloud --appimage-extract
# Fix permissions
chmod 0755 squashfs-root/
# Create new AppImage using the experimental static runtime
wget -c https://github.com/$(wget -q https://github.com/probonopd/go-appimage/releases -O - | grep "mkappimage-.*-x86_64.AppImage" | head -n 1 | cut -d '"' -f 2)
chmod +x ./mkappimage-*-x86_64.AppImage
VERSION=1 ./mkappimage-*-x86_64.AppImage squashfs-root/
The surprise comes when I want to run this new AppImage:
./pcloud-1-x86_64.AppImage
Uncaught Exception:
Error: ENOENT: no such file or directory, open 'libfuse.so.2'
at Object.fs.openSync (fs.js:577:3)
at Object.module.(anonymous function) [as openSync] (ELECTRON_ASAR.js:166:20)
at fs.readFileSync (fs.js:483:33)
at fs.readFileSync (ELECTRON_ASAR.js:563:29)
at new DynamicLibrary (/tmp/.mount_pcloudIDhkNE/resources/app/node_modules/ffi/lib/dynamic_library.js:67:21)
at Object.Library (/tmp/.mount_pcloudIDhkNE/resources/app/node_modules/ffi/lib/library.js:45:12)
at initLibrary (/tmp/.mount_pcloudIDhkNE/resources/app/main.js:1460:21)
at Object.<anonymous> (/tmp/.mount_pcloudIDhkNE/resources/app/main.js:1627:1)
at Object.<anonymous> (/tmp/.mount_pcloudIDhkNE/resources/app/main.js:7921:3)
at Module._compile (internal/modules/cjs/loader.js:711:30)
Clearly, the AppImage itself gets mounted and starts to execute the contained Electron-based payload executable. But then the Electron-based payload executable itself seems to trip over the missing libfuse.so.2.
Looks like we are not the only ones affected by Ubuntu dropping that library... any Electron experts here who know what Electron needs it for?
Workaround:
wget https://ftp.fau.de/ubuntu/ubuntu/pool/main/f/fuse/libfuse2_2.9.9-5ubuntu3_amd64.deb
mkdir tmp
dpkg -x libfuse2_*.deb tmp
cp ./tmp/lib/x86_64-linux-gnu/lib*.so* squashfs-root/usr/lib/
VERSION=1 ./mkappimage-*-x86_64.AppImage squashfs-root/
./pcloud-1-x86_64.AppImage
WORKS on the Ubuntu 22.04 LTS Live ISO which lacks libfuse2 :+1:
As a nice side effect, thanks to using zstandard the new AppImage is still smaller than the original one, even though it is using the experimental static runtime and additionally bundles libfuse2 and related libraries inside the AppImage, too.
Maybe a simple patch is to fall back to execvp(fusermount, ...) when fusermount3 isn't in the PATH?
Ideally this would go into the upstream https://github.com/libfuse/libfuse and/or https://github.com/vasi/squashfuse project(s) so that we don't have to patch things locally.
@s-zeid just confirmed that AppImages using the experimental static AppImage runtime work on Alpine Linux which is musl libc based (e.g., the appimagetool AppImage from https://github.com/probonopd/go-appimage/releases/tag/continuous).
To clarify: the runtime seems to work on at least v3.12, but you're still using glibc-linked binaries in the payload. So:
- On some versions there is /lib/ld-linux-x86-64.so.2, but not a symlink at /lib64/ld-linux-x86-64.so.2, yet the payload binaries link to the latter. If I make the symlink myself, the payload runs.
- On other versions there is /lib64/ld-linux-x86-64.so.2. The runtime and payload run on these versions.
In all cases:
- apk add fuse gcompat.
- modprobe fuse after installing fuse.
- apk add fuse-openrc && rc-update add fuse. (In some cases, this might not be needed.)
- gcompat is in the community repo, so it must be uncommented in /etc/apk/repositories.
.To clarify: the runtime seems to work on at least v3.12, but you're still using glibc-linked binaries in the payload.
Thanks for the clarification. In this ticket we are only concerned about the runtime indeed. Getting the payload (application) static or everything bundled (including glibc) is a different issue.
Cool roundtrip exercise, running on FreeBSD:
wget -c https://github.com/$(wget -q https://github.com/probonopd/go-appimage/releases -O - | grep "appimagetool-.*-x86_64.AppImage" | head -n 1 | cut -d '"' -f 2)
chmod +x ./appimagetool-*-x86_64.AppImage
# Let the AppImage extract itself
./appimagetool-*-x86_64.AppImage --appimage-extract
# Use the appimagetool AppImage to convert the AppDir to an AppImage again
VERSION=1 ./appimagetool-*-x86_64.AppImage --appimage-extract-and-run ./squashfs-root/
@s-zeid
you're still using glibc-linked binaries in the payload
...no more!
AppImages should run on all Linux platforms, but currently they don't, because the runtime is dynamically linked against glibc. I tried to run an AppImage on Alpine Linux and it failed because Alpine Linux is built around musl libc instead. I think AppImages should generally include all necessary dependencies, not just some of them. Also, adding a libc would not increase the resulting size much; depending on the libc used, it may add only between 185k and 8M (see the libc comparison chart). And if the binary is also stripped, it can be much less.
AppImage should do something like this: