ChewKeanHo / AutomataCI

An open-source, redistributable, template-guided, and semi-autonomous CI infrastructure readily available for your next project.
Apache License 2.0

Support AppImage #137

Open probonopd opened 10 months ago

probonopd commented 10 months ago

Description

Continuation from https://github.com/ChewKeanHo/AutomataCI/issues/126 by @corygalyna


Happy to help but the old forum has been replaced with the new AppImage forum. Please do ask questions there and our community will do our best to support you. Thanks for your kind understanding.

Expected Behavior

AppImage is available.

Current Behavior

AppImage is not available.

Associated Data Files

No response

corygalyna commented 10 months ago

+1. @hollowaykeanho, I really think this ecosystem should be supported. As for the technical deficiencies, you can implement it the way Cargo & PyPI are handled: make it entirely optional and pull it in on demand.

There is no doubt AppImage's packaging system will damage the already statically compiled binary (tracked under: https://github.com/AppImage/AppImageKit/issues/877), but it's still a use case to consider (e.g. one does not need to install Flathub just to use Flatpak).

Since the AppImage packager is an independent service provider, as long as we spell the problems out clearly in the documentation, our job should be considered complete.

probonopd commented 10 months ago

There is no doubt AppImage's packaging system will damage the already statically compiled binary

By now, we have the statically linked runtime, so using AppImage no longer requires glibc on the system. When using it to package a static binary, the only requirements on the target system are FUSE and a recent kernel.

corygalyna commented 10 months ago

the only requirements on the target system are FUSE and a recent kernel

Correct me if I'm wrong. This means that for a static binary product (already not depending on anything and even operable in the absence of sysfs, like a Docker stretch image), after AppImage packaging, it will now have to depend on sysfs with FUSE.

That's his first point, right?

corygalyna commented 10 months ago

Will there be development progress towards a packaged AppImage that does not depend on an external library (as in FUSE)?

probonopd commented 10 months ago

after AppImage packaging, it will now have to depend on sysfs with FUSE

I have never tried (in fact, never seen) a system without sysfs, so it would be worth a try.

Will there be development progress towards a packaged AppImage that does not depend on an external library (as in FUSE)?

Yes, by using the static runtime, libfuse is no longer required (but the fusermount binary is).

corygalyna commented 10 months ago

I have never tried (in fact, never seen) a system without sysfs, so it would be worth a try.

We have successfully implemented this with some technologies (notably with Go).

libfuse is no longer required (but the fusermount binary is).

As in, is it possible for AppImage to be shipped without any external dependencies (fusermount included) in some way? It will be bloated, yes, but it honors the "one file" paradigm properly.

hollowaykeanho commented 10 months ago

is it possible for AppImage to be shipped without any external dependencies (fusermount included) in some way?

Hi @probonopd, appreciate your responses.

This query is very important because the AppImage packager is altering the customer's product (for the musl and statically compiled case). It is the same situation as the .rpm packaging system stripping the binary by default without consent, but we managed to disable that.

A roadmap for such a development would also be acceptable.

hollowaykeanho commented 10 months ago

Also, regarding cross-compilation, can AppImage run on non-amd64 Linux systems (e.g. aarch64), or is it strictly an amd64 architecture?

Supporting these 2 pipelines:

  1. Everything packed into 1 single AppImage, then wrapped with a platform-identifier initiator script (e.g. filename-universal.appimage)
  2. Multiple AppImages per platform (e.g. filename-amd64.appimage, filename-arm64.appimage)

corygalyna commented 10 months ago

Supporting these 2 pipelines:

I believe (1) is no and (2) is yes, if this page is valid: https://github.com/AppImage/AppImageKit/wiki/Creating-AppImages/cc2441518975caca23e9ce2dba6f08a22c678d1e#processor-architectures

hollowaykeanho commented 10 months ago

Great. I guess the packager will behave similarly to the Red Hat packagers (rpm & flatpak) since no Windows-based or macOS-based version is available (flow: Windows cross-compile to Linux & Darwin cross-compile to Linux). Closest packager I can find is: https://github.com/AppImage/AppImageKit/wiki/Creating-AppImages/cc2441518975caca23e9ce2dba6f08a22c678d1e#processor-architectures

Another problem would be sourcing the packager securely. At the moment, no shasum is available and it does not appear in Homebrew. The easiest fix would be an installer polyglot script like https://sh.rustup.rs, I guess.

corygalyna commented 10 months ago

what about the Go version? https://github.com/probonopd/go-appimage/releases/tag/continuous

hollowaykeanho commented 10 months ago

Will need @probonopd to clarify. All the Go binaries are wrapped inside an AppImage, while AppImage is a Linux-only thing since it needs the FUSE library.

This led to my previous forum question, since Go is the only one that can statically cross-compile for the linux, darwin, and windows platforms from the same source code without much hassle dealing with native libraries. Choosing Go as the language for a packager software is definitely correct, but wrapping a compiled Go static binary inside an AppImage container seriously hamstrings its own packaging capability (in terms of facility coverage).

The good news is AppImage is still a good container for packaging multi-file products (e.g. an e-book web server), but definitely not for a single-app static ELF binary.

corygalyna commented 10 months ago

Does that mean you will support the ecosystem?

hollowaykeanho commented 10 months ago

Not yet - not until the supply chain is stable and secure.

I can't simply script the packaging algorithm like I did for deb, ipk, DOTNET (nupkg), and Chocolatey (nupkg) either, since AppImage's low-level format is quite complicated.

probonopd commented 10 months ago

Let me start by pointing out that AppImage is a self-mounting filesystem image that executes whatever the author of that particular AppImage has put inside.

Technically an AppImage consists of a tiny ELF executable (the "runtime") and a compressed filesystem image (e.g., squashfs with zstandard compression). When one executes an AppImage, the runtime uses FUSE to mount the filesystem, runs the file ./AppRun inside it (which starts the payload application), and unmounts the filesystem again once the payload application is no longer running.

Traditionally, the runtime was dynamically linked against libfuse2 to do this. But then, some (but not all) distributions started shipping libfuse3 instead of libfuse2, with no transition period shipping both. So, with the new static runtime, we don't dynamically link libfuse anymore. However, the mounting still requires a functional FUSE setup, which (at least) means that the kernel needs to provide /dev/fuse and there must be a functional fusermount binary on the system. FUSE must be provided by the system; we cannot bundle FUSE inside the AppImage, because it is partly kernel-provided (e.g., /dev/fuse) and partly setuid root (e.g., fusermount).

I never investigated what "a working FUSE setup" requires "under the hood", as this part is assumed to be managed by the operating system (and indeed, all major mainstream desktop distributions seem to be shipping "a working FUSE setup"). Does it need sysfs? I haven't tested so far.
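
For reference, a rough way to probe for "a working FUSE setup" in the sense described above (a sketch only; the exact requirements may vary between distributions):

# Check the two pieces the static runtime relies on: /dev/fuse and a fusermount binary.
if [ -e /dev/fuse ]; then
    echo "kernel FUSE device present: /dev/fuse"
else
    echo "missing /dev/fuse - FUSE kernel support may not be available" >&2
fi
if command -v fusermount >/dev/null 2>&1 || command -v fusermount3 >/dev/null 2>&1; then
    echo "fusermount binary found"
else
    echo "no fusermount/fusermount3 in PATH" >&2
fi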

As in, is it possible for AppImage to be shipped without any external dependencies (fusermount included) in some way? It will be bloated, yes, but it honors the "one file" paradigm properly.

A recent enough kernel and a working FUSE setup are system requirements for AppImage.

Also, regarding cross-compilation, can AppImage run on non-amd64 Linux systems (e.g. aarch64), or is it strictly an amd64 architecture?

AppImages are platform-specific because the runtime is a platform-specific ELF. So you can make one AppImage for each architecture, but not one AppImage that runs on all architectures. (For that to work, the AppImage runtime would have to be an interpreted script rather than an ELF executable, which would require an interpreter to be available on the system, and would also have other downsides - been there, done that.)

Multiple AppImages per platform (e.g. filename-amd64.appimage, filename-arm64.appimage)

Exactly.

what about the Go version? https://github.com/probonopd/go-appimage/releases/tag/continuous

That repository provides tools, written in Go, for making AppImages. The runtime is not written in Go, but is statically linked C.

AppImage is a Linux-only thing since it needs the FUSE library.

Yes. AppImage is for Linux and Linux-compatible systems with a working FUSE setup (such as Windows WSL2 and the FreeBSD Linuxulator).

Wrapping a compiled Go static binary inside an AppImage container seriously hamstrings its own packaging capability (in terms of facility coverage).

True, Go applications often don't have the same binary packaging issues that applications written in other languages have. So for simple Go applications, AppImage might be overkill. But keep in mind that in addition to the binary itself, an AppImage can hold icons, descriptions, documentation, graphics, fonts, and other assets. So depending on the application, it may be beneficial even for applications written in Go to be shipped as AppImages. (One could also put the binary and its assets into an archive like a .zip, but then one would have to extract it prior to being able to execute it; something that takes extra time and space, especially if the files are large, such as in games.)

I can't simply script the packaging algorithm like I did for deb, ipk, DOTNET (nupkg), and Chocolatey (nupkg) either, since AppImage's low-level format is quite complicated.

Actually it's not that complicated, especially for statically linked Go executables. If you don't like to use existing tools but want to code it yourself, then here are the steps:

  1. Create the ./AppDir directory
  2. Put the statically linked Go executable into ./AppDir/AppRun
  3. Put in desktop files and icon files (I can elaborate on this if needed, or check out how other AppImages do it)
  4. Make a squashfs file with zstandard compression out of ./AppDir
  5. Append (concatenate) that squashfs file to the AppImage runtime of the respective architecture (get them from https://github.com/AppImage/type2-runtime/releases)

That's basically it. Happy to provide more details if needed.
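
For illustration, a minimal sketch of those five steps in POSIX shell, assuming mksquashfs (squashfs-tools 4.4 or later for zstd support) is installed, ./myapp is the statically linked Go executable, myapp.desktop and myapp.png are placeholder metadata files, and the matching runtime has already been downloaded as ./runtime-x86_64 from the type2-runtime releases page:

# 1. Create the AppDir
mkdir -p AppDir
# 2. The statically linked executable becomes AppDir/AppRun
cp myapp AppDir/AppRun
chmod +x AppDir/AppRun
# 3. Desktop entry and icon (placeholder names)
cp myapp.desktop myapp.png AppDir/
# 4. Make a squashfs image with zstandard compression out of AppDir
mksquashfs AppDir myapp.squashfs -noappend -comp zstd
# 5. Concatenate the architecture-specific runtime and the squashfs image
cat runtime-x86_64 myapp.squashfs > myapp-x86_64.AppImage
chmod +x myapp-x86_64.AppImage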

hollowaykeanho commented 10 months ago

That repository provides tools, written in Go, for making AppImages. The runtime is not written in Go, but is statically linked C.

I see. That explains the notice to use the C version.

AppImage might be overkill. But keep in mind that in addition to the binary itself, an AppImage can hold icons, descriptions, documentation, graphics, fonts, and other assets.

This is what I can see from my side as well (as in, I see a web-server-type application). There is no doubt this is the value of the AppImage ecosystem.

We should both agree that is the end-user (consumer) case, while I am only speaking for the developer side: the packager.

If you don't like to use existing tools but want to code it yourself, then here are the steps:

I think that's a misunderstanding. As specified, this aligns with my finding that it is not possible to do so, since the low-level parts (squashfs and runtime) are C ELF binaries, so it is similar to Red Hat's rpmbuild or flatpak-builder. Hence, for AppImage, it's better to stick to the appimagetool binary executable.

The low-level scripting means something like understanding the engineering specifications at byte level and then scripting the packager using only PowerShell + POSIX-type shell. For example: to develop the .deb packager, I had to go deep into its engineering spec (https://www.debian.org/doc/debian-policy/index.html) and then script the compiler from scratch (https://github.com/ChewKeanHo/AutomataCI/blob/main/automataCI/services/compilers/deb.sh for UNIX OS and https://github.com/ChewKeanHo/AutomataCI/blob/main/automataCI/services/compilers/deb.ps1 for Windows OS). It is the same story with Chocolatey's nupkg (Windows OS), where the DOTNET nupkg engineering spec defines it as a zip package containing a compulsory XML metadata file.

If you notice, both macOS and Windows can package both nupkg and .deb locally without any issues in the event of cross-compilation, even though they are not the intended end-user platforms.


I'm simply asking for your supply-chain releases at https://github.com/AppImage/AppImageKit/releases to be tidied up with something like a shell script similar to Rustup RS (quick and easy hack) or an upstream to Homebrew (might take time but 100% definitely recommended). Both methods emphasize 1 thing:

  1. Securely checksum the AppImageKit and systematically source the tools into a local build system.

Here's a case: this upcoming Rust patch's rustup installer is a copy of their rustup.rs. I'm only importing and reviewing that script, not assembling their installers on their behalf into my own rustup.rs.

This is the last hurdle. Once resolved, I will develop the integration.

corygalyna commented 10 months ago

something like a shell script similar to Rustup RS (quick and easy hack)

Good idea: https://github.com/ChewKeanHo/AutomataCI/issues/141

probonopd commented 10 months ago

I'm simply asking for your supply-chain releases at https://github.com/AppImage/AppImageKit/releases to be tidied up with something like a shell script similar to Rustup RS (quick and easy hack) or an upstream to Homebrew (might take time but 100% definitely recommended). Both methods emphasize 1 thing:

Actually, the version that will become the main one is at https://github.com/AppImage/appimagetool/releases.

I am not familiar with Rustup RS or what it does.

So you need a shell script that downloads the latest version of appimagetool and the runtime?

hollowaykeanho commented 10 months ago

So you need a shell script that downloads the latest version of linuxdeployqt and the runtime?

Nope. It's for downloading the toolkit from https://github.com/AppImage/appimagetool/releases. Will stick to your main documentation.

I am not familiar with Rustup RS or what it does.

The rustup.rs is actually a BASH script. A version of it is available here: https://github.com/ChewKeanHo/AutomataCI/blob/experimental/automataCI/services/compilers/rust-rustup.sh

You don't have to mess with curl's TLS like they did (personally, I think they over-complicated things with their evangelism). The most important tasks are the curl/wget download and a shasum (256/512) check against the payload before use. That's why it has to originate from the source: the shasum values are embedded in it.

If a 2-step shell script setup is preferred, as in:

  1. A version-constant shell script downloads + shasums a version-specific shell script; AND
  2. The version-specific shell script does the download job

I can work on the version-constant one, so you only need to work on the version-specific shell script.
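
For illustration only, a rough sketch of the version-constant (Level 1) side; the URL and checksum below are hypothetical placeholders:

# Hypothetical location and pinned sha256 of the version-specific (Level 2) installer script.
URL="https://example.com/appimage-installer-1.0.0.sh"
SHA256="0000000000000000000000000000000000000000000000000000000000000000"

curl -fsSL "$URL" -o appimage-installer.sh || exit 1
# Verify before use, then hand over to the Level 2 script.
downloaded="$(shasum -a 256 appimage-installer.sh | cut -d ' ' -f 1)"
[ "$downloaded" = "$SHA256" ] || { echo "checksum mismatch" >&2; exit 1; }
sh appimage-installer.sh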


I can share with you some POSIX shell code that can speed up the development:

  1. OS and ARCH runtime identification, the same as Go's (lines 16 - 63) - https://github.com/ChewKeanHo/AutomataCI/blob/experimental/automataCI/services/io/init.sh#L16
  2. Shasum libraries (SHASUM::create_file()) - https://github.com/ChewKeanHo/AutomataCI/blob/experimental/automataCI/services/checksum/shasum.sh

Feel free to copy-paste them over and assemble yours. =)

Please remember to change the PROJECT_OS and PROJECT_ARCH to other naming conventions to avoid conflicts. TQ

probonopd commented 10 months ago

Yes, sorry, I meant appimagetool.

As far as I am concerned, the latest continuous build is the supported one. (Maybe you feel strongly that there should be releases made, but be aware that this is developed by a very small team and we don't have the manpower to support various branches and releases unless a volunteer wants to do it.)

So, something like this?

ARCH=x86_64 # aarch64, armhf, i686, x86_64
wget -c https://github.com/$(wget -q https://github.com/AppImage/appimagetool/releases/expanded_assets/continuous -O - | grep "appimagetool-${ARCH}.AppImage" | head -n 1 | cut -d '"' -f 2)
chmod +x appimagetool-*.AppImage
# TODO: Somehow get a checksum from the GitHub API and compare it to the checksum of the downloaded file
# Then do the same for the runtime from https://github.com/AppImage/type2-runtime

hollowaykeanho commented 10 months ago

Erms, this is what I meant: a Lv2 installer script (handcrafted by myself, but it can be automated):

appimage-installer.txt

appimage-installer-UPDATED.txt

A few things to note:

  1. The head part is what your release automation creates, by filling in the shasum values (I use sha256) and the target filenames based on what you support. Then append the rest like a static file (indicated by: # AUTOMATION - APPEND THE FOLLOWING TO THE ABOVE).
  2. Make sure the shasum algorithm is easily updatable (e.g. 256 to 512) in case of quantum cryptanalysis. Since we're using the external shasum program, it should be fine, I guess.
  3. It should use a constant filename for every platform (that's the point) so that automation tools can always source from a constant target and a consistent path while not disrupting the supplier's (as in your) workflow.
  4. Level 1 deals with these Level 2 installer scripts, which is my part of the work (it should be invisible to you). Strictly speaking, your human customers can also run the Level 2 script manually and use it --> 2 birds, 1 arrow.
    1. Basically, everyone just needs to secure your Lv2 installer script and be done with it, instead of digging through the catalog and guessing here and there.
  5. The code is POSIX-compliant (dash-capable) in case of future expansion to macOS or Windows, if technically possible.
  6. The updated version basically adds the OS to the shasum query function, just in case the macOS (darwin) folks use it.
  7. If possible, add a version number to the filename so that, on the Level 1 side, we can download by version point consistently.

Try it yourself locally (please do remember to update the shasum values before use).
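
For reference, a stripped-down skeleton of what such a Level 2 installer script could look like (the values here are hypothetical placeholders; the attached appimage-installer.txt remains the actual proposal):

# ---- header: generated by the release automation (placeholder values) ----
VERSION="continuous"
FILE_LINUX_AMD64="appimagetool-x86_64.AppImage"
SHA256_LINUX_AMD64="0000000000000000000000000000000000000000000000000000000000000000"
# AUTOMATION - APPEND THE FOLLOWING TO THE ABOVE
case "$(uname -m)" in
x86_64)
    FILE="$FILE_LINUX_AMD64"
    SHA256="$SHA256_LINUX_AMD64"
    ;;
*)
    echo "unsupported architecture" >&2
    exit 1
    ;;
esac
curl -fsSL "https://github.com/AppImage/appimagetool/releases/download/${VERSION}/${FILE}" -o "$FILE" || exit 1
downloaded="$(shasum -a 256 "$FILE" | cut -d ' ' -f 1)"
[ "$downloaded" = "$SHA256" ] || { echo "checksum mismatch" >&2; exit 1; }
chmod +x "$FILE"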

probonopd commented 10 months ago

OK, now I understand better what you are looking for.

I see that there are hardcoded sha256 sums in the script. What I don't understand is how those are supposed to be updated:

  1. You could host this script, but then you would have to update the sha256 sums whenever a new version of linuxdeployqt comes out
  2. We could release this script together with the linuxdeployqt binaries, with the correct sha256 sums. But then the question is, wouldn't one have to check the checksum of the script itself also?

Which one do you prefer? Are there better options? It looks like the GitHub API doesn't provide checksums for release assets (details).

hollowaykeanho commented 10 months ago

You could host this script, but then you would have to update the sha256 sums whenever a new version of linuxdeployqt comes out

This is Layer 1, which is a manual audit method.

As I mentioned earlier, don't worry about this. You just have to publish the Layer 2 installer script and I will deal with Level 1 (I roughly have an idea how to do it autonomously; being tracked under Issue #141).

We could release this script together with the linuxdeployqt binaries, with the correct sha256 sums. But then the question is, wouldn't one have to check the checksum of the script itself also?

This is Layer 2. Yes, please proceed to host it together. This is definitely the right direction because dealing with 1 installer script is WAY better than dealing with a combination of artifacts for a product line, for both sides. It decouples both of our build processes.


To answer your question accurately (though IMO it is a bit too much to ask): you can try certifying the installer script itself (e.g. publish a GPG armored public key + a GPG armored detached signature certificate). I implemented this method by default in this facility; the GPG cryptography math doesn't lie, and it makes the entire process a breeze (e.g. automatically trust your installer script and its shasum values once they pass GPG verification; see the sketch after the list below), because:

  1. The public key rarely changes (I keep it on the Layer 1 side).
  2. The installer script and signature certificate change with each version, as per release.
    1. This also verifies that the artifact really originated from you - much better than a shasum alone.
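
For concreteness, the consumer-side verification under this scheme boils down to two standard GPG commands (the key and file names below are hypothetical):

# One-off (Layer 1 side): import the publisher's long-lived, armored public key.
gpg --import appimage-publisher.asc
# Per release: verify the armored, detached signature of the installer script.
gpg --verify appimage-installer.sh.asc appimage-installer.sh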

FYI, you're asking about a painful industrial problem when you expand the subject of supported OSes to Windows (confusing EV Certificates) and macOS (the proprietary Mac-only notarizer).

It looks like the GitHub API doesn't provide checksums for release assets

It's actually part of your internal build process: just shasum the artifact right after its successful compilation and dump the value to a file; then, at the end of the build cycle, generate the installer script header and append the rest, as sketched below.
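
As a sketch of that build-cycle step (the file names are illustrative): shasum each artifact right after it compiles, emit the header, then append the static body:

# Right after a successful compilation of an artifact:
SHA256_LINUX_AMD64="$(shasum -a 256 appimagetool-x86_64.AppImage | cut -d ' ' -f 1)"
# At the end of the build cycle, generate the installer script header...
{
    printf 'FILE_LINUX_AMD64="%s"\n' "appimagetool-x86_64.AppImage"
    printf 'SHA256_LINUX_AMD64="%s"\n' "$SHA256_LINUX_AMD64"
} > appimage-installer.sh
# ...then append the static body below the
# "# AUTOMATION - APPEND THE FOLLOWING TO THE ABOVE" marker.
cat installer-static-body.sh >> appimage-installer.sh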

hollowaykeanho commented 2 months ago

Marked for development. References:

  1. https://docs.appimage.org/packaging-guide/manual.html#manual-packaging
  2. https://docs.appimage.org/packaging-guide/from-source/linuxdeploy-user-guide.html