sailfishos-patches / patchmanager

Patchmanager for SailfishOS
https://openrepos.net/content/patchmanager/patchmanager

[CI workflow] Update LATEST to 4.5.0.16 and more #417

Closed Olf0 closed 1 year ago

nephros commented 1 year ago

Fine.

However, are you aware of this? (I'm aware that this here is about the latest release, but it may be useful to make a note for older build versions.)

To make it clear, rpms created with the 4.5.0.16 toolchain can no longer be installed on e.g. a Jolla 1 phone, because the rpm version in SFOS 4.3 cannot handle zstd compression it seems.

https://forum.sailfishos.org/t/q-4-5-will-switch-rpm-compression-what-does-that-mean-for-existing-rpms/13153/17

Olf0 commented 1 year ago

However, are you aware of this?

Faintly; i.e., reading this I remember the fact, and logically that this will break something sooner or later, but I was not aware of the exact consequences: with binaries built for which release this occurs.

Hence, thank you for bringing this up.

(I'm aware that this here is about the latest release, but it may be useful to make a note for older build versions.)

Nah, here I would rather prefer a technical solution, as it seems easily feasible.

What are the binaries built via CI runs intended for?

To make it clear, rpms created with the 4.5.0.16 toolchain can no longer be installed on e.g. a Jolla 1 phone, because the rpm version in SFOS 4.3 cannot handle zstd compression it seems.

So, what do you suggest? Jolla phone users can use the build for the "oldest supported" release, and the proper solution is IMO SailfishOS:Chum, by providing builds for each release (as long as Jolla still maintains the SFOS-OBS); but adding another SFOS release to the "CI on tag creation" workflow is fine. Which one out of {4.1.0, 4.2.0, 4.3.0, 4.4.0}?

*: I considered switching that to i486, because the workflows likely run on x86-64, so that may be faster, as it is "half way" a native architecture; still it is 32-bit vs. 64-bit, and who knows which hardware GitHub really uses. I have forgotten time and time again to instrument my test workflow accordingly, so I do not know for sure (and even then it would only be a single sample). Do you know which shell command (sequence) is suitable to reliably output the CPU architecture and the OS architecture? It is on Ubuntu 22.04.
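(For reference, a combination like the following should report both reliably on an Ubuntu 22.04 runner; all of these are stock commands, only the split into "CPU" vs. "OS" architecture is my interpretation of the question:)

```
# Hardware architecture as reported by the kernel (e.g. x86_64):
uname -m
# Word size of the userland (e.g. 64):
getconf LONG_BIT
# Debian/Ubuntu package architecture (e.g. amd64), i.e. what the OS was built for:
dpkg --print-architecture
# More detail on the CPU itself (model, flags, virtualisation support):
lscpu
```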

Olf0 commented 1 year ago

To make it clear, rpms created with the 4.5.0.16 toolchain can no longer be installed on e.g. a Jolla 1 phone, because the rpm version in SFOS 4.3 cannot handle zstd compression it seems.

BTW, they cannot anyway, due to other breaking changes.

But please see my research posted at FSO on the RPM compression topic and my query there for more checks; according to it, RPMs built by Coderus' docker images with the Sailfish-SDK (e.g., here at GitHub by CI workflows; Storeman provides a larger variety of builds) or at the SailfishOS-OBS (e.g., from the SailfishOS:Chum repository) should work well.
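(For such checks, one can query the payload compressor of a built RPM directly; PAYLOADCOMPRESSOR is a standard rpm query tag, the file name below is just a placeholder:)

```
# Show which compressor was used for the payload of a built (S)RPM:
rpm -qp --queryformat '%{PAYLOADCOMPRESSOR}\n' path/to/package.rpm
# "zstd" payloads are the ones the old rpm in SFOS 4.3 and earlier cannot unpack;
# "xz" and "gzip" payloads should remain readable there, as far as the payload format is concerned.
```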

Ultimately, for large RPMs (>> 100 kByte) one should use xz, inserting the option T (multi-threaded compression, introduced with xz 5.2.0) only if the RPM cannot be installed on SailfishOS < 3.2.0: %define _binary_payload w2T.xzdio, or %define _binary_payload w2.xzdio for RPMs which may be installed on SailfishOS < 3.2.0.

xz -1 and gzip -9 are almost comparable in compression ratio (usually xz -1 is slightly better) and speed (usually gzip -9 is slightly faster), but gzip is roughly twice as fast on decompression. Both gzip's and xz's decompression speed is almost independent of the compression parameter (they even become slightly faster with higher values, because less data has to be moved; in contrast to e.g. bzip2, which is very slow when decompressing and becomes even slower with higher values). gzip -6 usually provides almost the same compression ratio as gzip -9, but compresses much faster. xz -2 compresses slightly better at the expense of an almost doubled compression time.

For their SRPMs, gzip -6 is more than sufficient "…, because SRPM's contents is compressed already" (still, most Linux distributions use gzip -9 for them): %define _source_payload w6.gzdio

Sources: [1] (xz uses LZMA; these benchmarks were carried out by the maker of xz / liblzma) and [2].

For small RPMs (< 100 kByte) and their SRPMs, even gzip -1 or -2 is generally fine (but not gzip -0, which means "no compression"), and anything greater than gzip -6 is absolute overkill: %define _binary_payload w6.gzdio / %define _source_payload w2.gzdio

Side note: For really large RPMs (>> 1 MByte) a higher compression parameter up to xz -6 is desirable despite its slow compression speed.
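(If one wants to reproduce such numbers for a concrete package, a rough comparison on the uncompressed payload is easy to script; only rpm2cpio and the two compressors are assumed to be available, and the file name is a placeholder:)

```
# Extract the uncompressed cpio payload of an RPM (rpm2cpio undoes the payload compression):
rpm2cpio path/to/package.rpm > payload.cpio

# Compare the compressed sizes at the levels discussed above:
for c in 'gzip -6' 'gzip -9' 'xz -1' 'xz -2'; do
  printf '%s: %s bytes\n' "$c" "$($c -c payload.cpio | wc -c)"
done
```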

nephros commented 1 year ago

and who knows which hardware GitHub really uses.

AFAIK that's not really relevant, because all SFOS SDK tooling is translated via qemu to the target arch, and that qemu is always amd64. So the build host will always have to be amd64 - and yes, i486 targets usually compile faster than the others (and I guess that's also because of qemu).

I have never inspected GH in detail but if it's anything like GitLab, then the layers are

[hardware host] -> ( [likely another layer like Amazon or Azure ] ) -> [ ci runner docker  ] -> [ coderus docker ] -> [ qemu-translated cross-compiling environment ]

so in the worst case there are going to be several arch conversions down to the next layer, only one of which we can control.

(BTW, that [ ci runner docker] -> [ coderus docker ] double setup is why you can't use any of the supplied caching on Gitlab, but it's not obvious.)

nephros commented 1 year ago

So, what do you suggest?

I'm not sure, and I also must admit I haven't paid much attention to where which packages are built and which artefacts are released.

In general I would prefer if there were only one source of available packages, not several, simply because subtle differences in the build environments could lead to different installation or compatibility problems.

E.g. if the %vendor settings differ, users of pkcon will not be happy (I think zypper at least tells you that you could do a vendor change).

Even within OBS, that will be different between one of the true Chum envs, a branch of one of those, and a "home" project, unless you take care of it especially.

Then we know that there are differences in the pre-installed packages between OBS, SDK and coderus-docker (which leads to incomplete .specs from users who only use the SDK). We don't have that problem; it's just another example of how the envs differ.

But these are just some thoughts; you are doing an admirable job in handling all the build and release workflows here and elsewhere.

Olf0 commented 1 year ago

and who knows which hardware GitHub really uses.

AFAIK that's not really relevant, because all SFOS SDK tooling is translated via qemu to the target arch, and that qemu is always amd64. So the build host will always have to be amd64

Sounds reasonable, but …

  • and yes, i486 targets usually compile faster than the others (and I guess that's also because of qemu).

… I have not observed this: Sometimes I watch the CI workflow's log output "live" to spot things (it reminds one of a Linux compilation on a real i486, well, with Linux v2.x.y back then; modern kernel sources would take ages to compile on an i486). But, as it is definitely not slower, I will switch from armv7hl to i486 for the CI runs on "pulls".

I have never inspected GH in detail but if it's anything like GitLab, then the layers are [hardware host] -> ( [likely another layer like Amazon or Azure ] ) -> [ ci runner docker ] -> [ coderus docker ] -> [ qemu-translated cross-compiling environment ]

At GitHub I do not know anything about the first two levels you denoted for Gitlab.com, because that is opaque. What one can choose from is {Ubuntu, MacOS, Windows} in a VM (I actually prefer "on" a virtual machine). On Ubuntu (likely on MacOS and Windows, too) dockerd is already running (= pre-installed), and one can easily install anything else via (Ubuntu-style) sudo apt-get install …. Coderus' Sailfish-SDK docker image is started with docker run …, which pulls it from Docker's default image repo at docker.io if it was not already pulled by an earlier docker pull|run call in that VM instance. And yes, within the docker "container" sb2 (Scratchbox 2) runs qemu in a chroot environment. So there is at least one opaque full virtualisation layer underneath the Ubuntu, plus the docker-based container in which Coderus' Sailfish-SDK image is started, in / on which the chroot-qemu is controlled by mb2 (the outside tool for controlling the "inside") and by sb2 inside of it.
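(Schematically, such a CI build step boils down to something like the following; the image tag, mount path and build-target name are assumptions based on typical Coderus-based workflows, not copied verbatim from ours, and copying the resulting RPMs back out of the container is omitted:)

```
# Start Coderus' Sailfish-SDK image with the checkout mounted in, then let mb2 drive
# sb2/qemu inside the container to build for the chosen Scratchbox2 target:
docker run --rm -v "$PWD:/share" coderus/sailfishos-platform-sdk:4.5.0.16 /bin/bash -c \
  'mkdir -p ~/build && cp -r /share/. ~/build && cd ~/build && mb2 -t SailfishOS-4.5.0.16-armv7hl build'
```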

so in the worst case there are going to be several arch conversions down to the next layer, only one of which we can control.

Rather 2½, WRT "control", but there is no way one can choose something different.

(BTW, that [ ci runner docker] -> [ coderus docker ] double setup is why you can't use any of the supplied caching on Gitlab, but it's not obvious.)

On GitHub, the issue is that their cache action requires one to point to a user-accessible directory or file(s) to cache. Currently dockerd runs in the root context and downloads docker images to some non-user-accessible location on the filesystem. Running Docker "rootless" is the way out, but I am busy with oh so many detours; I still hope to return to that task some time.
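(A quick way to illustrate the problem is to ask the running dockerd where it keeps its data; the rootless path below is what Docker's documentation describes as the default, I have not verified it on a runner yet:)

```
# Where the currently running dockerd stores images and layers:
docker info --format '{{ .DockerRootDir }}'
# Rootful default: /var/lib/docker (not user-accessible, hence not usable with the cache action)
# Rootless default: ~/.local/share/docker (user-accessible, so the cache action could point there)
```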

P.S.: While I fully understand and share your political preference for Gitea and instances using it (like Codeberg), I completely fail to understand your preference for Gitlab: they have proven multiple times to be assholes (massive data collection and evaluation, specifically for users of the "free tier"; disabling features for the "free tier" and eliminating that code from the FLOSS releases, e.g. "git mirroring" etc.; Gitlab is a nice, typical example of "core-ware", which does not make a large difference to closed source software, due to an evilly phrased CLA I would not sign, in contrast to e.g. Jolla's etc.) to an extent GitHub / Microsoft cannot exhibit, or they would be societally / politically dead (and their management seems to know that, in contrast to Gitlab, where only some engineers have understood this). What convinced me to primarily use GitHub is their rich and very well working functionality, compared to both Gitlab and Gitea.


So, what do you suggest?

I 'm not sure, and I also must admit I haven't paid much attention to where which packages are built and which artefacts are released.

In general I would prefer if there were only one source of available package, not several, simply because of subtle differences in the build environments could lead to different installation or compatibility problems.

Well, I agree with this assessment from a user's and a user support team's (us) perspective, but from a developer's perspective (also us) it is nice to have multiple build environments, because proper code should build or fail in the same manner; otherwise something is wrong. We have observed this in the past: builds "suddenly" failing when moving from 32-bit to 64-bit, to a different CPU-arch, from big to little endianness (MIPS, Power) etc.; or, even worse, building succeeds but the executables show different behaviour.

E.g. if the %vendor settings differ, users of pkcon will not be happy (I think zypper at least tells you you could do a vendor change).

Yes, all tools utilising libzypp exhibit "vendor stickiness", but I recently alleviated that for SailfishOS:Chum with Rinigus' help.

Even within OBS, that will be different between one of the true Chum envs, a branch of one of those, and a "home" project, unless you take care of it especially.

Not any longer: one can now set the vendor to anything, presumably even a bare Vendor: tag, for SailfishOS:Chum, overriding its default vendor chum (untested). It is now only the Jolla Store that unconditionally sets the vendor, IIRC to "jolla" (unchecked). One can mimic that at all other locations (OpenRepos, SailfishOS-OBS, including the SailfishOS:Chum community repository), if desired / desirable.
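(To see what a given build environment actually stamped into a package, the Vendor tag can be queried directly; the file name is a placeholder:)

```
# Show the vendor recorded in a built RPM; rpm prints "(none)" if no vendor was set:
rpm -qp --queryformat '%{VENDOR}\n' path/to/package.rpm
```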

Then we know that there's differences in the pre-installed packages between OBS, SDK and coderus-docker (that leads to incomplete .specs from users who only use the SDK). We don't have that problem, it's just another example of how the envs differ.

Are you sure? Coderus' docker images are auto-generated from the Sailfish-SDK releases, and both download the dependencies of a piece of software when it is built; I also have the impression that the SailfishOS-OBS with its DoD repos (Download on Demand) yields very much the same.

But these are just some thoughts; you are doing an admirable job in handling all the build and release workflows here and elsewhere.

a. There is always room for improvement. b. I am learning a lot, partially also things I was not at all keen to study.

Olf0 commented 1 year ago

@nephros, now it has become a bit more than the original title indicates: If you want to re-review, I will gladly consider any new comments.