7Ji-PKGBUILDs / .meta


firefox-mpp #54

Closed hbiyik closed 5 days ago

hbiyik commented 2 weeks ago

https://github.com/hbiyik/agrrepo/tree/master/mpp/firefox-aarch64-mpp

If there is interest here is a cross compiling package fromx86_64 to aarch64 with some unofficial hacks in PKGBUILD which allows me to compile ff in about 5 minutes.

In case there is interest, I can make the package native aarch64 so that it can be distributed here as binaries.

PS: it needs the env variable ~~MOZ_DISABLE_RDD_SANDBOX=1~~ (edit: not anymore)

7Ji commented 2 weeks ago

Thanks for the work. It would be nice if the native PKGBUILD were available here (as always, this org is just a host for PKGBUILDs, not their built binaries).

I'm open to adding it to the build list of 7Ji/archrepo. My current build server is a 48-core, 96-GiB VM on a HiSilicon Kunpeng-920 server with a 256G vDisk carved from my 1TiB PCIe 4.0 x4 drive, so there is enough power to handle more heavy packages.

But does the current "cross" build really work? In particular, does it even link without library path issues? I never had much hope in using makepkg in a cross setup because it naively assumes too many things about the host arch. And by the time it really works with the current dependency tree from a different arch, you'd essentially already be running a full QEMU environment.

At least my attempt with it failed on my x86_64 machine:

 0:04.25 DEBUG: <truncated - see config.log for full output>
 0:04.25 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_1_1.o) is incompatible with elf64-x86-64
 0:04.25 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_2_1.o) is incompatible with elf64-x86-64
 0:04.25 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_4_1.o) is incompatible with elf64-x86-64
 0:04.25 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_8_1.o) is incompatible with elf64-x86-64
 0:04.25 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_16_1.o) is incompatible with elf64-x86-64
 0:04.25 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_1_2.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_2_2.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_4_2.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_8_2.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_16_2.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_1_3.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_2_3.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_4_3.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_8_3.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_16_3.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_1_4.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_2_4.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: /usr/bin/../lib64/gcc/aarch64-linux-gnu/14.1.0/libgcc.a(cas_4_4.o) is incompatible with elf64-x86-64
 0:04.26 DEBUG: | ld.lld: error: too many errors emitted, stopping now (use --error-limit=0 to see all errors)
 0:04.26 DEBUG: | clang: error: linker command failed with exit code 1 (use -v to see invocation)
 0:04.26 ERROR: Couldn't find one that works
*** Fix above errors and then restart with "./mach build"
==> ERROR: A failure occurred in build().
    Aborting...

All that being said, I wouldn't actively use the new Firefox build myself. I'm using an OrangePi 5 Plus exclusively as my office desktop, and as its power is not enough to carry both a lot of terminals and a browser, I just removed the browser from it and run Firefox through waypipe from my 5600G PC, which is way, way smoother than running it on the 5600G itself, even able to output a full 1080p144Hz or 4K60Hz without frame drops.
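For reference, a minimal sketch of that kind of remote-browser setup (the SSH host name `pc` is a placeholder, and waypipe has to be installed on both machines):

```bash
# Start Firefox on the remote 5600G PC but display it in the local
# Wayland session on the OrangePi; waypipe proxies the Wayland protocol
# over the SSH connection.
waypipe ssh pc firefox
```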

hbiyik commented 2 weeks ago

Firefox is mostly a Rust + clang build, so there isn't much of an issue in terms of cross-compiling. Somehow its deps match the Arch Linux ones. Also, I am building with bootstrap=enable, and I think that's the exact purpose of bootstrapping (I don't know the details exactly, though).

But you are right, the quality of that package is at the "it works on my machine" level.

I remember I had the errors you posted at first, and they were fixed when I reset LDFLAGS, CFLAGS and CXXFLAGS; it seems like makepkg injects -m64 from time to time, most likely from makepkg.conf.
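For context, a minimal sketch of what that reset could look like in a PKGBUILD (this is the general idea, not the exact firefox-aarch64-mpp recipe):

```bash
build() {
  # Drop the host-oriented flags that makepkg exports from makepkg.conf
  # (the ones that can smuggle in -m64 / x86_64 tuning), so the cross
  # toolchain only sees flags meant for the aarch64 target.
  unset CFLAGS CXXFLAGS LDFLAGS

  # ...the usual ./mach configure && ./mach build steps follow here
}
```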

So I would suggest letting this ticket stay open for a little while, because adding the package is somewhat easy, but maintaining it becomes a pain. If there is not enough interest, there is no need to deal with it.

I would particularly appreciate it if someone also took over the package maintenance responsibility; I can maintain the patches, but doing that together with package maintenance is becoming a burden.

JFLim1 commented 1 week ago

From a user perspective it would be good to have another browser with mpp support.

Thanks for the good work both of you, @7Ji and @hbiyik, are doing.

hbiyik commented 1 week ago

I also added the mpp service to the security sandbox exceptions, so there is no need for the env variable MOZ_DISABLE_RDD_SANDBOX; it just works out of the box.

Let me see: if I can make a maintenance-free package (i.e. one that bumps the version continuously) I might take the responsibility and make the package native :)

hbiyik commented 1 week ago

@7Ji I added some inheritance flavour to bash scripts :)

What do you think about such an approach? https://github.com/hbiyik/agrrepo/blob/master/mpp/firefox-mpp/PKGBUILD

The first part of the script can be an external library and reused for other packages.

With that, the original PKGBUILD can be kept as-is, its maintenance can be reused, and synchronization with the actual ALARM versioning can be guaranteed.

With a few lines, a new package based on an existing package can be created. In the Firefox case, which is a rather complicated package, it takes up 10~15 lines of code...

7Ji commented 1 week ago

No please no. A couple of reasons:

  1. To eval from the Internet is really asking for trouble; it is at the same level of evilness as curl/wget | sh. There are many reasons against this and I could write a whole page of cyber-security explanation about it. Please don't do this.
  2. You only included the PKGBUILD but not the other files. This could work currently, but what if one day upstream changes this?
  3. This makes the PKGBUILD dynamic and non-reproducible. I'm mostly against a PKGBUILD being dynamic: a PKGBUILD should persistently yield the same build recipe and metadata against the same source.
  4. This makes parsing the PKGBUILD slow and network-dependent. The latter is also why I insist on a persistent PKGBUILD: every time an external tool tries to parse and understand the PKGBUILD, that curl needs to be executed, and it strictly needs network access.

I had also written about this earlier on the aur-general mailing list, when a bad linux-bin PKGBUILD that used curl and uname everywhere was removed and its maintainer came to the list to complain. My points there also apply here:

https://lists.archlinux.org/archives/list/aur-general@lists.archlinux.org/message/WDOENADS6QOFTHRXAN4LVWDFQTO43LGG/

If you really want to do "inclusions", then at least version-track the original PKGBUILD: have a local script to fetch it as PKGBUILD.in or something, then source that file in your PKGBUILD.
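Roughly something like this sketch (the script name, the upstream URL, and the override line are placeholders, not an actual recipe):

```bash
# update.sh -- run manually when bumping; commit the resulting PKGBUILD.in
# alongside the PKGBUILD so the repo stays self-contained and auditable.
curl -sfo PKGBUILD.in 'https://example.org/alarm/extra/firefox/PKGBUILD'

# PKGBUILD -- sources the version-tracked local snapshot, never the network.
source ./PKGBUILD.in
pkgname=firefox-mpp   # then override only what differs
```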

hbiyik commented 1 week ago

I think you didn't grasp the exact idea; maybe you just saw the remote sourcing and didn't look further.

I do not want to start an argument battle, but I just want to state what the initial idea was.

1. To eval from the Internet is really asking for trouble

If it were just an eval in the middle of the PKGBUILD then yeah, I would agree. But that would be not only because a code snippet is evil, but also because there would be no proper way of auditing the remote source changes, or the security level of the remote zone.

What is suggested here is to have another library (the first part of the code) which actually does the remote sourcing. The PKGBUILD would then source a single local, audited library, e.g. /usr/lib/alarm_inherit.sh, and call the audited function with the remote repo name, e.g. extra/firefox, passed as a literal. This ensures that only PKGBUILDs from the ALARM repo are pulled, and auditing the PKGBUILDs remains achievable.

I just gave the example in a single PKGBUILD with inline comments since it is a POC.
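In outline, the split looks something like this (the library path and function name follow the POC's naming; the fetch URL is a placeholder):

```bash
# /usr/lib/alarm_inherit.sh -- installed locally, audited once
inherit_alarm() {
    local repo_pkg="$1"   # e.g. "extra/firefox", always passed as a literal
    # Hypothetical: pull the upstream PKGBUILD from the ALARM tree only,
    # then source it into the caller's scope so its variables and functions
    # become the base of the derived package.
    source <(curl -sf "https://example.org/alarm/${repo_pkg}/PKGBUILD")
}

# PKGBUILD of the derived package
source /usr/lib/alarm_inherit.sh
inherit_alarm extra/firefox
pkgname=firefox-mpp            # override only what actually differs
```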

2. You only included the PKGBUILD but not the other files. This could work currently, but what if one day upstream changes this?

The library handles that: https://github.com/hbiyik/agrrepo/blob/f8b9d736a6215abee008022370ea69df91252f5e/mpp/firefox-mpp/PKGBUILD#L15-L31

3. This makes the PKGBUILD dynamic and non-reproducible. I'm mostly against a PKGBUILD being dynamic: a PKGBUILD should persistently yield the same build recipe and metadata against the same source.

This can be achieved by explicitly setting a variable with the hash of the PKGBUILD. As long as the inherited PKGBUILD is reproducible, the inheriting package will be reproducible, since it will contain the hash of the sourced data/code.
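A minimal sketch of that kind of pinning (the URL and hash are placeholders):

```bash
# Pin the exact upstream PKGBUILD this package inherits from.
_inherited_url='https://example.org/alarm/extra/firefox/PKGBUILD'
_inherited_sha256='0000000000000000000000000000000000000000000000000000000000000000'

_fetch_inherited() {
    curl -sfo PKGBUILD.inherited "$_inherited_url" || return 1
    # Abort if the fetched recipe no longer matches the pinned hash,
    # so the derived package cannot change silently between runs.
    echo "$_inherited_sha256  PKGBUILD.inherited" | sha256sum -c - || return 1
    source ./PKGBUILD.inherited
}
```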

4. This makes parsing the PKGBUILD slow and network-dependent. The latter is also why I insist on a persistent PKGBUILD: every time an external tool tries to parse and understand the PKGBUILD, that curl needs to be executed, and it strictly needs network access.

Yes, that's the trade-off between continuously updating the source with trivial redundant changes, or adding an extra ~10 ms to PKGBUILD execution.

I had also written about this earlier on the aur-general mailing list, when a bad linux-bin PKGBUILD that used curl and uname everywhere was removed and its maintainer came to the list to complain. My points there also apply here:

https://lists.archlinux.org/archives/list/aur-general@lists.archlinux.org/message/WDOENADS6QOFTHRXAN4LVWDFQTO43LGG/

If you really want to do "inclusions", then at least version-track the original PKGBUILD: have a local script to fetch it as PKGBUILD.in or something, then source that file in your PKGBUILD.

Anyway thanks for the feedback.

7Ji commented 1 week ago

maybe you just saw the remote sourcing and didn't look further.

I re-read the code before sleeping and didn't go back to change my statement 2, but I should've edited the original. The point is explained more thoroughly in this reply.

If it were just an eval in the middle of the PKGBUILD then yeah, I would agree. But that would be not only because a code snippet is evil, but also because there would be no proper way of auditing the remote source changes, or the security level of the remote zone.

What is suggested here is to have another library (the first part of the code) which actually does the remote sourcing. The PKGBUILD would then source a single local, audited library, e.g. /usr/lib/alarm_inherit.sh, and call the audited function with the remote repo name, e.g. extra/firefox, passed as a literal. This ensures that only PKGBUILDs from the ALARM repo are pulled, and auditing the PKGBUILDs remains achievable.

The idea is similar to what chaotic-aur does with their Bash-based build system, but they don't try to download a single PKGBUILD file and then calculate which other files to pull dynamically. Instead they clone the whole repo and then append their override to the PKGBUILD. You can't just pull a PKGBUILD, parse its sources to download its local sources, and consider it already local. A PKGBUILD is always related to its accompanying files, and that's why Arch officially maintains their PKGBUILDs as whole git repos and (repo get) from the Arch build tools does exactly a (git clone).
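In shell terms that model is roughly the following (the repository URL and the override file name are shown for illustration only):

```bash
# Clone the full upstream packaging repo so the PKGBUILD and all of its
# accompanying files (patches, .install, mozconfig fragments, ...) come along.
git clone https://gitlab.archlinux.org/archlinux/packaging/packages/firefox.git
cd firefox

# Append the downstream override instead of rewriting the recipe,
# then build as usual.
cat ../firefox-mpp.override >> PKGBUILD
makepkg -s
```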

There are other things you didn't pull: source_arch, install. And for source_arch, how do you get CARCH, which lives in makepkg.conf? It's not always the same as $(uname -m), not only because users might modify it, but also because it could be rewritten by the conventions of that architecture: on Loong Arch Linux it would be rewritten to loong64 while $(uname -m) is loongarch64.

If you want to iterate through all sources for the generic arch and the current arch, then use the libmakepkg functions.
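If I remember the helpers correctly, that looks roughly like the sketch below; the exact file path and function signatures live in libmakepkg on the installed system, so treat them as assumptions to verify:

```bash
# makepkg ships these helpers with libmakepkg; check /usr/share/makepkg
# on a real system for the exact location before relying on this.
source /usr/share/makepkg/util/pkgbuild.sh

CARCH=aarch64        # would normally come from makepkg.conf, not uname -m
source ./PKGBUILD

# Fills the named array with source= plus source_$CARCH= entries.
get_all_sources_for_arch 'all_sources'
printf '%s\n' "${all_sources[@]}"
```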

And there are even more gotchas: even though, as I said, one should not write a PKGBUILD with a plain Bash script in mind, many do, and those that do may just use source/. to include their own snippets. The paths of those snippets wouldn't be possible to get by simply sourcing the PKGBUILD, as sourcing it already breaks when it fails to source its snippets.

This can be achieved by explicitly setting a variable with the hash of the PKGBUILD. As long as the inherited PKGBUILD is reproducible, the inheriting package will be reproducible, since it will contain the hash of the sourced data/code.

Due to those additional files you may fail to pull, as mentioned above, at that stage you might as well just track the upstream git commit (and subtree) instead of a file hash. But then this is exactly what chaotic-aur does with their build system.

Yes, that's the trade-off between continuously updating the source with trivial redundant changes, or adding an extra ~10 ms to PKGBUILD execution.

The point is not against it being updated across different runs, it's against it being updated within the same run. A PKGBUILD would be included multiple times if the user splits the makepkg invocation into multiple commands, like running --nobuild first to prepare the sources and then --noextract multiple times to re-build on the same extracted source. If the PKGBUILD changes under the hood, then what do the later --noextract builds build against? It's at best old source + new logic, but sometimes that's broken logic, as build() could reference metadata and some bad PKGBUILDs do source extraction in build().
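Concretely, the split workflow being described is something like:

```bash
makepkg --nobuild            # download + extract sources, stop before build()
makepkg --noextract          # run build()/package() on the already-extracted src/
makepkg --noextract --force  # rebuild later; the PKGBUILD is re-sourced each time
```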

hbiyik commented 5 days ago

Thanks for your feedback. I have burned up my rock5b, so it won't matter much for me anyway.

I will have to stop maintaining the rest of the packages since I will not have a device to work on anymore. Maybe it is a good idea to call for a new maintainer for those packages; if not, maybe set them to archived if they don't work anymore.

Quite annoyed that I burned the whole goddamn thing, but yeah, it is what it is. We had our fun; now it's time to put an end to it for me.

7Ji commented 5 days ago

Sorry to hear about that.

Thanks for your contribution. I'd always welcome you back if you decide to pick these up again.

Closing then.