Open twaik opened 9 months ago
This reminds me of https://github.com/termux/termux-docker/issues/40...
Anyway, it is not necessary to do a (simulated) native build of libvte. The pre-generated GIR xml will be copied to the source dir on the device, see https://github.com/termux/termux-packages/blob/891cad02f0099d42f4023a40383c67bba15adeaa/packages/gobject-introspection/gi-cross-launcher-on-device.in#L29.
I removed the use of pre-generated GIR xml on purpose. It is a first try at using Android native binaries on the Ubuntu CI. I am planning to remove all pre-generated xml in favor of native generation.
Also, it is not simulation: the `bionic-host` package lets us execute binaries natively for x86_64 and i386, and semi-natively (through qemu-user-static) for armhf and aarch64 binaries.
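To make the native/semi-native split above concrete, here is a minimal sketch (the helper name is mine, not existing termux-packages code) of how a CI script could decide whether a given bionic binary runs directly on the host or needs qemu-user-static (registered through binfmt_misc):

```shell
# Hypothetical helper: decide how a Termux target arch can be executed
# on a CI host of a given architecture.
runner_for_arch() {
  host="$1"    # architecture of the CI machine, e.g. x86_64
  target="$2"  # architecture of the bionic binary to run
  case "$target" in
    x86_64|i686)
      # An x86_64 kernel executes 32-bit i686 binaries directly (multilib),
      # so both run natively on an x86_64 runner.
      if [ "$host" = "x86_64" ]; then echo native; else echo qemu; fi ;;
    aarch64|arm)
      # Foreign architectures go through qemu-user-static emulation.
      if [ "$host" = "$target" ]; then echo native; else echo qemu; fi ;;
    *)
      echo unsupported ;;
  esac
}
```

For example, `runner_for_arch x86_64 aarch64` would print `qemu`, matching the "semi-natively" case described above.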
The purpose of the `bionic-host` package is not only running generators; it may also be used to run `make`/`meson`/`cmake` test commands to check packages. That will let the auto-updater check whether a package or its dependencies can be updated safely for other packages.
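As a sketch of what that updater check could look like (none of this is existing termux-packages code), the updater would first have to pick the right test driver for whichever build system produced the package's build directory:

```shell
# Hypothetical helper for the auto-updater: print the test command matching
# the build system found in a package's build directory, or fail if none
# is recognized.
detect_test_command() {
  dir="$1"
  if [ -f "$dir/build.ninja" ]; then
    echo "meson test -C $dir"        # meson/ninja build tree
  elif [ -f "$dir/CMakeCache.txt" ]; then
    echo "ctest --test-dir $dir"     # cmake build tree
  elif [ -f "$dir/Makefile" ]; then
    echo "make -C $dir test"         # plain make
  else
    return 1                         # no known build system, skip tests
  fi
}
```

The updater could then execute the printed command under `bionic-host` and treat a non-zero exit status as "do not auto-update".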
Wow, I did not know about termux-docker. It seems like I can update `bionic-host` to build all the libraries present in the termux-docker repo and move the workflow to termux-packages. We can make it trigger every time the bootstrap is rebuilt.
Seems like it should be safe. I'll try this.
Can we do that in dockerized Ubuntu (I mean, do we have permissions to push a kernel command)? Is it safe for other CI users (I mean, will it be applied globally on the CI runner host)? Does it comply with the GitHub TOS?
IIUC GitHub creates a brand new VM for every runner, so I think it is safe and will not break the GitHub TOS.
> Also, it is not simulation: the `bionic-host` package lets us execute binaries natively for x86_64 and i386, and semi-natively (through qemu-user-static) for armhf and aarch64 binaries.
This reminds me of `pypy`. I used to use `proot` and termux-docker to do some cross translation for `pypy` and `pypy3`.
> The purpose of the `bionic-host` package is not only running generators; it may also be used to run `make`/`meson`/`cmake` test commands to check packages. That will let the auto-updater check whether a package or its dependencies can be updated safely for other packages.
Using QEMU and termux-docker to run Android's ARM binaries is very experimental. I think we should do enough discussion and testing before switching to this method.
Actually, I've done a lot of work with termux-docker and QEMU. IIRC, on QEMU < 5, `python` crashes (https://github.com/termux/termux-packages/blob/68a94d5d356ad2d309afea5ff876ce69d4076fa7/packages/pypy/build.sh#L84); on QEMU <= 6, `clang++` hangs for some reason (https://github.com/termux/termux-docker/issues/28, https://github.com/termux/termux-packages/blob/68a94d5d356ad2d309afea5ff876ce69d4076fa7/packages/pypy/build.sh#L188); and on QEMU < 7.1, `python2` (2.7.18) crashes with OpenSSL (https://github.com/termux/termux-packages/blob/68a94d5d356ad2d309afea5ff876ce69d4076fa7/packages/pypy/build.sh#L182).
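Those version-dependent breakages suggest guarding the build steps behind a QEMU version check. A minimal sketch of such a guard (function names are mine, not taken from the linked build.sh files):

```shell
# True when version $1 is strictly older than version $2,
# using GNU sort's component-wise version comparison.
version_lt() {
  [ "$1" != "$2" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Extract the version number from a qemu-user binary; the first line of
# `qemu-aarch64 --version` looks like "qemu-aarch64 version 7.2.0 ...".
qemu_version() {
  "$1" --version | head -n1 | grep -oE '[0-9]+\.[0-9]+(\.[0-9]+)?' | head -n1
}
```

A build script could then do something like `if version_lt "$(qemu_version qemu-aarch64-static)" 7.1; then ... skip the OpenSSL-sensitive step ...; fi`.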
Besides the QEMU issues, Android binaries are not guaranteed to work properly under such a FAKE Android environment, whether it is `bionic-host` or termux-docker, I suppose. I used to use it to test `python-numpy`; there will be many false negative results if you use it to run tests. Besides, I used to use termux-docker (on a real ARM server, without QEMU) to compile `r-base` (https://github.com/termux-user-repository/tur/tree/r-base-on-device) because it doesn't support cross compilation. The binary crashes and I still haven't figured out why.
The latest qemu-user-static seems to be pretty stable.
It seems like we should wait for a while to get aarch64 binaries working natively, without qemu. https://github.com/github/roadmap/issues/836
> It seems like we should wait for a while to get aarch64 binaries working natively, without qemu. github/roadmap#836
Not possible to directly use this runner in this repository, because the Android NDK toolchain doesn't have an official build for aarch64 GNU/Linux. Lzhiyong maintains a toolchain for Termux aarch64, so I think it is possible to build such a toolchain for aarch64 GNU/Linux.
If GitHub Actions provides such a runner, it is also possible to launch an ARM-based Android emulator (avd or cuttlefish) or an Android container using lxc (waydroid) or docker (redroid), and build these binaries natively in a REAL Android environment. I used to work on this in tur-avd, but I haven't worked on it in a while because my self-hosted ARM runner expired.
We can try to build NDK natively for aarch64 and use it. Or use it with qemu. Or we may try to use clang built with termux.
> We can try to build NDK natively for aarch64 and use it. Or use it with qemu. Or we may try to use clang built with termux.
Running aarch64/arm clang with qemu is not acceptable to me. It takes almost six times longer than running it natively, see https://github.com/termux-user-repository/tur/actions/runs/6876043598.
What about termux's clang?
> What about termux's clang?
Emmm... I maintain a repo that uses termux-docker to build packages in termux-user-repository/tur. I used to use a native aarch64 server to build packages, but my server expired, so I switched to QEMU a while ago. Just like what I showed before, building `python-tokenizers` with qemu and clang took 1h, while running it natively takes about 10min.
That means we can use termux's native clang, right?
> That means we can use termux's native clang, right?
Of course, if GitHub provides an aarch64 runner. But some packages like `r-base` don't work in the fake environment for some reason...
> Not possible to directly use this runner in this repository, because the Android NDK toolchain doesn't have an official build for aarch64 GNU/Linux.
I don't know if GitHub will make those ARM CI runners available for free to OSS projects, but if they do, we could keep building Termux packages on Ubuntu x86_64 as always, then copy them over to the ARM runners and test them there in a separate CI job. I do this on my Swift CI, because the Android emulator only has hardware acceleration on the GitHub macOS runner, so it runs much faster there.
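The build-here/test-there split could be wired up roughly like this (a hypothetical workflow sketch, not an existing termux-packages file; the package name, test script, and the arm64 runner label are all placeholders for whatever GitHub eventually ships):

```yaml
# Hypothetical two-job workflow: cross-compile on x86_64, test on arm64.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build-package.sh -a aarch64 libvte   # example package
      - uses: actions/upload-artifact@v4
        with:
          name: debs-aarch64
          path: output/*.deb
  test:
    needs: build
    runs-on: ubuntu-24.04-arm   # placeholder arm64 runner label
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: debs-aarch64
      - run: ./run-package-tests.sh ./*.deb   # hypothetical test driver
```

Artifacts pass the built debs between the jobs, so the expensive compile still happens on the fast x86_64 runners while only the tests pay the ARM-runner cost.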
Or we can order some ARM servers and use them as self-hosted runners. If it will be cheaper, of course.
Does docker on macOS use qemu for native aarch64 containers? If not, we may try to use macOS runners for this, without the performance hit.
The architecture of the GitHub-hosted macOS runner is x86... They may add an aarch64 runner next year, see https://github.com/github/roadmap/issues/819.
Hetzner provides ARM-based servers. Can we use them? The prices are pretty reasonable.
Actually, we have a somewhat better solution: we can use glibc packages instead of the hostbuild. I mean, we may build glibc packages that export the files needed by bionic packages to some subpackage, and then fetch those subpackages during a normal build. @sylirre @truboxl @licy183 @finagolfin What do you think about that? So glibc packages would exist for two purposes:
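To illustrate the idea, such an export could be expressed as a subpackage script in the `TERMUX_SUBPKG_*` style used by termux-packages. The script name and the file list below are made up for the example; the real list would be whatever the bionic builds actually need from the glibc prefix:

```shell
# Illustrative subpackage script, e.g. glibc-cross-files.subpackage.sh.
# Splits the host-side files needed by bionic builds into their own deb,
# which a normal build could then fetch as a build dependency.
TERMUX_SUBPKG_DESCRIPTION="glibc files required when cross-building bionic packages"
TERMUX_SUBPKG_INCLUDE="
bin/g-ir-scanner
lib/libgirepository-1.0.so*
share/gir-1.0/*.gir
"
```

The normal build would then depend on this subpackage instead of running a separate hostbuild step.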
I mean setting up docker when we are updating setup-ubuntu script.
I think we do not need aarch64/armhf binaries to check updates. But we can modify the updater script to build x86_64 packages instead of aarch64 and run their tests. We can even check package compatibility by running the tests of reverse dependencies (if, of course, we pack the tests as subpackages). In that case it would be even more effective than regular automatic updates.
Hi. There is a problem with 32-bit (i686 and armhf) bionic binaries. They fail with
Is it safe to add that to the docker image startup script and to the `package_updates` and `packages` workflows?

> Can we do that in dockerized Ubuntu (I mean, do we have permissions to push a kernel command)? Is it safe for other CI users (I mean, will it be applied globally on the CI runner host)? Does it comply with the GitHub TOS?