fsquillace / junest

The lightweight Arch Linux based distro that runs, without root privileges, on top of any other Linux distro.
GNU General Public License v3.0

Update junest-x86_64.tar.gz inbuilt packages #342

Closed · ivan-hc closed this issue 8 months ago

ivan-hc commented 8 months ago

Hi, happy new year @fsquillace , I hope you're well

I use JuNest in a lot of my projects to bundle AppImages in PRoot mode (see https://github.com/ivan-hc/ArchImage), so thanks to you it is easier to bundle packages in portable mode and bring more recent software to the masses. VLC, GIMP, OBS Studio, MPV and much more were difficult to bundle and no one wanted to create AppImages for them, but now all this is possible, thanks to you!

All this work takes time building and rebuilding packages during tests before they are published, and I've seen that the first time we run sudo pacman -Syy and sudo pacman -Syu there are a lot of packages to update (69 right now), because junest-x86_64.tar.gz contains old packages.

Nothing wrong with that, I can bundle packages in 2-3 minutes from the repositories and 15-20 minutes from the AUR, but updating the junest-x86_64.tar.gz archive would save time when I test a new app to upload.

Can you do that?

PS: I've seen you were using Travis for that, but I don't know its current status; I use GitHub Actions for my workflows.

ivan-hc commented 8 months ago

I've done it myself; let me know if you're interested in a pull request:

I figured this out through my scripting experience with my ArchImages.

Again, thank you for all this @fsquillace

fsquillace commented 7 months ago

Hey @ivan-hc , happy new year and thanks for the kind words :) I am glad to see that JuNest is valuable for others.

About GitHub Actions, it was something I wanted to try, but I guess it will take some time to migrate from Travis to GA. I do not know if it supports Docker either. Building the JuNest image requires running the build script from an Arch Linux machine; this is achievable through Travis thanks to its Docker support. If GA supports that too, I think it would be feasible to move to it.
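Something along these lines might work, if GA really supports job-level containers; an untested sketch on my side (workflow name, image tag and build entry point are assumptions):

```yaml
# Untested sketch: run the build job inside an Arch Linux container
# on a GitHub-hosted Ubuntu runner.
name: build-junest-image
on: workflow_dispatch
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: archlinux:latest   # official Arch Linux Docker image
    steps:
      - uses: actions/checkout@v4
      - run: pacman -Syu --noconfirm base-devel git
      - run: ./bin/junest build   # assumed entry point for the build script
```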

Thanks! Filippo

fsquillace commented 7 months ago

BTW, I have just built a new version of JuNest now. It should have all packages updated.

ivan-hc commented 7 months ago

In the previous months I had to relax the signature check level to make pacman work, because I got an error message about an old key, or something like that:

sed -i 's/#SigLevel/SigLevel/g' ./.junest/etc/pacman.conf
sed -i 's/Required DatabaseOptional/Never/g' ./.junest/etc/pacman.conf

However, I started building AppImages using the archive I've uploaded in my fork, and it works great, without needing Docker.

That said, I don't think you strictly need Docker for this; take a look at my workflow's JuNest-updater.yml file.

Change this script to use the direct link to your archive and use my workflow above pointing to your script. I'll show you that it's easier than you would expect! ;)

I am glad to see that JuNest is valuable for others.

You did something GREAT! I share your repository in all my projects that use it. Take a look here:

all this is thanks to you!

And I also have great news! Do you remember my old issue about portability? Previously I had to use proot to make the AppImages portable. This is the script I use in my workflows to patch proot:

# REMOVE "READ-ONLY FILE SYSTEM" ERRORS
sed -i 's#${JUNEST_HOME}/usr/bin/junest_wrapper#${HOME}/.cache/junest_wrapper.old#g' ./.local/share/junest/lib/core/wrappers.sh
sed -i 's/rm -f "${JUNEST_HOME}${bin_path}_wrappers/#rm -f "${JUNEST_HOME}${bin_path}_wrappers/g' ./.local/share/junest/lib/core/wrappers.sh
sed -i 's/ln/#ln/g' ./.local/share/junest/lib/core/wrappers.sh

Well... that is no longer the case. I left the script above in place, but I also added this line:

sed -i 's#--bind "$HOME" "$HOME"#--bind /opt /opt --bind /usr/lib/locale /usr/lib/locale --bind /etc /etc --bind /usr/share/fonts /usr/share/fonts --bind /usr/share/themes /usr/share/themes --bind /mnt /mnt --bind /media /media --bind /home /home --bind /run/user /run/user#g' .local/share/junest/lib/core/namespace.sh

Now I can build the AppImages everywhere and for everyone with JuNest in "normal mode"!

Now that I no longer need proot, I'm trying to find a way to let bwrap use the drivers from the host... and here comes the difficult part.

I'm working on other apps to export "around the world", for example Bottles, and with this one I have many troubles, because I need a way to make the app work with any host GPU. You can see the script I use to build it here:

https://github.com/ivan-hc/Bottles-appimage/blob/main/bottles-junest.sh

I have an old Nvidia card, so I can't use hardware acceleration in 64-bit games with the newer drivers (I use Debian, NVIDIA GeForce GT 710/PCIe/SSE2, OpenGL ES 3.2 NVIDIA 470.223.02).

I know that you have an Intel GPU; I saw you mention that in another issue. Maybe this would work for you, I don't know:

https://github.com/ivan-hc/Bottles-appimage/releases

I contacted the main developer of Bottles, who is Italian like me, and I'm trying to find a solution for this.

If you can implement this it would be great!

I am looking around at other projects that have already tried this, for example Conty; they use a function to detect Nvidia drivers on the host. You may be interested.
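Just to give the idea, this is roughly what such a detection can look like (my own hedged sketch, not Conty's actual function; the parsing is an assumption). The proprietary driver exposes its version in /proc/driver/nvidia/version:

```shell
#!/bin/sh
# Hedged sketch (not Conty's actual code): read the host Nvidia driver
# version from a /proc/driver/nvidia/version-style file, if present.
get_nvidia_version() {
    # print the first dotted-number token on the "NVRM version" line
    awk '/NVRM version/ { for (i = 1; i <= NF; i++)
            if ($i ~ /^[0-9]+\.[0-9]+/) { print $i; exit } }' "$1"
}

# demo with a fake file, since build machines usually have no Nvidia driver
printf 'NVRM version: NVIDIA UNIX x86_64 Kernel Module  470.223.02  Sat Oct 14 2023\n' > /tmp/nvidia-version-demo
get_nvidia_version /tmp/nvidia-version-demo   # prints 470.223.02
```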

On the other hand, I have tried using --bind to extend $JUNEST_HOME into another directory with the same structure while launching Bottles, for example:

if ! [ -d "$HOME/.local/share/bottles/junest" ]; then
    mkdir -p "$HOME/.local/share/bottles/junest"
    rsync -av -f"+ */" -f"- *" "$HERE/.junest/" "$HOME/.local/share/bottles/junest/"
fi

and then I tried to mount directories and install something with pacman into the still-working archive... but without success. I want to make the archive smaller: right now it is 900 MB, and about 4 GB extracted; the biggest directories are /usr/lib/wine (about 800 MB), /usr/lib32 (about 900 MB) and /usr/lib/dri (400 MB). Maybe I can compress them and have the internal script of the AppImage extract them into the parallel, alternative $JUNEST_HOME directory.
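The compress-then-extract idea could look roughly like this (a hedged sketch with stand-in paths, "bigdir" playing the role of usr/lib/wine; nothing here is my final script):

```shell
#!/bin/sh
# Hedged sketch: pack a large directory at build time and unpack it
# lazily on first run. "bigdir" is a stand-in for usr/lib/wine.
set -e
APPDIR="$PWD/demo-appdir"
JUNEST_HOME="$PWD/demo-junest"

# --- build time: compress the directory, then drop the original ---
mkdir -p "$JUNEST_HOME/usr/lib/bigdir" "$APPDIR"
echo "payload" > "$JUNEST_HOME/usr/lib/bigdir/file"
tar -czf "$APPDIR/bigdir.tar.gz" -C "$JUNEST_HOME/usr/lib" bigdir
rm -rf "$JUNEST_HOME/usr/lib/bigdir"

# --- first run: extract only if the directory is still missing ---
if [ ! -d "$JUNEST_HOME/usr/lib/bigdir" ]; then
    tar -xzf "$APPDIR/bigdir.tar.gz" -C "$JUNEST_HOME/usr/lib"
fi
cat "$JUNEST_HOME/usr/lib/bigdir/file"   # prints "payload"
```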

I have a lot of work to do on this, but if you can make JuNest use hardware acceleration, I can remove all this and make the app smaller.

It's a mess.

Sorry for the many words, but I'm excited about all this. You have no idea what your project has managed to make me do.

EDIT: I've tried to run SuperTuxKart for Windows 64-bit in Bottles using the nouveau drivers on the host, and it works! I don't know if it is possible to use nouveau drivers in the JuNest guest over Nvidia proprietary drivers installed on the host.

fsquillace commented 7 months ago

That said, I don't think you strictly need Docker for this; take a look at my workflow's JuNest-updater.yml file.

The junest build command needs to run on an Arch Linux machine. I see you are using Ubuntu in line 13; they do not provide Arch Linux images, so a virtualization layer such as Docker is needed for this to work.

I have a lot of work to do on this, but if you can make JuNest use hardware acceleration, I can remove all this and make the app smaller.

I guess that's tricky, and it does not necessarily depend on JuNest itself. Drivers are only needed by the host Linux kernel, which is outside JuNest. If the drivers are not installed on the host, JuNest cannot do much about it.

ivan-hc commented 7 months ago

The junest build command needs to run on an Arch Linux machine. I see you are using Ubuntu in line 13; they do not provide Arch Linux images, so a virtualization layer such as Docker is needed for this to work.

The workflow downloads the archive of the JuNest image on Ubuntu, installs it, updates it and re-uploads it. It works for me; I've tested the result in JuNest normally, even without using it inside an AppImage.

I guess that's tricky, and it does not necessarily depend on JuNest itself. Drivers are only needed by the host Linux kernel, which is outside JuNest. If the drivers are not installed on the host, JuNest cannot do much about it.

I started experimenting with some additional environment variables to check for the host's libraries; take a look here:

https://github.com/ivan-hc/Bottles-appimage/blob/main/AppRun

It is not much, but by commenting/uncommenting variables I get different outputs. My tests have just started.

ivan-hc commented 7 months ago

Hi @fsquillace , talking about the workflow run I use, here are the steps I've just done:

  1. in this commit I've changed my script to make it point to the main archive of JuNest you have uploaded, https://github.com/ivan-hc/junest/commit/13be0216119cd1969e804a2e49e1f3f5063c20f1 , with no special patches. The script you can test is this:
    
#!/bin/sh

# SET APPDIR AS A TEMPORARY $HOME DIRECTORY, THIS WILL DO ALL WORK INTO THE APPDIR
HOME="$(dirname "$(readlink -f $0)")"

# DOWNLOAD AND INSTALL JUNEST (DON'T TOUCH THIS)
git clone https://github.com/fsquillace/junest.git ~/.local/share/junest
./.local/share/junest/bin/junest setup

# UPDATE ARCH LINUX IN JUNEST
./.local/share/junest/bin/junest -- sudo pacman -Syy
./.local/share/junest/bin/junest -- sudo pacman --noconfirm -Syu
echo yes | ./.local/share/junest/bin/junest -- sudo pacman -Scc

2. now watch how my workflow run worked: https://github.com/ivan-hc/junest/actions/runs/7598762951
3. I restored my script to make it point again to my new release in this commit https://github.com/ivan-hc/junest/commit/57013f4dcf8ef8734d31e5f64d5c614bd357c535 , then I started a new workflow run that points to my release, and it worked too: https://github.com/ivan-hc/junest/actions/runs/7598772415

CONCLUSIONS: you don't need docker to update the archive of JuNest. Believe me.

All you need is a SHELL script like the one I wrote and a workflow YAML file like the one below:

https://github.com/ivan-hc/junest/blob/master/.github/workflows/JuNest-updater.yml

Open a new repository to test all this and tell me what you think about this.

I'm still creating ArchImages (JuNest-based Appimages) using this stuff.

If all this works for you, add a workflow file in /.github/workflows like [this](https://github.com/ivan-hc/junest/blob/master/.github/workflows/cronjob.yml) and go traveling around the world for at least one year; you won't have to care about new updates anymore ;)

You will thank me

EDIT: just one thing: /etc/pacman.d/mirrorlist points to "mirror.rackspace.com". I would add something like this to the script:

rm -R ./.junest/etc/pacman.d/mirrorlist
wget -q https://archlinux.org/mirrorlist/all/ -O - | awk NR==2 RS= | sed 's/#Server/Server/g' >> ./.junest/etc/pacman.d/mirrorlist

so by default the user gets the "worldwide" mirrors working out of the box.
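In case the awk part looks obscure: with RS= awk runs in paragraph mode (records are separated by blank lines), so NR==2 keeps the second block of the downloaded mirrorlist, which I believe is the "Worldwide" section. A tiny demo with fake input:

```shell
#!/bin/sh
# Demo of the paragraph-mode trick with fake mirrorlist input:
# RS= splits records on blank lines, NR==2 keeps the second record,
# sed then uncomments the Server lines.
printf '## header\n## comment block\n\n## Worldwide\n#Server = https://example.org/$repo/os/$arch\n\n## Austria\n#Server = https://at.example.org/$repo/os/$arch\n' |
awk 'NR==2' RS= | sed 's/#Server/Server/g'
```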

I say this because until now I've always used the default mirror in my workflows, and I often got this error message:

error: failed retrieving file '{PACKAGENAME}.pkg.tar.zst' from mirror.rackspace.com : The requested URL returned error: 404
warning: failed to retrieve some files
...


where {PACKAGENAME} is a dependency that was not found, and "some files" are not "some" but "a lot" or "too many" xD

fsquillace commented 7 months ago

Thanks @ivan-hc for all the info. Very helpful :)

Could you try running ./.local/share/junest/bin/junest build -n? I would expect it will not work unless GitHub Actions is running on an Arch Linux machine.

ivan-hc commented 7 months ago

Could you try running ./.local/share/junest/bin/junest build -n? I would expect it will not work unless GitHub Actions is running on an Arch Linux machine.

What do you mean?

PS: I use Debian as a host, and all my AppImages based on JuNest work like a charm.

Also, my JuNest builds are daily, and they are now "virgin" (based on your build).

ivan-hc commented 7 months ago

@fsquillace

EDIT: the point of this work is to recycle the already existing build by downloading it, installing it, updating it, repackaging it and then re-uploading it, every day.

That said, I'm not recreating a build from scratch on an Arch Linux base like you normally do, I'm taking the one you built and upgrading it. That's all.
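The "repackaging" step itself is nothing fancy; something like this (archive name and layout assumed, with a dummy chroot standing in for the real updated ~/.junest):

```shell
#!/bin/sh
# Hedged sketch of the repack step: after pacman has updated the chroot,
# re-create the release tarball so the workflow can re-upload it.
set -e
WORKDIR="$PWD/repack-demo"
mkdir -p "$WORKDIR/.junest/etc"            # stand-in for the updated chroot
echo "SigLevel = Never" > "$WORKDIR/.junest/etc/pacman.conf"
tar -czf "$WORKDIR/junest-x86_64.tar.gz" -C "$WORKDIR" .junest
tar -tzf "$WORKDIR/junest-x86_64.tar.gz"   # lists the archived .junest contents
```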

The command ./.local/share/junest/bin/junest build -n has nothing to do with what we are doing here.

ivan-hc commented 7 months ago

@fsquillace to be more clear: I have a repository I use as a workbench before I publish a real project, this one.

I want you to understand what my method does:

  1. This is the script that installs JuNest on GitHub Actions; it installs JuNest normally, like anyone else would, but on GitHub Actions, so it uses the stock archive YOU have uploaded:
    
#!/bin/sh

# SET APPDIR AS A TEMPORARY $HOME DIRECTORY, THIS WILL DO ALL WORK INTO THE APPDIR
HOME="$(dirname "$(readlink -f $0)")"

# DOWNLOAD AND INSTALL JUNEST (DON'T TOUCH THIS)
git clone https://github.com/fsquillace/junest.git ~/.local/share/junest
./.local/share/junest/bin/junest setup

# UPDATE ARCH LINUX IN JUNEST
./.local/share/junest/bin/junest -- sudo pacman -Syy
./.local/share/junest/bin/junest -- sudo pacman --noconfirm -Syu
echo yes | ./.local/share/junest/bin/junest -- sudo pacman -Scc

2. This is the workflow file (I've named it .github/workflows/junest-update-stock-release.yml), which must be used only the first time (set to "workflow_dispatch"); it will repack the ~/.junest directory after the script has finished, then re-upload the archive in "releases" and in "Continuous build" with the tag "continuous"... this will be the first test

name: JuNest Builder

concurrency:
  group: build-${{ github.ref }}
  cancel-in-progress: true

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:

This is what the workflow has done:

https://github.com/ivan-hc/My-workbench/actions/runs/7603285873/job/20704842034

and this is what has been uploaded

https://github.com/ivan-hc/My-workbench/releases/tag/continuous

As you can see, the archive is 191 MB; try using JuNest to install it and tell me if you notice a difference from your build.

And again, in my fork I update the build every day and I use it in AppImages... all those I've listed in the previous comment, here: https://github.com/fsquillace/junest/issues/342#issuecomment-1883440954

If all this works for you, you can use the script and the workflow that I've uploaded on my fork at https://github.com/ivan-hc/junest, i.e.

https://github.com/ivan-hc/junest/blob/master/.github/workflows/JuNest-updater.yml

https://github.com/ivan-hc/junest/blob/master/junest-updater.sh

by replacing https://github.com/ivan-hc with https://github.com/fsquillace, and you're ready for a coffee here in Italy xD