fsquillace / junest

The lightweight Arch Linux based distro that runs, without root privileges, on top of any other Linux distro.
GNU General Public License v3.0

Trying to import Hardware Acceleration / OpenGL from host to guest using environment variables to bind #344

Closed ivan-hc closed 5 months ago

ivan-hc commented 7 months ago

Hi @fsquillace , as we have already discussed elsewhere, I'm creating JuNest-based AppImages in a project I've named "ArchImage".

As you have said at https://github.com/fsquillace/junest/issues/342 :

I guess that's tricky and not necessarily it depends on JuNest itself. Drivers are only needed for the host linux kernel which is outside JuNest. If drivers are not installed by the host, JuNest cannot do much for it.

And as @ayaka14732 has suggested at https://github.com/fsquillace/junest/issues/209 , it's necessary to build the same Nvidia drivers on the guest.

It almost seems like nothing can be done, but solutions sometimes appear out of nowhere. For now, I have managed to mount some components of the host system on the guest using various environment variables in my tests.

I plan to share them here to start a discussion about it.

I would also like to mention other contributors to this project who certainly know more than I do, and invite them to join this research: @cosmojg @cfriesicke @escape0707 @schance995 @neiser @hodapp512 @soraxas I would like to share what I'm working on.

These are just some functions I'm working on to make my Bottles AppImage work... for now without great progress, other than the detection of some libraries on my host system. All of them are listed at https://github.com/ivan-hc/Bottles-appimage/blob/main/AppRun

NOTE: my host system is Debian, so paths may vary depending on your system.

Detect whether the host runs an AMD / Intel / Nvidia driver to determine the "Vendor"

# FIND THE VENDOR
VENDOR=$(glxinfo -B | grep "OpenGL vendor")
if [[ $VENDOR == *"Intel"* ]]; then
    export VK_ICD_FILENAMES="/usr/share/vulkan/icd.d/intel_icd.i686.json:/usr/share/vulkan/icd.d/intel_icd.x86_64.json"
    VENDORLIB="intel"
    export MESA_LOADER_DRIVER_OVERRIDE=$VENDORLIB
elif [[ $VENDOR == *"NVIDIA"* ]]; then
    # join the newline-separated find results with colons (sed alone would not do this)
    NVIDIAJSON=$(find /usr/share -name "*nvidia*json" | tr '\n' ':' | sed 's/:$//')
    export VK_ICD_FILENAMES=$NVIDIAJSON
    VENDORLIB="nvidia"
    export MESA_LOADER_DRIVER_OVERRIDE=$VENDORLIB
elif [[ $VENDOR == *"Radeon"* ]]; then
    export VK_ICD_FILENAMES="/usr/share/vulkan/icd.d/radeon_icd.i686.json:/usr/share/vulkan/icd.d/radeon_icd.x86_64.json"
    VENDORLIB="radeon"
    export MESA_LOADER_DRIVER_OVERRIDE=$VENDORLIB
fi
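The string matching above can be factored into a small helper and checked without a GPU or glxinfo installed. A minimal sketch (the `_vendor_to_driver` name is mine, for illustration, not part of JuNest or the AppRun script):

```shell
#!/bin/sh
# Hypothetical helper: map a glxinfo "OpenGL vendor" line to a Mesa driver
# name. Pure string matching, so it can be tested without a GPU.
_vendor_to_driver() {
    case "$1" in
        *Intel*)        echo "intel" ;;
        *NVIDIA*)       echo "nvidia" ;;
        *Radeon*|*AMD*) echo "radeon" ;;
        *)              echo "unknown" ;;
    esac
}

# On a real system (assuming glxinfo from mesa-utils is installed):
#   VENDOR=$(glxinfo -B | grep "OpenGL vendor")
#   export MESA_LOADER_DRIVER_OVERRIDE=$(_vendor_to_driver "$VENDOR")
_vendor_to_driver "OpenGL vendor string: Intel"   # prints "intel"
```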

Find libraries on the host

DRIPATH=$(find /usr/lib -name dri)
VDPAUPATH=$(find /usr/lib -maxdepth 2 -name vdpau)
export LIBVA_DRIVERS_PATH=$DRIPATH
export GLPATH=/lib:/lib64:/lib/x86_64-linux-gnu:/usr/lib
export VULKAN_DEVICE_INDEX=1
export __GLX_VENDOR_LIBRARY_NAME=mesa

function _host_accelleration(){
    # Print every host library path that may matter for hardware acceleration.
    # One find per pattern is enough; the original nested loops were redundant.
    find /usr/lib -name "*LLVM*"
    find /usr/lib -name "*mesa*.so*"
    find /usr/lib -name "*d3d*.so*"
    find /usr/lib -name "libEGL*" | grep -v "libEGL_mesa"
    find /usr/lib -name "*stdc*.so*"
    find /usr/lib -name "*swrast*"
    find /usr/lib -name "*vulkan*"
}

In the following step I'll list all the libraries found above in a file under ~/.cache (this step may be slow on some systems)

What to bind?

# note: "uniq | sort -u" in the original was redundant (uniq only removes
# adjacent duplicates); "sort -u" alone deduplicates the whole list
ACCELL_DRIVERS=$(_host_accelleration | sort -u | tr '\n' ':' | sed 's/:$//')
BINDLIBS=$(sort -u $HOME/.cache/hostdri2junest | tr '\n' ':' | sed 's/:$//')

rm -f "$HOME/.cache/libbinds" "$HOME/.cache/libbindbinds"
echo "$ACCELL_DRIVERS" | tr ":" "\n" >> "$HOME/.cache/libbinds"
echo "$BINDLIBS" | tr ":" "\n" >> "$HOME/.cache/libbinds"
# for each host library, derive the guest-side directory by stripping the
# Debian multiarch component and keeping the first two path components
while read -r lib; do
    [ -n "$lib" ] || continue
    echo "$lib $(echo "$lib" | sed 's#/x86_64-linux-gnu##g' | cut -d/ -f1,2,3)" >> "$HOME/.cache/libbindbinds"
done < "$HOME/.cache/libbinds"
sed -i -e 's#^#--bind / / --bind #' "$HOME/.cache/libbindbinds"

BINDS=$(cat $HOME/.cache/libbinds | tr "\n" " ")
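The path mangling in that loop (strip the multiarch dir, keep the first two components) can be isolated and verified on its own. A sketch, where `_guest_path` is a hypothetical name of my own:

```shell
#!/bin/sh
# Hypothetical helper mirroring the sed/cut pipeline above: derive the
# guest-side directory for a host library path by dropping the Debian
# multiarch component and keeping only the leading "/usr/lib" (or similar).
_guest_path() {
    echo "$1" | sed 's#/x86_64-linux-gnu##g' | cut -d/ -f1,2,3
}

_guest_path /usr/lib/x86_64-linux-gnu/libEGL.so.1   # prints "/usr/lib"
```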

EXTRA: trying to mount libLLVM host/guest (I've disabled this for now)

HOST_LIBLLVM=$(find /usr/lib -name "*libLLVM*" | grep -v ".so.")
JUNEST_LIBLLVM=$(find $JUNEST_HOME/usr/lib -name "*libLLVM*" | grep -v ".so.")

All I've done then was recreate the structure of directories to bind in the AppImage (in my use case); for those who use JuNest normally, the directories should be mounted automatically, like this.

Where $HERE is the current directory I'm using

HERE="$(dirname "$(readlink -f "$0")")"

and $EXEC is the name of the program, taken from the "Exec=" entry of its .desktop file

EXEC=$(grep -e '^Exec=.*' "${HERE}"/*.desktop | head -n 1 | cut -d "=" -f 2- | sed -e 's|%.||g')
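That Exec= pipeline can be wrapped in a function and exercised against a sample line (the `_parse_exec` name is mine, for illustration). Note the pipeline leaves trailing whitespace where a %-placeholder was removed:

```shell
#!/bin/sh
# Hypothetical wrapper around the Exec= extraction above: strip the key,
# then remove %-placeholders such as %U or %f.
_parse_exec() {
    echo "$1" | grep -e '^Exec=' | cut -d "=" -f 2- | sed -e 's|%.||g'
}

_parse_exec 'Exec=bottles %U'
```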

here is how the command with namespaces looks (in my experiment):

function _exec(){
    if [[ $VENDOR == *"NVIDIA"* ]]; then
        $HERE/.local/share/junest/bin/junest -n -b "$BINDS\
            --bind /usr/lib/ConsoleKit $JUNEST_HOME/usr/lib/ConsoleKit\
            --bind $DRIPATH $JUNEST_HOME/usr/lib/dri\
            --bind /usr/libexec $JUNEST_HOME/usr/libexec\
            --bind /usr/lib/firmware $JUNEST_HOME/usr/lib/firmware\
            --bind /usr/lib/modules $JUNEST_HOME/usr/lib/modules\
            --bind /usr/lib/nvidia $JUNEST_HOME/usr/lib/nvidia\
            --bind /usr/lib/systemd $JUNEST_HOME/usr/lib/systemd\
            --bind /usr/lib/udev $JUNEST_HOME/usr/lib/udev\
            --bind $VDPAUPATH $JUNEST_HOME/usr/lib/vdpau\
            --bind /usr/lib/xorg $JUNEST_HOME/usr/lib/xorg\
            --bind /usr/share/bug $JUNEST_HOME/usr/share/bug\
            --bind /usr/share/dbus-1 $JUNEST_HOME/usr/share/dbus-1\
            --bind /usr/share/doc $JUNEST_HOME/usr/share/doc\
            --bind /usr/share/egl $JUNEST_HOME/usr/share/egl\
            --bind /usr/share/glvnd $JUNEST_HOME/usr/share/glvnd\
            --bind /usr/share/lightdm $JUNEST_HOME/usr/share/lightdm\
            --bind /usr/share/lintian $JUNEST_HOME/usr/share/lintian\
            --bind /usr/share/man $JUNEST_HOME/usr/share/man\
            --bind /usr/share/nvidia $JUNEST_HOME/usr/share/nvidia\
            --bind /usr/share/vulkan $JUNEST_HOME/usr/share/vulkan\
            --bind /usr/src $JUNEST_HOME/usr/src\
            " -- $EXEC "$@"
    else
        $HERE/.local/share/junest/bin/junest -n -b "\
            --bind $DRIPATH $JUNEST_HOME/usr/lib/dri\
            --bind /usr/libexec $JUNEST_HOME/usr/libexec\
            --bind /usr/lib/modules $JUNEST_HOME/usr/lib/modules\
            --bind /usr/lib/xorg $JUNEST_HOME/usr/lib/xorg\
            --bind /usr/share/dbus-1 $JUNEST_HOME/usr/share/dbus-1\
            --bind /usr/share/glvnd $JUNEST_HOME/usr/share/glvnd\
            --bind /usr/share/vulkan $JUNEST_HOME/usr/share/vulkan\
            --bind /usr/src $JUNEST_HOME/usr/src\
            " -- $EXEC "$@"
    fi
}
_exec

For now, the result is that many of the error messages I had previously are gone.

I have NOT reached my goal, but I'm close to a solution.

Are there any pieces missing, or have I perhaps added too many in my attempt?

I can't judge it myself; surely some of you can do better.

This research would not have been possible without help:

A special thanks to @mirkobrombin, who pointed me down the right path... I'm trying to finish this journey. I hope not alone.

ivan-hc commented 7 months ago

Update:

NOTE: I started the program "Bottles" both with and without all the options listed in my first comment.

I also created an empty file and a directory in /dev and mounted them like this:

...
--bind /dev/dri $JUNEST_HOME/dev/dri\
--bind $DEV_NVIDIA $JUNEST_HOME/dev/nvidia\
...

where

DEV_NVIDIA=$(find /dev -name "nvidia*[0-9]*" 2> /dev/null | head -1)

and

HERE="$(dirname "$(readlink -f $0)")"
JUNEST_HOME=$HERE/.junest

All details of my script are available at https://github.com/ivan-hc/Bottles-appimage/blob/main/AppRun

Of course, no 64-bit game was able to run... but now that JuNest can see the Nvidia loader, I have hope.

ivan-hc commented 7 months ago

Is there a way to export the content of a directory to the guest? For example...

export VDPAU_LIBRARY_PATH=/usr/lib/vdpau

or something?
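For VDPAU specifically, libvdpau does read environment variables at load time; to the best of my knowledge these are `VDPAU_DRIVER_PATH` (the directory searched for `libvdpau_*.so`) and `VDPAU_DRIVER` (the driver name). A sketch, not verified inside JuNest:

```shell
# Assumption: libvdpau honors these variables on the host; whether they
# carry over into the JuNest guest still needs testing. Paths are examples.
export VDPAU_DRIVER_PATH=/usr/lib/x86_64-linux-gnu/vdpau   # where libvdpau_*.so live
export VDPAU_DRIVER=nvidia                                 # selects libvdpau_nvidia.so
```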

ivan-hc commented 7 months ago

UPDATE

# FIND THE VENDOR
VENDOR=$(glxinfo -B | grep "OpenGL vendor")
if echo "$VENDOR" | grep -q "Intel"; then
    export VK_ICD_FILENAMES="/usr/share/vulkan/icd.d/intel_icd.i686.json:/usr/share/vulkan/icd.d/intel_icd.x86_64.json"
    VENDORLIB="intel"
    export MESA_LOADER_DRIVER_OVERRIDE=$VENDORLIB
elif echo "$VENDOR" | grep -q "NVIDIA"; then
    export VK_ICD_FILENAMES=$(find /usr/share -name "*nvidia*json" | tr "\n" ":" | rev | cut -c 2- | rev)
    VENDORLIB="nvidia"
    export MESA_LOADER_DRIVER_OVERRIDE=$VENDORLIB
elif echo "$VENDOR" | grep -q "Radeon"; then
    export VK_ICD_FILENAMES="/usr/share/vulkan/icd.d/radeon_icd.i686.json:/usr/share/vulkan/icd.d/radeon_icd.x86_64.json"
    VENDORLIB="radeon"
    export MESA_LOADER_DRIVER_OVERRIDE=$VENDORLIB
fi

and

EXEC=$(grep -e '^Exec=.*' "${HERE}"/*.desktop | head -n 1 | cut -d "=" -f 2- | sed -e 's|%.||g')

if echo "$VENDOR" | grep -q "NVIDIA"; then
    echo "NVIDIA"
    $HERE/.local/share/junest/bin/junest -n -b "$ETC_RESOLV\
        --bind $(find /usr/lib -name "libEGL.so*" -type f) $(find $JUNEST_HOME/usr/lib -name "libEGL.so*" -type f)\
        --bind $(find /usr/lib -name "libGLESv2*" -type f) $(find $JUNEST_HOME/usr/lib -name "libGLESv2*" -type f)\
        --bind $(find /usr/lib -name "*libEGL_mesa*.so*" -type f) $(find $JUNEST_HOME/usr/lib -name "*libEGL_mesa*.so*" -type f)\
        --bind $(find /usr/lib -name "*libGLX_mesa*.so*" -type f) $(find $JUNEST_HOME/usr/lib -name "*libGLX_mesa*.so*" -type f)\
        --bind $(find /usr/lib -name "*zink*_dri.so*" -type f) $(find $JUNEST_HOME/usr/lib/dri -name "*zink*_dri.so*" -type f)\
        --bind $(find /usr/lib -maxdepth 2 -name vdpau) $(find $JUNEST_HOME/usr/lib -maxdepth 2 -name vdpau)\
        --bind $(find /usr/lib -name "*nvidia*drv.so*" -type f) /usr/lib/dri/nvidia_dri.so\
        --bind $(find /usr/lib -name "*libvdpau_nvidia.so*" -type f) /usr/lib/libvdpau_nvidia.so\
        " -- $EXEC "$@"
else
    $HERE/.local/share/junest/bin/junest -n -b "$ETC_RESOLV\
        --bind $(find /usr/lib -name "libEGL.so*" -type f) $(find $JUNEST_HOME/usr/lib -name "libEGL.so*" -type f)\
        --bind $(find /usr/lib -name "libGLESv2*" -type f) $(find $JUNEST_HOME/usr/lib -name "libGLESv2*" -type f)\
        --bind $(find /usr/lib -name "*libEGL_mesa*.so*" -type f) $(find $JUNEST_HOME/usr/lib -name "*libEGL_mesa*.so*" -type f)\
        --bind $(find /usr/lib -name "*libGLX_mesa*.so*" -type f) $(find $JUNEST_HOME/usr/lib -name "*libGLX_mesa*.so*" -type f)\
        --bind $(find /usr/lib -maxdepth 2 -name vdpau) $(find $JUNEST_HOME/usr/lib -maxdepth 2 -name vdpau)\
        " -- $EXEC "$@"
fi

The error message has changed. Where before I had a missing zink_dri.so (now I've mounted it, see above), I now get an error about a missing intel_dri.so, but mounting it does not change the fact that hardware acceleration is off.

ivan-hc commented 6 months ago

@fsquillace I think I'm close to a solution.

We should perform some tests by exporting two environment variables:

I've done a brief test, but I was stuck because I don't know which environment variable points to the LD configuration file in /etc... it would be helpful if you didn't leave me to test all this alone.

EDIT: it's enough to add these environment variables wherever appropriate in JuNest itself; we don't need an AppRun script like the one I use in my tests. This should be a built-in feature of JuNest, not an external one.

@fiftydinar join the issue
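On the LD configuration question: as far as I know, the glibc loader has no environment variable pointing at ld.so.conf itself; the closest runtime knobs are LD_LIBRARY_PATH (extra search directories) and LD_DEBUG (to trace which libraries actually get resolved). A sketch of what I'd try, with example paths only:

```shell
# Assumption: Debian-style multiarch path on the host; adjust as needed.
export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"

# Trace library resolution to see whether the host GL libraries are picked up:
#   LD_DEBUG=libs glxinfo -B 2> ld-debug.log
```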

ivan-hc commented 5 months ago

It seems I'm the only one here who is interested in this topic. I am sorry.

Since my interest in implementing hardware acceleration in JuNest is primarily driven by my need to get AppImage packages working and exportable to various distributions, I think I will continue to address the problem on my "ArchImage" repository.

Those interested in contributing, please join and discuss it at https://github.com/ivan-hc/ArchImage/issues/20

@fsquillace I still can't stop thanking you for the amazing job you've done so far. I hope you will soon return to your project, which stands up well against any alternative.

fsquillace commented 5 months ago

@ivan-hc thanks for the kind words. Unfortunately I have little time to invest in this specific feature. As of now, I am mostly maintaining JuNest for bug fixes and small improvements, as I do not have time for larger features like this one. Having said that, I'd like to keep JuNest as simple as possible, so I am not yet sure that hardware acceleration support is strictly needed as part of the junest project, since it is something anyone can build on top of it. I am trying to preserve simplicity over maintenance of large features, following the Arch way. This is also because having a clearer responsibility makes things easier to maintain: the more code we add to such a little project, the harder its maintenance will be.

ivan-hc commented 5 months ago

@ivan-hc thanks for the kind words

@fsquillace I'm only telling the truth :)

I'd like to keep JuNest as simple as possible

I totally agree with this. Maybe my research can be useful for adding something to the README file, as a troubleshooting section.

Someone who wants to use hardware acceleration without reinstalling all the drivers in JuNest could then simply install a few packages and export an environment variable.

As soon as I find out something, I will contact you to have this detail added to your README. Your project is dear to my heart: it's thanks to you that I was able to turn the impossible into AppImages.

fsquillace commented 5 months ago

Maybe my research can be useful for adding something to the README file, as a troubleshooting section.

Yeah, maybe the junest wiki could be a good place.

btw, maybe something useful for you is that someone else wrote a blog about JuNest. Step 3 shows how to configure GPU drivers: https://medium.com/@ayaka_45434/installing-packages-on-linux-without-sudo-privilege-using-junest-5fe7523c9d86

ivan-hc commented 5 months ago

Yeah, maybe the junest wiki could be a good place.

btw, maybe something useful for you is that someone else wrote a blog about JuNest. Step 3 shows how to configure GPU drivers: https://medium.com/@ayaka_45434/installing-packages-on-linux-without-sudo-privilege-using-junest-5fe7523c9d86

I was aware of this solution (but not of the blog); I read it in one of the issues. I was actually looking for a more portable solution, like the approach Distrobox takes, but without depending on Podman/Docker.

I've actually found that by installing libselinux in JuNest and exporting the local libraries to LD_LIBRARY_PATH, JuNest recognizes the existence of the libraries outside of the container, so I get fewer error messages related to (for example) GPU identification.

What is missing, however, is allowing the application to exploit this hardware acceleration.

NOTE: libselinux is essential in this process: if we export LD_LIBRARY_PATH without it, JuNest will not detect its internal libraries (bubblewrap errors about libraries that, as far as JuNest is concerned, do not exist) and the app will not launch.

I don't know whether I should run "--bind" on a specific host directory or just "export" some environment variable. I only know that we are close to solving this problem.

Please see https://github.com/ivan-hc/ArchImage/issues/20