niess / python-appimage

AppImage distributions of Python
https://python-appimage.readthedocs.io/en/latest/
GNU General Public License v3.0

Cannot find Python.h #27

Closed JocelynDelalande closed 3 years ago

JocelynDelalande commented 3 years ago

I am trying to build an AppImage with python-appimage; the final goal is to produce an autonomous AppImage of a Python app.

Things go wrong when gcc has to compile a C extension: it cannot find Python.h:

  gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -DNETIFACES_VERSION=0.10.9 -DHAVE_GETIFADDRS=1 -DHAVE_GETNAMEINFO=1 -DHAVE_NETASH_ASH_H=1 -DHAVE_NETATALK_AT_H=1 -DHAVE_NETAX25_AX25_H=1 -DHAVE_NETECONET_EC_H=1 -DHAVE_NETIPX_IPX_H=1 -DHAVE_NETPACKET_PACKET_H=1 -DHAVE_LINUX_IRDA_H=1 -DHAVE_LINUX_ATM_H=1 -DHAVE_LINUX_LLC_H=1 -DHAVE_LINUX_TIPC_H=1 -DHAVE_LINUX_DN_H=1 -DHAVE_SOCKADDR_AT=1 -DHAVE_SOCKADDR_AX25=1 -DHAVE_SOCKADDR_IN=1 -DHAVE_SOCKADDR_IN6=1 -DHAVE_SOCKADDR_IPX=1 -DHAVE_SOCKADDR_UN=1 -DHAVE_SOCKADDR_ASH=1 -DHAVE_SOCKADDR_EC=1 -DHAVE_SOCKADDR_LL=1 -DHAVE_SOCKADDR_ATMPVC=1 -DHAVE_SOCKADDR_ATMSVC=1 -DHAVE_SOCKADDR_DN=1 -DHAVE_SOCKADDR_IRDA=1 -DHAVE_SOCKADDR_LLC=1 -DHAVE_PF_NETLINK=1 -I/tmp/appimage-build-lukpB4/AppDir/opt/python3.7/include/python3.7m -c netifaces.c -o build/temp.linux-x86_64-3.7/netifaces.o

  netifaces.c:1:20: fatal error: Python.h: No such file or directory

  compilation terminated.

  error: command 'gcc' failed with exit status 1

Complete log: https://travis-ci.com/github/libreosteo/Libreosteo/jobs/386916555#L3369-L3372

What caught my attention is the -I/tmp/appimage-build-lukpB4/AppDir/opt/python3.7/include/python3.7m, which seems to point to the wrong directory. The right path would be the same without the final "m".

I was able to work around this with a symlink /tmp/appimage-build-lukpB4/AppDir/opt/python3.7/include/python3.7m -> /tmp/appimage-build-lukpB4/AppDir/opt/python3.7/include/python3.7, but that remains a hack.
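For diagnosis, here is a small sketch (not from the original thread) that asks the bundled interpreter which include directory it reports, and whether Python.h actually exists there; run it with the AppImage's python3:

```python
import os
import sysconfig

# Directory that setuptools/pip pass to gcc via -I when building C extensions.
include_dir = sysconfig.get_paths()["include"]
print(include_dir)

# False here means extension builds will fail with "Python.h: No such file or directory".
print(os.path.isfile(os.path.join(include_dir, "Python.h")))
```

With the buggy AppImages this should reveal the mismatch between the reported ...include/python3.7m directory and the actual ...include/python3.7 one.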

My build script (failing) is:

#!/bin/bash

set -x
set -e

# requirements
#   yarn
#   rsync
#   git

# building in temporary directory to keep system clean
# use RAM disk if possible (as in: not building on CI system like Travis, and RAM disk is available)
if [ "$CI" == "" ] && [ -d /dev/shm ]; then
    TEMP_BASE=/dev/shm
else
    TEMP_BASE=/tmp
fi

PYTHON_APPIMAGE=python3.7.9-cp37-cp37m-manylinux1_x86_64.AppImage
PYTHON_APPIMAGE_URL=https://github.com/niess/python-appimage/releases/download/python3.7/${PYTHON_APPIMAGE}

export VERSION=develop-$(git rev-parse --short HEAD)
BUILD_DIR=$(mktemp -d -p "$TEMP_BASE" appimage-build-XXXXXX)
APP_DIR="$BUILD_DIR/AppDir"
mkdir "$APP_DIR"

# make sure to clean up build dir, even if errors occur
cleanup () {
    if [ -d "$BUILD_DIR" ]; then
        rm -rf "$BUILD_DIR"
    fi
}
trap cleanup EXIT

# Store repo root as variable
REPO_ROOT=$(dirname "$(dirname "$(dirname "$(realpath "$0")")")")
OLD_CWD=$(readlink -f .)

# switch to build dir
pushd "$BUILD_DIR"

# Fetch a python relocatable installation
wget -c ${PYTHON_APPIMAGE_URL}
chmod +x ${PYTHON_APPIMAGE}
./${PYTHON_APPIMAGE} --appimage-extract

mv squashfs-root/usr "$APP_DIR/usr"
mv squashfs-root/opt "$APP_DIR/opt"
rm -rf squashfs-root/

# Pack required source code into AppDir
# Avoid using .git and other unrequired stuff
rsync -av "$REPO_ROOT/" "$APP_DIR/src" \
      --exclude '.git/' \
      --exclude-from="$REPO_ROOT/.gitignore"

# Install requirements (JS and Python)
pushd "$APP_DIR"
ls ./opt/python3.7/include/python3.7

./usr/bin/python3 -m pip install -r src/requirements/requirements.txt
yarn --cwd "$REPO_ROOT"
./usr/bin/python3 src/manage.py collectstatic --no-input

mkdir -p usr/share/metainfo
mv src/pkg/libreosteo.metainfo.xml usr/share/metainfo/

popd

# Get commit version from repository
pushd "$REPO_ROOT"
export VERSION=develop-$(git rev-parse --short HEAD)
popd

# Now, build AppImage using linuxdeploy
export ARCH=x86_64
wget -c https://github.com/linuxdeploy/linuxdeploy/releases/download/continuous/linuxdeploy-x86_64.AppImage
chmod +x linuxdeploy*.AppImage
./linuxdeploy-x86_64.AppImage \
  --appdir "$APP_DIR" \
  --icon-file "$APP_DIR/src/libreosteoweb/static/images/libreosteo.png" \
  --desktop-file "$APP_DIR/src/pkg/libreosteo.desktop" \
  --custom-apprun "$APP_DIR/src/pkg/appimage/AppRun" \
  --output appimage

# move built AppImage back into original CWD
mv LibreOsteo*.AppImage "$OLD_CWD/"
echo 'Hello LibreOsteo*.Appimage !'

srevinsaju commented 3 years ago

python-appimage does not provide Python development headers, and I do not think it needs to for normal Python programs. That's what I do in pyappimage; see the Continuous Integration:

https://github.com/srevinsaju/pyappimage/blob/3515bfcc758cbec89415030bbca14ef59ec3c6f0/.github/workflows/continuous.yml#L42-L50

Adding libpython* would make the Python*.AppImage even bigger, I guess.

niess commented 3 years ago

Hello @JocelynDelalande,

Thank you for reporting this. Indeed, the include path name was wrongly modified in some cases when it was copied from the Docker container. I just pushed a patch where this should be solved. Please note that GitHub CI has not yet finished building the new AppImages at the time I am writing this.

As @srevinsaju said python-appimage was not intended / tested for your use case. It provides a relocatable copy of the manylinux Python installs and from there it is easy to pip install binary distributed packages (wheels).

Did you succeed in compiling your packages after your hack? And did they run properly? I am concerned about two points:

  1. The AppImage does not package the Python library, only the runtime. So I'm puzzled about the linking stage. Yet the manylinux installs don't have any libs either, so maybe the libs are actually linked/loaded from the Python runtime?

  2. Even if compiling your package succeeds, doing this on your host means the result will likely depend on your host's GLIBC version. You can check this e.g. with objdump -p my_compiled_package.so. For example, the manylinux1 Python runtime requires GLIBC_2.4 or higher. If your host uses a higher GLIBC than that, you might reduce the portability of your app. A simple workaround would be to build your package/app with the manylinux Docker image. Then you could also push a binary wheel of the built package to PyPI.
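To make the objdump check concrete, here is a sketch (my_compiled_package.so is the placeholder name used above; both commands assume binutils is installed):

```shell
# Show the dynamic-section summary, including the required GLIBC symbol versions
# (this is the "objdump -p" check mentioned above).
objdump -p my_compiled_package.so | grep GLIBC

# Or extract just the highest GLIBC version the binary needs:
objdump -T my_compiled_package.so | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n1
```

If the highest version printed is above GLIBC_2.4, the extension may not run on systems targeted by manylinux1.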

P.S. @srevinsaju Currently the development headers are included in the AppImage. They take 1.1M uncompressed, i.e. rather small w.r.t. the extracted AppImage (57M for Python 3.9). Therefore I decided to keep them, since there might be use cases, e.g. when using the cffi package.

srevinsaju commented 3 years ago

I agree. I noticed some *.h files, and as you said, their size is quite small when compressed. But libpython*m*.so takes considerable space if included. That's what I was worried about.

Not all users use libpython, and some users do not thin the AppImage, so that extra size might matter for a small app, like a Hello World app.

I am not sure whether the header files can be detected from the AppImage; I have not tried to build a C app using Python headers from python-appimage. Will opt/python3.8/include/ automatically be detected? cc @niess

JocelynDelalande commented 3 years ago

Thanks to both of you!

As @srevinsaju said python-appimage was not intended / tested for your use case. It provides a relocatable copy of the manylinux Python installs and from there it is easy to pip install binary distributed packages (wheels).

Just to make sure: the non-feature discussed here is the ability to pip install things that require compilation (with gcc and so on), right?

I just pushed a patch where this should be solved. Please note that GitHub CI has not yet finished building the new AppImages at the time I am writing this.

I will try it. Where will the Python AppImages be pushed, then?

srevinsaju commented 3 years ago

I will try it. Where will the Python AppImages be pushed, then?

To the releases :D

srevinsaju commented 3 years ago

@niess, it's quite weird that the builds have not completed yet; I mean, some builds have not even started. Is that normal?

niess commented 3 years ago

@JocelynDelalande @srevinsaju The builds are done now. I don't know why it was delayed that much. Maybe the system was saturated?

@JocelynDelalande You will find the new AppImages in the releases area, as previously.

Concerning the pip install of packages that need compilation: I just tried it using a Python 3.9 AppImage, for which I have no libs on my system, and it seems to work. For this test I used the ercs package, which requires libgsl-dev. So I would say that, for simple packages with no extra binary deps outside of the Python runtime, bundling compiled Python packages in the AppImage with pip install is likely OK.

However, if your compiled package links to external libs beyond the ones used by the Python runtime, you will run into extra trouble. You might want to package these libs as well in your app, in order to make it 100% standalone (zero install). But then you need to fetch versions of these libs with high enough binary compatibility (i.e. built against a low enough version of GLIBC). This can be done by using a manylinux Docker image for the build. Then you also need to set/modify the RPATH of your compiled Python package in order to locate the extra libs inside the AppImage. Note that python-appimage does not automate this process. If you feel confident enough, modifying the RPATH can be done manually using patchelf, which is bundled with python-appimage. Tools like auditwheel can also automate this process.
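As an illustration of the manual patchelf route, a sketch (the extension path and relative lib directory are hypothetical; the single quotes matter, so the shell does not expand $ORIGIN itself):

```shell
# $ORIGIN expands at load time to the directory containing the .so itself,
# so the RPATH stays valid wherever the AppImage is mounted.
# The path below is illustrative -- adjust it to your package layout.
EXT=AppDir/opt/python3.7/lib/python3.7/site-packages/mypkg/_ext.cpython-37m-x86_64-linux-gnu.so
patchelf --set-rpath '$ORIGIN/../../../../lib' "$EXT"
patchelf --print-rpath "$EXT"
```

The bundled libs would then live under AppDir/opt/python3.7/lib/ in this hypothetical layout.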

Note that once you have done the previous steps, you have actually gathered all the pieces needed for building a binary wheel of the Python package that you compiled. It could then be worth distributing it on PyPI as a wheel, so other people can directly pip install the binaries.

JocelynDelalande commented 3 years ago

@niess Thanks a lot for your guidance.

However, if your compiled package links to external libs beyond the ones used by the Python runtime, you will run into extra trouble. You might want to package these libs as well in your app, in order to make it 100% standalone (zero install). But then you need to fetch versions of these libs with high enough binary compatibility (i.e. built against a low enough version of GLIBC). This can be done by using a manylinux Docker image for the build. Then you also need to set/modify the RPATH of your compiled Python package in order to locate the extra libs inside the AppImage. Note that python-appimage does not automate this process. If you feel confident enough, modifying the RPATH can be done manually using patchelf, which is bundled with python-appimage. Tools like auditwheel can also automate this process.

Do you have in mind any example of a project following this path (maybe using a CI, so that I can read a script)?

niess commented 3 years ago

@JocelynDelalande I don't have an example with exactly your use case. However, you could have a look at a bash script that I am using for building a binary wheel of a Python package linking to external libs (e.g. libpng) and custom C code. The problem is similar. It runs on manylinux1 with Docker and uses auditwheel (L40) in order to automatically package missing deps and patch the binaries' RPATH inside the wheel (using patchelf under the hood).

This script is executed on GitHub's CI e.g. as here. The patched wheel is then uploaded to PyPI (L68).
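For completeness, a rough sketch of the auditwheel flow inside a manylinux container (the package and wheel names are placeholders):

```shell
# Build a plain linux_x86_64 wheel, then let auditwheel bundle the external
# shared libs into it and rewrite their RPATHs (it uses patchelf under the hood).
pip install auditwheel
pip wheel . -w dist/
auditwheel repair dist/mypkg-1.0-cp37-cp37m-linux_x86_64.whl -w wheelhouse/
```

The repaired manylinux wheel in wheelhouse/ is the one to upload to PyPI.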