Closed: JocelynDelalande closed this issue 3 years ago.
python-appimage does not provide Python development headers, and I do not think it needs to for normal Python programs. That's what I do in pyappimage; see its Continuous Integration. Adding libpython* would make the Python*.AppImage even bigger, I guess.
Hello @JocelynDelalande,
Thank you for reporting this. Indeed, the include path name was wrongly modified in some cases when it was copied from the Docker container. I just pushed a patch where this should be solved. Please note that the GitHub CI has not yet finished building the new AppImages at the time I am writing this.
As @srevinsaju said, python-appimage was not intended / tested for your use case. It provides a relocatable copy of the manylinux Python installs, and from there it is easy to pip install binary-distributed packages (wheels).
Did you succeed in compiling your packages after your hack? And did they run properly? I am concerned about two points:
The AppImage does not package the Python library, only the runtime. So I am puzzled about the linking stage. Yet the manylinux installs also don't have any lib, so maybe the libs are actually linked/loaded from the Python runtime?
Even if your package compiles, doing this on your host means it will likely depend on your host's GLIBC version. You can check this e.g. with objdump -p my_compiled_package.so. For example, the manylinux1 Python runtime requires GLIBC_2.4 or higher. If your host uses a higher GLIBC than that, you might reduce the portability of your app. A simple workaround would be to actually build your package/app with the manylinux Docker image. Then you could as well push a binary wheel of the built package to PyPI.
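To illustrate the objdump check above, here is a small sketch. The helper names and the parsing approach are mine, not part of python-appimage; it simply collects the GLIBC_x.y version references that objdump -p prints for a compiled extension:

```python
import re
import subprocess

def required_glibc_versions(objdump_output):
    """Collect the GLIBC_x.y symbol versions listed in `objdump -p` output."""
    found = set(re.findall(r"GLIBC_(\d+(?:\.\d+)+)", objdump_output))
    # Sort numerically so that e.g. 2.4 orders before 2.17.
    return sorted(found, key=lambda v: tuple(int(x) for x in v.split(".")))

def check_binary(path):
    """Run objdump -p on a compiled module and report its GLIBC requirements."""
    out = subprocess.run(
        ["objdump", "-p", path], capture_output=True, text=True, check=True
    ).stdout
    return required_glibc_versions(out)
```

If the highest reported version exceeds the GLIBC of your oldest target system (e.g. 2.4 for manylinux1), the binary will not be portable to it.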
P.S. @srevinsaju Currently the development headers are included in the AppImage. It's 1.1M uncompressed, i.e. rather small w.r.t. the extracted AppImage (57M for Python 3.9). Therefore I decided to keep them, since there might be use cases, e.g. when using the cffi package.
I agree. I noticed some *.h files and, as you said, their size is quite small once compressed. But libpython*m*.so takes up considerable size if included. That's what I was worried about.
Not all users use libpython, and some users do not do thinning of the AppImage, so that extra size might count for a small app like a Hello World app.
I am not sure whether the header files can be detected from the AppImage. I have not tried to build a C app using Python headers from python-appimage. Will opt/python3.8/include/ automatically be detected? cc @niess
Thanks to you two!
As @srevinsaju said, python-appimage was not intended / tested for your use case. It provides a relocatable copy of the manylinux Python installs, and from there it is easy to pip install binary-distributed packages (wheels).
Just to make sure: the non-feature discussed here is the ability to pip install things requiring compilation (with gcc and so on), right?
I just pushed a patch where this should be solved. Please note that the GitHub CI has not yet finished building the new AppImages at the time I am writing this.
I will try it. Where will the Python AppImages be pushed then?
I will try it. Where will the Python AppImages be pushed then?
To the releases :D
@niess, it's quite weird that the builds have not completed yet; I mean, some builds have not yet started. Is that normal?
@JocelynDelalande @srevinsaju The builds are done now. I don't know why it was delayed that much. Maybe the system was saturated?
@JocelynDelalande You will find the new AppImages in the releases area, as previously.
Concerning the pip install of packages that need compilation: I just tried using a Python 3.9 AppImage, for which I have no libs on my system, and it seems to work. For this test I used the ercs package, which requires libgsl-dev. So I would say that, for simple packages with no extra binary deps outside of the Python runtime, bundling compiled Python packages in the AppImage with pip install is likely OK.
However, if your compiled package links to external libs outside of the ones used by the Python runtime, then you will run into extra trouble. You might want to package these libs as well in your app in order to make it 100% standalone (zero install). But then you need to fetch versions of these libs with high enough binary compatibility (i.e. using a low enough version of GLIBC). This can be done by using a manylinux Docker image for the build. Then you also need to set/modify the RPATH of your compiled Python package in order to locate the extra libs inside the AppImage. Note that python-appimage does not automate this process. If you feel confident enough, modifying the RPATH can be done manually using patchelf, which is bundled with python-appimage. Tools like auditwheel can also automate this process.
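As a rough sketch of the RPATH step (the paths, layout, and helper names here are hypothetical, not what python-appimage produces), one can compute an $ORIGIN-relative RPATH so the bundled libs are found wherever the AppImage is mounted, then apply it with patchelf:

```python
import os
import shlex

def origin_rpath(module_path, lib_dir):
    """Compute an $ORIGIN-relative RPATH from a compiled module to a lib
    directory; both paths are given relative to the AppDir root."""
    rel = os.path.relpath(lib_dir, os.path.dirname(module_path))
    return os.path.join("$ORIGIN", rel)

def patchelf_command(module_path, rpath):
    """Return the patchelf invocation (as a shell string) setting the RPATH."""
    return "patchelf --set-rpath %s %s" % (
        shlex.quote(rpath), shlex.quote(module_path))
```

Using $ORIGIN rather than an absolute path is what keeps the result relocatable: the dynamic loader resolves it relative to the module's own location at run time.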
Note that once you have done the previous steps, you have actually gathered all the pieces needed for building a binary wheel of the Python package that you compiled. Then it could be worthwhile to distribute it on PyPI as a wheel, so other people can directly pip install the binaries.
@niess Thanks a lot for your guidance.
However, if your compiled package links to external libs outside of the ones used by the Python runtime, then you will run into extra trouble. You might want to package these libs as well in your app in order to make it 100% standalone (zero install). But then you need to fetch versions of these libs with high enough binary compatibility (i.e. using a low enough version of GLIBC). This can be done by using a manylinux Docker image for the build. Then you also need to set/modify the RPATH of your compiled Python package in order to locate the extra libs inside the AppImage. Note that python-appimage does not automate this process. If you feel confident enough, modifying the RPATH can be done manually using patchelf, which is bundled with python-appimage. Tools like auditwheel can also automate this process.
Do you have in mind any example of a project following this approach (maybe using CI, so that I can read a script)?
@JocelynDelalande I don't have an example with exactly your use case. However, you could have a look at a bash script that I am using for building a binary wheel of a Python package linking to external libs (e.g. libpng) and custom C code. The problem is similar. It runs on manylinux1 with Docker and uses auditwheel (L40) in order to automatically package missing deps and patch the binaries' RPATH inside the wheel (using patchelf under the hood).
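The manylinux-plus-auditwheel flow can be sketched as a single Docker invocation. Everything below (image name, Python tag, mount paths) is illustrative and not taken from the script linked above:

```python
def manylinux_build_command(package_dir, python_tag="cp39-cp39",
                            image="quay.io/pypa/manylinux2014_x86_64"):
    """Build a wheel inside a manylinux container, then let auditwheel
    bundle external libs and fix the RPATHs. Returns the docker argv."""
    pybin = "/opt/python/%s/bin" % python_tag
    inner = ("%s/pip wheel /io -w /io/dist && "
             "auditwheel repair /io/dist/*.whl -w /io/wheelhouse") % pybin
    return ["docker", "run", "--rm", "-v", "%s:/io" % package_dir,
            image, "bash", "-c", inner]
```

The repaired wheel in wheelhouse/ then carries its external libs and patched RPATHs, which is what makes it suitable for PyPI distribution.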
This script is executed on GitHub's CI, e.g. here. The patched wheel is then uploaded to PyPI (L68).
I am trying to
The final goal is to produce an autonomous AppImage of a Python app.
Things break when gcc has to build something: it does not find Python.h:
Complete log: https://travis-ci.com/github/libreosteo/Libreosteo/jobs/386916555#L3369-L3372
What caught my attention is the -I/tmp/appimage-build-lukpB4/AppDir/opt/python3.7/include/python3.7m that seems to point to the wrong dir. The right path would be the same without the final "m". I was able to work around this using a symlink /tmp/appimage-build-lukpB4/AppDir/opt/python3.7/include/python3.7m -> /tmp/appimage-build-lukpB4/AppDir/opt/python3.7/include/python3.7
But that remains a hack.
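The symlink workaround can be scripted; here is a minimal sketch (the helper name is mine, and the ABI-suffix convention it assumes is the CPython <= 3.7 "m" flag):

```python
import os

def add_abi_include_symlink(include_dir, version="3.7", abiflags="m"):
    """Point include/pythonX.Ym at include/pythonX.Y so a compiler invoked
    with the ABI-suffixed -I path still finds Python.h."""
    src = os.path.join(include_dir, "python" + version)
    dst = src + abiflags
    if os.path.isdir(src) and not os.path.lexists(dst):
        os.symlink(src, dst)
    return dst
```

This only papers over the mis-named -I flag; the patch mentioned earlier in the thread fixes the include path itself.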
My build script (failing) is: