arjxn-py opened 7 months ago
I just released v0.125.0, which brings some changes to how wheels are named and a lot of other improvements. It might or might not break things here – I would suggest testing against the new changes on main here and rebasing when you are ready to come back to this.
Brief update: a preliminary Dockerfile has been set up. Parameterization of the Dockerfile for dynamic version installation remains, and there are some issues while activating the venv; I'm on it :)
Sure, could we skip the venv? I don't think we plan to add any external dependencies from PyPI, so we won't ever need the isolation that a virtual environment provides; the Docker container already provides that. Installing straight with the system pip should be alright.
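For reference, a minimal sketch of what that could look like (the base image, paths, and the plain install of the local checkout are my assumptions, not the actual Dockerfile in this PR):

```dockerfile
# Minimal sketch: install straight into the system environment, no venv.
# Base image and package layout are assumptions, not this repository's Dockerfile.
FROM python:3.12-slim

WORKDIR /app
COPY . .

# On PEP 668 "externally managed" bases (e.g. Debian's system Python),
# pip may additionally need --break-system-packages; the python:* images
# generally accept a plain install into the system site-packages.
RUN python -m pip install --no-cache-dir .
```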
Parameterization of the Dockerfile for dynamic version installation remains
I'm not sure I follow. Do you mean installing multiple Hugo versions? I think just one Hugo installation with one Python version is okay (for me, at least).
I'm not sure I follow. Do you mean installing multiple Hugo versions?
No no, by this I meant the versions of go and zig. I should have specified that above 😅
Fair enough. I don't have any release automation for bumping versions set up right now given how minimal the entire project is, so it will be okay to keep the Zig and Go tarballs hardcoded (as they currently are).
We can't really control the C compiler that we receive from the system package manager. However, I determine the Go toolchain version from the release notes for every Hugo release. That one is more important to keep in sync because of security concerns (sometimes CVEs get fixed), so I make sure to update it in all relevant files when bumping the Hugo version, which happens at the time of publishing a release.
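For illustration, pinned tarballs could look something like this in a Dockerfile; the base image, versions, and URLs below are placeholders I've made up, not the ones used in this repository, and the ARG defaults keep them hardcoded while still allowing a --build-arg override when bumping:

```dockerfile
# Illustrative sketch only: base image, versions, and URLs are placeholders,
# not the tarballs actually pinned in this repository.
FROM debian:bookworm-slim

# Hardcoded defaults, overridable at build time with --build-arg if needed
ARG GO_VERSION=1.22.5
ARG ZIG_VERSION=0.13.0

# xz is needed to unpack the Zig tarball
RUN apt-get update \
 && apt-get install -y --no-install-recommends xz-utils \
 && rm -rf /var/lib/apt/lists/*

# The builder downloads these on the host side; checksums could be added here too
ADD https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz /tmp/go.tar.gz
ADD https://ziglang.org/download/${ZIG_VERSION}/zig-linux-x86_64-${ZIG_VERSION}.tar.xz /tmp/zig.tar.xz

RUN tar -C /usr/local -xzf /tmp/go.tar.gz \
 && mkdir -p /usr/local/zig \
 && tar -C /usr/local/zig --strip-components=1 -xJf /tmp/zig.tar.xz \
 && rm /tmp/go.tar.gz /tmp/zig.tar.xz

ENV PATH="/usr/local/go/bin:/usr/local/zig:${PATH}"
```

With something like that, bumping the Go toolchain when Hugo's release notes call for it would be a one-line change to the ARG default.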
Through #81 I added some non-GitHub Linux aarch64/arm64 runners (the capacity is limited to around 800 minutes per month, as I am on $5 of free credit as a trial). Feel free to use them in a workflow or two here in your PR (they won't work on your fork); I would happily approve workflow runs for them. It will be much better than running QEMU and waiting 40 minutes for a build.
Thanks for this @agriyakhetarpal. I am in the process of defining one more workflow for the Docker images, so it will be nicer to use the aarch64/arm64 runners instead. I don't anticipate that capacity will be an issue at the moment, as it takes ~10 minutes to build the image for me locally. Plus, as you suggested, I've gotten rid of the venv for now and added hardcoded tarballs for go and zig.
Further context: I had previously tried MUSL wheels in PRs related to #91 and received a few segmentation faults. I didn't do a debug build and look at them in detail, though. Maybe we'll need to change BUILD_FOR_WINDOWS to BUILD_STATIC or something to be able to build a static library. If that doesn't work, we can publish a glibc-based image.
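A musl-based variant could be sketched roughly like this; note that BUILD_STATIC below is purely hypothetical (it is the rename floated above, not an existing flag), and the base image and apk packages are assumptions as well:

```dockerfile
# Rough sketch of a musl (Alpine) based build; everything here is assumed,
# and BUILD_STATIC in particular is a hypothetical flag, not an existing one.
FROM python:3.12-alpine

# Toolchain for building from source on musl
RUN apk add --no-cache build-base go

WORKDIR /src
COPY . .

# Hypothetical switch for producing a statically linked library
ENV BUILD_STATIC=1
RUN python -m pip install --no-cache-dir .
```

If the segmentation faults persist on musl, falling back to a glibc base (e.g. python:3.12-slim) would keep the rest of the Dockerfile largely the same.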
I think I will rely on https://github.com/gohugoio/hugo/issues/10760, which has now been closed through https://github.com/gohugoio/hugo/pull/12734 for the official Hugo releases, so I'm not sure whether we need this implementation or need to publish our own images. We can provide these for development purposes, of course, so I'll keep this PR open in case anyone ever needs this functionality. It's only useful for cross-compilation or for testing Linux-specific things locally on a non-Linux machine, and compilers are readily available on all platforms anyway (whether through conda-forge/miniconda, MinGW, Homebrew, etc.). I think that sort of thing could be nice to try in CI.
Still a work in progress. This PR aims to add a Dockerfile to containerize the package and publish Docker images of it.
Related to #74