Closed — grlee77 closed this 2 years ago
There are a lot of unrelated test failures related to coverage that I will have to take a look at.
The status of the wheel builds can be seen in my fork. Could use feedback on which manylinux version we should be using (many cases are still `manylinux1`).
These jobs are crazy slow:
Looks like they're all using QEMU instead of only the arm64 job - I hope that is not on purpose?
It seems that just running the tests on aarch64 is taking 80-90% of the time.
Example for x86_64:

```
Building cp39-manylinux_x86_64 wheel
CPython 3.9 manylinux x86_64
Setting up build environment...
✓ 0.11s
Building wheel...
✓ 54.20s
Repairing wheel...
✓ 1.49s
Testing wheel...
✓ 96.92s
```
but for aarch64:

```
Building cp39-manylinux_aarch64 wheel
CPython 3.9 manylinux aarch64
Setting up build environment...
✓ 1.09s
Building wheel...
✓ 644.47s
Repairing wheel...
✓ 10.83s
Testing wheel...
✓ 3253.64s
```
For some reason on Python 3.10 the aarch64 tests took even longer (2.5 hours!)
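For reference, QEMU emulation normally only needs to be registered for the non-native architectures; a hedged sketch of the usual GitHub Actions setup (the action versions and this exact step layout are assumptions, not the workflow in this PR):

```yaml
# Sketch: register QEMU so that only the emulated aarch64 builds use it;
# x86_64 and i686 run natively on the runner.
- name: Set up QEMU
  if: runner.os == 'Linux'
  uses: docker/setup-qemu-action@v2
  with:
    platforms: arm64

- name: Build wheels
  uses: pypa/cibuildwheel@v2.11.2
  env:
    CIBW_ARCHS_LINUX: "x86_64 i686 aarch64"
```

With this layout the slow QEMU path is confined to the `*-manylinux_aarch64` build identifiers, matching what the timings above show.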
Those 4 Linux jobs in the screenshot are divided among Python versions. But each version runs on three architectures (x86_64, i686 and aarch64). It is mainly the aarch64 one that is taking a long time.
Perhaps we can conditionally skip running the full test suite on aarch64 and just run a simple test script checking the package import and a few simple calls.
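One way to sketch that conditional skip is with cibuildwheel's per-identifier overrides (available in recent cibuildwheel versions; the `select` pattern, test commands, and the `tools/smoke_test.py` script are all hypothetical):

```toml
# Hypothetical sketch: full test suite on native architectures,
# quick smoke test on emulated aarch64 builds.
[tool.cibuildwheel]
test-command = "pytest {project}/tests"

[[tool.cibuildwheel.overrides]]
select = "*-manylinux_aarch64"
# smoke_test.py would import the package and exercise a few simple calls
test-command = "python {project}/tools/smoke_test.py"
```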
> each version runs on three architectures (x86_64, i686 and aarch64). It is mainly the aarch64 one that is taking a long time.
Ah that makes sense.
> Perhaps we can conditionally skip running the full test suite on aarch64 and just run a simple test script checking the package import and a few simple calls.
If it's just once per release, then it's not a problem. But if it runs more often, I agree we should skip it by default.
> If it's just once per release, then it's not a problem. But if it runs more often, I agree we should skip it by default.
Yeah, it is just for releases (or if we push a `buildwheel` tag to test things). Let's just leave it as-is. IIRC, the timeout is 5 or 6 hours, so we are still not too close to that.
This is probably ready to go unless you want me to modify which wheels get built on macOS. I think some other projects are shipping only the `universal2` wheels, so do we want to do the same?
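For context, the macOS wheel selection is a one-line cibuildwheel setting; a hedged sketch (the exact option location follows cibuildwheel's documented config, but this is not the configuration used in this PR):

```toml
# Sketch: build thin wheels for both macOS architectures plus universal2.
# Dropping "universal2" (or the thin entries) switches the shipping strategy.
[tool.cibuildwheel.macos]
archs = ["x86_64", "arm64", "universal2"]
```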
I removed 3.10 from this PR and will add it in a follow-up.
> This is probably ready to go unless you want me to modify which wheels get built on macOS. I think some other projects are shipping only the `universal2` wheels, so do we want to do the same?
I think thin wheels are good to have; they are smaller and therefore should be preferred over `universal2` in the long run. The choice of `universal2` in Python packaging was mostly out of convenience (fewer changes to make because the old universal support was already present), but from a technical perspective it doesn't make much sense to almost double the size of wheels for some convenience.
This PR is adapted from scikit-image/scikit-image#5397.
Apparently we had added `pyproject.toml` here long ago and then removed it due to some issues with `pip` at that time. This PR reintroduces it so that the correct NumPy version will be used during these builds (it is based on SciPy's, but without the Pythran or Pybind11 requirements).

This also adds automated deployment when pushing a version tag. If the automated deploy fails, we can still retrieve the wheels from the artifacts.