storpipfugl / pykdtree

Fast kd-tree implementation in Python
GNU Lesser General Public License v3.0

Add Github Action to build wheels #52

Closed bwoodsend closed 4 years ago

bwoodsend commented 4 years ago

CI workflow to build wheels for PyPI for Windows/macOS/manylinux, Python 3.5-3.8. Fixes #33.

Uses GitHub Actions, which, unlike AppVeyor, supports Docker (allowing manylinux builds) and doesn't require setting up an organisation the way Azure Pipelines does. Latest build is here.

Wheels are:

For some reason I couldn't get nose to find the tests on Windows and macOS, so I swapped nose for pytest --pyargs pykdtree, which does what we want it to.

I've tried to annotate it as best I can. Let me know if anything needs clarifying...
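The workflow described above (build a wheel per OS/Python combination, test it with pytest --pyargs, and upload the result) could be sketched roughly like this. This is a hypothetical minimal sketch, not the actual workflow from this PR; action versions and step names are assumptions, and the manylinux wheels would additionally need the manylinux Docker images plus auditwheel, which this sketch omits.

```yaml
# Hypothetical sketch of a wheel-building workflow -- not the PR's actual file.
name: Build wheels

on: [push]

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        python-version: ['3.5', '3.6', '3.7', '3.8']
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Build wheel
        run: |
          pip install wheel numpy
          pip wheel . -w dist --no-deps
      - name: Install wheel and run tests
        # Test the installed wheel, not the source tree, hence --pyargs.
        run: |
          pip install --no-index --find-links dist pykdtree
          pip install pytest
          pytest --pyargs pykdtree
      - uses: actions/upload-artifact@v2
        with:
          name: wheels
          path: dist
```

A separate publish job gated on tags would then download the artifacts and upload them to PyPI using credentials stored as repository secrets, which is presumably what the secret variables mentioned below are for.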

djhoese commented 4 years ago

Based on https://docs.github.com/en/github/setting-up-and-managing-billing-and-payments-on-github/about-billing-for-github-actions we'd be limited to 2000 minutes per month, right? Are artifacts automatically removed since it seems we'd also be limited to 500MB?

Edit: I should have started with: nice job! Thanks for putting the work into this. The other option for me was going to be to use Travis with cibuildwheel, but I never got around to doing it.

bwoodsend commented 4 years ago

The 2000 minutes limit only applies to private repositories (at least it says so here - the docs are a bit unclear). I know I've well exceeded that quota in other public repos without any issues. I imagine the storage limit follows the same rules.

Build artefacts expire after 90 days (see here). Given that it runs per push and that this repo isn't getting many pushes currently I doubt it would be much of an issue anyway.

mraspaud commented 4 years ago

I'm good with giving it a try. I'll merge this tomorrow and see if I can configure the secret variables.

bwoodsend commented 4 years ago

Not to be a total pest but there are still no wheels on PyPI. Any chance you could run it?

mraspaud commented 4 years ago

@bwoodsend The admin of this package hasn't answered yet. I'll try to contact him directly.