klieret closed this issue 2 years ago
Hi Kilian, you are right. It seems the new deploy pipeline (#73) generates a PyPI package that does not include the wcxf/bases git submodule.
@dvandyk @jasonaebischerGIT
The workflow does specify `submodules: true` here: https://github.com/wilson-eft/wilson/blob/master/.github/workflows/check%2Bdeploy.yaml#L14-L18, so no idea why that doesn't work...
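For reference, the checkout step in question looks roughly like this (a sketch from memory, not a verbatim copy of the workflow):

```yaml
steps:
  - uses: actions/checkout@v2
    with:
      submodules: true  # also clone the wcxf/bases submodule
```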
Hmm, according to the log from the deploy, the submodule was checked out
I upload a zip of all generated distributions as a GitHub Actions "artifact". For the new version, the artifact is available here: https://github.com/wilson-eft/wilson/suites/3444819328/artifacts/81704384
Checking the tarball (wilson-2.2.tar.gz), I find that the bases are present! Checking the wheel (wilson-2.2-py3-none-any.whl), I also find that the bases are present.
So, I don't know who uploaded the release to PyPI, but it was not the new Github workflow.
a) There is only a single tarball on PyPI but no wheel; however, the workflow built both. b) There is no GitHub release, which is what triggers the upload to PyPI. c) The workflow log lists the "Upload to PyPI" step as not executed.
In short, not my fault!
I think there is a problem with the workflow. It only triggers on pushes, but then checks for the presence of a tag. This will not work, though, when doing the usual thing on GitHub: pushing a final commit and then creating a release via the web interface.
I guess the workflow is missing something like
```yaml
on:
  release:
    types:
      - created
```
(please check, I'm just guessing.)
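Alternatively, one could sidestep the release-event question entirely by triggering directly on version tags (again just a guess at the intent, please check):

```yaml
on:
  push:
    tags:
      - 'v*'  # run the workflow whenever a tag like v2.2 is pushed
```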
@jasonaebischerGIT, did you push the release to PyPI manually with twine?
> I think there is a problem with the workflow. It only triggers on pushes, but then checks for the presence of a tag. This will not work, though, when doing the usual thing on GitHub: pushing a final commit and then creating a release via the web interface. I guess the workflow is missing something like `on: release: types: - created` (please check, I'm just guessing.)
Ah, my bad. I have a different workflow. I use
@DavidMStraub I will check your suggested fix.
@DavidMStraub Yes, I pushed the release manually with twine. From your discussion I understand that this is not the right way to do it? How should it be done correctly?
Hi, it used to be the right way to do it, but Danny's new workflow should in principle do it automatically (only that it didn't).
Oh, I see. Thanks for the explanation
I'm also a bit confused by your workflow. Do you really wish to release on every push?
Aside from how you release, it also pays off to add tests that install the released package and then run the unit tests. These would also fail if you, e.g., forget to bundle a data file in the package or something else goes wrong during packaging/release/installation.
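A minimal sketch of such an install-and-test job (job and artifact names are made up):

```yaml
jobs:
  test-install:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v2
        with:
          name: dist  # hypothetical artifact holding the built wheel
      - name: Install the built wheel instead of the source tree
        run: pip install --find-links . wilson
      - name: Run the test suite against the installed package
        run: |
          pip install pytest
          pytest --pyargs wilson  # assumes the tests are shipped inside the package
```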
I currently have the following setup:
Just to +1 this issue, the new wilson version also breaks new installs of flavio and smelli, so it would be good to get a fix.
How about you yank the release for now? This can be done as follows: log in to PyPI, go to your projects, and click on "Manage". Here's what I see for one of my projects:
Click on "Options" for the broken release and then hit "yank".
> What's a "yanked" release? A yanked release is a release that is always ignored by an installer, unless it is the only release that matches a version specifier (using either `==` or `===`). See PEP 592 for more information.
This would avoid the trouble with all packages that depend on wilson because they wouldn't update automatically.
> I'm also a bit confused by your workflow. Do you really wish to release on every push?
No, and the workflow doesn't do that. The "Upload to PyPI" step is only run `if: ${{ startsWith(github.ref, 'refs/tags/v') }}`, i.e., if the tag starts with a `v`.
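A conditional upload step of this kind could look like the following (a sketch; the secret name is hypothetical):

```yaml
- name: Upload to PyPI
  if: ${{ startsWith(github.ref, 'refs/tags/v') }}  # only on version tags
  env:
    TWINE_USERNAME: __token__
    TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}  # hypothetical secret name
  run: twine upload dist/*
```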
> Aside from how you release, it also pays off to add tests that install the released package and then run the unit tests. These would also fail if you, e.g., forget to bundle a data file in the package or something else goes wrong during packaging/release/installation.
I have little experience with Python unit tests. How do you force nose to run the tests "out of tree", as it were? A quick Google did not yield meaningful results, and I see that your example uses pytest instead.
> I currently have the following setup:
>
> - Normal testing builds the package in one job, uploads it as an artifact, then downloads the artifact and installs the package under a variety of OSes. This tests all of the packaging; only the upload to PyPI isn't tested.
Since we build a py3-none-any wheel, using a variety of OSes would test PyPI/pip more than wilson.
However, I agree that it's a good idea to test installation of the wheel, and this should be done prior to uploading to PyPI. Added.
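The build-once, install-everywhere idea from the setup described above can be expressed as a job matrix (a sketch; job and artifact names are made up):

```yaml
install-test:
  needs: build  # hypothetical job that built and uploaded the 'dist' artifact
  strategy:
    matrix:
      os: [ubuntu-latest, macos-latest, windows-latest]
  runs-on: ${{ matrix.os }}
  steps:
    - uses: actions/download-artifact@v2
      with:
        name: dist
    - run: pip install --find-links . wilson
```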
> - Release testing uploads to TestPyPI and installs from there, again testing various OSes. It is run manually (it can only be run once per version number). After the run you can also check TestPyPI to, e.g., see that the README there is rendered properly.
The idea of the workflow was to avoid having to manually deploy or manually run any aspect of the test/deployment.
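For the TestPyPI round trip mentioned above, the upload and install steps might look like this (a sketch, not the actual workflow; twine and pip do support these flags):

```yaml
- name: Upload to TestPyPI
  run: twine upload --repository-url https://test.pypi.org/legacy/ dist/*
- name: Install from TestPyPI
  # --extra-index-url lets dependencies still come from the real PyPI
  run: pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ wilson
```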
> No, and the workflow doesn't do that. The step "Upload to PyPI" is only run
Oh, overlooked that :sweat_smile:
> Since we build a py3-none-any wheel, using a variety of OSes would test PyPI/pip more than wilson.
No, this would build the package once and then install and test it under different OSes. This would, for example, catch issues where paths are hard-coded with slashes (which would fail on Windows) or other file-name subtleties. Admittedly, it's rare that this catches additional bugs, but it has happened to me a couple of times with OS-dependent things.
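For illustration, the hard-coded-slash failure mode looks like this (a made-up snippet, not taken from wilson):

```python
import os
from pathlib import Path

# Fragile: on Windows the native separator is '\', so a hand-built
# path like this may not resolve correctly everywhere
data_file_bad = "wcxf/bases" + "/" + "WET.json"

# Portable: let the standard library assemble the path
data_file_ok = os.path.join("wcxf", "bases", "WET.json")
data_file_ok2 = Path("wcxf") / "bases" / "WET.json"

# Both portable variants agree on every OS
print(data_file_ok)
```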
Not an expert on nose, unfortunately.
> Not an expert on nose, unfortunately.
I'd recommend using pytest nowadays. nose is no longer being maintained, and nose2 is much less popular/active IMO.
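Regarding the "out of tree" question above: with pytest this is what the `--pyargs` option is for, assuming the tests are shipped inside the package (a sketch as a workflow step):

```yaml
- name: Run the packaged tests out of tree
  run: |
    pip install wilson pytest
    pytest --pyargs wilson  # resolves 'wilson' by import, i.e. from site-packages
```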
Hi @jasonaebischerGIT,
since the latest release, ClusterKinG testing fails with:
see here.
It's not apparent to me from the latest release notes that any breaking changes were introduced, so this looks like a bug in wilson to me. Cheers, Kilian