Open isuruf opened 1 year ago
Would be amazing to have pytest be noarch!
hypothesis might be a candidate as well. It recently got made yesarch again, but would presumably be noarch-capable if the django workaround for backports.zoneinfo arrives.
It's worth noting that `pytest` also has some conditional Windows dependencies, so we'd probably get 4 noarch packages out of it. Still, that is probably worthwhile.
Perhaps, if we wanted to simplify that, we could turn the backport packages it depends on into noarch packages with the Python version conditioning, and then always depend on them in `pytest` (or anywhere else).
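A minimal sketch of what "always depend on the backport" could look like in a hypothetical noarch `pytest` recipe (the dependency name and version bounds here are illustrative, not taken from the actual feedstock):

```yaml
# meta.yaml (sketch) -- a noarch build that depends on the backport
# unconditionally; on newer Pythons the backport is effectively a no-op
build:
  noarch: python

requirements:
  run:
    - python >=3.6
    - importlib-metadata >=0.12   # backport; harmless on python >=3.8
```

This avoids `# [win]` / Python-version selectors in the dependent recipe entirely, at the cost of installing a small extra package everywhere.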
And some more https://github.com/conda-forge/manhole-feedstock/pull/7 https://github.com/conda-forge/openpathsampling-feedstock/pull/21 https://github.com/conda-forge/curtsies-feedstock/pull/23
Also, the "CircleCI Pipeline" error pops up on all of them. Is there any way to remove this?
I think this package could be made noarch, but it would require one package per platform.
Basically, it is a C library, that is loaded with ctypes into a python package: https://github.com/conda-forge/zaber-motion-feedstock
I made some queries on cs.github.com and found ~100 feedstocks we could noarchify. I haven't reviewed all cases, but I am more or less confident most would be eligible. Check the first message!
Slightly related project that popped up a couple of times but was never really taken up. If we can create one source of truth for packages where the selectors can be ignored, we could add them all to Grayskull, and I could just run Grayskull on that list.
There is already https://github.com/conda-incubator/grayskull/blob/6955333ee01f83ba6ae6e8dc76ce1f576e7c762e/grayskull/strategy/config.yaml#L465 and https://conda-forge.org/docs/maintainer/knowledge_base.html#non-version-specific-python-packages but it's not connected.
I can fix the circle stuff. We can delete the webhooks.
Folks, how are we handling the old jupyter extensions that have a pre/post script?
cc @xhochy
Folks, how are we handling the old jupyter extensions that have a pre/post script?
cc @bollwyvl
> I made some queries on cs.github.com and found like... 100 feedstocks we could noarchify.
Poetry is already noarch
> Poetry is already noarch
Looks like that just happened today. Likely the list was made before today and the box wasn't checked yet. Have now checked it.
> Folks, how are we handling the old jupyter extensions that have a pre/post script?
Haven't looked, but we can likely emit the appropriate `${PREFIX}/share/jupyter` files and a `${PREFIX}/etc/jupyter/jupyter_notebook_config.d/<pkg_name>.json` that would have the same effect as whatever the scripts were doing.
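For a hypothetical extension named `my_extension`, the JSON file dropped into `jupyter_notebook_config.d` would look roughly like this (the `NotebookApp`/`nbserver_extensions` structure is the standard classic-notebook server-extension enablement format; the extension name is a placeholder):

```json
{
  "NotebookApp": {
    "nbserver_extensions": {
      "my_extension": true
    }
  }
}
```

Shipping this file in the package replaces the `jupyter serverextension enable` call the old post-link scripts typically ran.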
> I think this package could be made noarch, but it would require one package per platform. Basically, it is a C library, that is loaded with ctypes into a python package: https://github.com/conda-forge/zaber-motion-feedstock
@hmaarrfk, TomoPy is like this. I have one noarch python package which depends on platform specific packages which contain the shared libraries. Could have separate feedstocks, but I just have one and build the same noarch package every time.
> I can fix the circle stuff. We can delete the webhooks.
@beckermr Is it enough to delete the webhooks at e.g. https://github.com/conda-forge/curtsies-feedstock/settings/hooks ?
> Is it enough to delete the webhooks at e.g.
Is there any way someone outside of core can do that? I have so many packages that have that, pinging core each time is going to spam your inboxes...
Yes deleting the webhooks is fine.
@BastianZim If you want to write an admin migration to do that, feel free at conda-forge/admin-migrations.
> Is there any way someone outside of core can do that? I have so many packages that have that, pinging core each time is going to spam your inboxes...
I'm OK if you ping me. Every migration we find tons of packages that can be noarch. Last one I converted dozens and it will be easier to merge someone else PR than doing it myself.
> @BastianZim If you want to write an admin migration to do that, feel free at conda-forge/admin-migrations.
That would be quite complicated b/c the new semi-noarch recipes are not a one-size fits all.
> If you want to write an admin migration to do that, feel free at conda-forge/admin-migrations.
Ahh true, forgot that. Is there any guideline on that? I've never written one.
> I'm OK if you ping me.
Thank you! But the CircleCI stuff also appears in my normal feedstocks so it would be quite a lot...?
> That would be quite complicated b/c the new semi-noarch recipes are not a one-size fits all.
I think this is just about removing the CircleCI hook?
> I think this is just about removing the CircleCI hook?
Ah. Then it is definitely worth a try with a migrator.
The rest would be nice but probably impossible. Although, if we have a list of empty packages we can ignore (https://github.com/conda-forge/conda-forge.github.io/issues/1840#issuecomment-1297221907), I can run Grayskull on most and probably automate ~80% of the PRs to the point where they just need to be merged.
@ocefpaf I am talking about an admin migration at conda-forge/admin-migrations, not a bot migration.
`org:conda-forge path:meta.yaml "- python" "- pip" "# [win" NOT compiler NOT noarch` returns 240 matches on cs.github.com. These have the potential to be pure Python packages with Windows-only dependencies that are not currently noarch... A migrator might indeed be the best solution.
These feedstocks are using MxP jobs when they could be using 2... (M = number of enabled architectures, between 3 and 6; P = number of Pythons, currently 3 or 4). This is 9 jobs at best, 24 at worst!
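For reference, the pattern documented in the conda-forge knowledge base for noarch packages with OS-specific dependencies collapses that matrix to two jobs. Roughly (a sketch, not any specific feedstock; `colorama` stands in for the Windows-only dependency):

```yaml
# meta.yaml (sketch)
build:
  noarch: python

requirements:
  run:
    - python >=3.7
    - colorama  # [win]

# conda_build_config.yaml -- build the noarch package twice,
# once per selector branch, instead of once per platform x Python
noarch_platforms:
  - linux_64
  - win_64
```

The result is two noarch packages (one with, one without the `# [win]` dependency), regardless of how many platforms and Pythons the feedstock previously enumerated.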
`org:conda-forge path:meta.yaml /pywin32\s+#\s+\[win/ NOT compiler NOT noarch` gives you 20 results for packages that are not noarch just because they require `pywin32`; for those we could use the `pywin32-on-windows` metapackage instead!
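i.e., something along these lines in the dependent recipe (a sketch; the version bound is illustrative):

```yaml
# meta.yaml (sketch) -- noarch recipe with no selectors at all
build:
  noarch: python

requirements:
  run:
    - python >=3.6
    - pywin32-on-windows   # pulls in pywin32 on Windows, empty elsewhere
```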
Couldn't we do the same thing with `pywin32` dependents as with other OS-specific dependencies? Namely, add an OS-specific branch and still build with noarch?
Yes, sure, that will always work. But it's a simpler modification to replace just `pywin32` in those cases (and a single job instead of two). It could even be done at the repodata level if we had the right metadata.
Maybe we can move `conda-forge-ci-setup` to noarch.
> Maybe we can move `conda-forge-ci-setup` to noarch.
I'm not sure that's compatible with work such as https://github.com/conda-forge/conda-forge-ci-setup-feedstock/pull/210? Or would the suggestion be - for debugging purposes - to switch off noarch when preparing a PR?
Have different thoughts on how to approach that as well, but that's a discussion for a different issue
Just an FYI, virtual packages by conda version:

- conda 4.8: `__osx`
- conda 4.9: `__win`, `__unix`
- conda 4.10: `__linux`

We appear to still have a non-negligible number of users on conda 4.9 (more than all of <4.9 combined). We thus might not want to make broad use of `__linux` just yet. (But our main use case here is to have a `__win`/`__unix` distinction, so that shouldn't impose much of a restriction.)
Wonder if we could just create those as arch packages and stick them under a special label (like `legacy`), and advise users on older conda versions to use them. As they will encounter an error message about a missing package, we could put this in the docs under FAQ so it is easily discoverable.
@jakirkham, not a bad idea.
I'm +1 on offering (otherwise non-intrusive) ways to reduce the amount of brick walls users could hit when they want to upgrade old installations.
We'd have to do some testing beforehand, though (1. does anaconda.org/`anaconda-client` still allow uploads of `__*` packages, 2. how do `conda`/`mamba` behave when updated if there is still a `conda-forge/label/legacy/__*` package installed, etc.).
Even if the uploads are blocked, we could include the recipe in the docs. It is probably a couple lines. Once built it should be installable from the local package cache.
Do we need the actual packages or can we mock them in a repodata patch? Just an idea in case that's easier to implement.
Thinking about this more, I think it is on end users to build these packages themselves. Not all cases check simply for the existence of these packages. Some of them (like `__osx` or `__glibc`) check the version as well. So it really depends on end users to know their system and build these packages with the right versions baked in. That way they behave correctly when performing a solve.
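e.g., a user on an old conda could build themselves something like the following (the version must be set to their actual system; this is an illustrative recipe users would build locally, not something we'd publish):

```yaml
# meta.yaml -- fake the __osx virtual package on an old conda
package:
  name: __osx
  version: "10.15"   # must match the user's actual macOS version
build:
  noarch: generic
  number: 0
```

followed by a local `conda build` and an install from the local channel.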
Is the following issue known: https://github.com/conda-forge/spyder-kernels-feedstock/pull/87
Looking at what ipykernel does is probably relevant in the Spyder case. I figure we would have heard if it wasn't working.
Is https://github.com/conda-forge/xcb-proto-feedstock a candidate? It seems like it puts things outside the python directory, so I'm not sure if it would work. But the noarch platforms would help reduce the build matrix size quite a bit.
> Is https://github.com/conda-forge/xcb-proto-feedstock a candidate?
Maybe but you'd need packages for all the platforms (six so far?).
By dropping support for python<3.8, or by using `noarch_platforms` as in https://github.com/conda-forge/conda-forge.github.io/pull/1839

Here we collect some packages. PRs are welcome to help!
Harder ones

@jaimergp: Feedstocks that have a `py{<,<=}3{6,7,8}` selector without compiler dependencies, still not noarch, and no `python_impl` mentions (cs.github.com query). Search results say 110+ feedstocks, but you can only retrieve 100 at a time, so here we are. Some of them might not apply for other reasons (could not review all of them).

cc @conda-forge/core