conda-forge / pytorch-cpu-feedstock

A conda-smithy repository for pytorch-cpu.
BSD 3-Clause "New" or "Revised" License

remove `pytorch-{cpu,gpu}` compat outputs #247

Open h-vetinari opened 4 months ago

h-vetinari commented 4 months ago

While reviewing another PR, I checked the situation regarding pytorch-{cpu,gpu} again, and IMO it's well past time to remove them. Quoting from there:

They were purely for compatibility with the old naming on the pytorch channel, but we've lost track of them completely.

In the pytorch channel, pytorch-cpu hasn't had a new build in 5(!) years, and the gpu variant doesn't exist anymore(?!). There's pytorch-cuda, but that's only a meta-package for the various CUDA components, not pytorch itself. You can check out the upstream recipe, which now also features a build variant.

We might want to wait for a new minor version to do this.

PS. We've been emitting linter hints for these since https://github.com/conda-forge/conda-forge-pinning-feedstock/commit/c00c43a2f809d3836e83cbf721bd93d58366924f

conda-forge-webservices[bot] commented 4 months ago

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

h-vetinari commented 4 months ago

(CI stopped to conserve resources; we can discuss first)

hmaarrfk commented 4 months ago

Let's wait for a few more stakeholders.

h-vetinari commented 4 months ago

It would also be good to take another stab at fixing https://github.com/conda-forge/pytorch-cpu-feedstock/issues/155; I updated the list and opened some issues for hopefully higher visibility.

It's worth noting though that we already added linter hints for pytorch-{cpu,gpu} as of https://github.com/conda-forge/conda-forge-pinning-feedstock/commit/c00c43a2f809d3836e83cbf721bd93d58366924f almost a year ago

isuruf commented 4 months ago

How do you propose end users select the CPU/GPU version now?

hmaarrfk commented 4 months ago

How do you propose end users select the CPU/GPU version now?

mamba install pytorch

should allow them to get the "best version" pretty consistently.

otherwise

mamba install pytorch=*=cuda*
# vs
mamba install pytorch=*=cpu_*
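
For reference, the selection these specs perform is plain glob matching against the package's build string. A minimal sketch (the build strings below are made-up placeholders, not real conda-forge builds):

```shell
# Illustrate how the build-string globs in the specs above pick a variant.
# The build strings here are hypothetical placeholders.
for build in cpu_py312h0123456_0 cuda118_py312h6543210_0; do
  case "$build" in
    cuda*) echo "$build matches pytorch=*=cuda* (GPU)" ;;
    cpu_*) echo "$build matches pytorch=*=cpu_* (CPU)" ;;
  esac
done
```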

isuruf commented 4 months ago

mamba install pytorch=*=cuda*
# vs
mamba install pytorch=*=cpu_*

This is a bit more verbose than

mamba install pytorch-gpu
# vs
mamba install pytorch-cpu

What's the harm in continuing to support these convenience outputs? Is it a burden to maintain these?

The pytorch channel has convenience outputs:

mamba install pytorch pytorch-cuda=x
vs
mamba install pytorch cpuonly

h-vetinari commented 4 months ago

The original intention was to match the naming of the pytorch channel so users don't accidentally coinstall the same package with different names from different channels.

Now that the upstream naming has diverged, I think those outputs are barely worth keeping anymore.

Ideally, IMO, user choice should be unnecessary, in the sense that users automatically get the "best" implementation for their system; for the cases where a choice is necessary, the build-string option is not amazing, but it works (similar to how we advise users to set libblas=*=*<impl>).
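
As a concrete sketch of that libblas-style approach: conda honors a `conda-meta/pinned` file inside an environment for per-environment pins, so build-string specs can be recorded there. The prefix and specs below are illustrative only:

```shell
# Sketch: record build-string pins in conda-meta/pinned, analogous to
# pinning libblas=*=*<impl>. A throwaway prefix stands in for a real env.
PREFIX="$(mktemp -d)"
mkdir -p "$PREFIX/conda-meta"
printf '%s\n' 'libblas=*=*mkl' 'pytorch=*=cuda*' > "$PREFIX/conda-meta/pinned"
cat "$PREFIX/conda-meta/pinned"
```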

If we want to continue providing convenience wrappers for that, then IMO we should match the pytorch naming (except cpuonly), e.g. there's also pytorch-mutex that they use for that.

Matching pytorch-cuda would be attractive in principle, if only it didn't involve a completely different version scheme...

Long term, I'd wish for a better API to express these things, without messing with pointless outputs or build strings.

isuruf commented 4 months ago

IMO, just dropping this output and wasting users' time figuring out that they now need to say pytorch=*=*_cpu is not worth it, especially since these outputs are not a huge maintenance burden.