h-vetinari opened 4 months ago
Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (`recipe`) and found it was in an excellent condition.
(CI stopped to conserve resources; we can discuss first)
Let's wait for a few more stakeholders.
It would also be good to take another stab at fixing https://github.com/conda-forge/pytorch-cpu-feedstock/issues/155; I updated the list and opened some issues for hopefully higher visibility.
It's worth noting though that we already added linter hints for pytorch-{cpu,gpu}
as of https://github.com/conda-forge/conda-forge-pinning-feedstock/commit/c00c43a2f809d3836e83cbf721bd93d58366924f almost a year ago
How do you propose end users select the CPU/GPU version now?
```
mamba install pytorch
```

should allow them to get the "best version" pretty consistently; otherwise:

```
mamba install pytorch=*=cuda*
# vs
mamba install pytorch=*=cpu_*
```
This is a bit more verbose than

```
mamba install pytorch-gpu
# vs
mamba install pytorch-cpu
```
What's the harm in continuing to support these convenience outputs? Is it a burden to maintain these?
The pytorch channel has the convenience outputs:

```
mamba install pytorch pytorch-cuda=x
# vs
mamba install pytorch cpuonly
```
The original intention was to match the naming of the pytorch channel so users don't accidentally coinstall the same package with different names from different channels.
Now that the upstream naming has diverged, I think those outputs are barely worth it anymore.
Ideally, IMO, user choice should be unnecessary, in the sense that users get the "best" implementation for their system; for the cases where a choice is necessary, the build string option is not amazing, but possible (similar to how we're advising users to set `libblas=*=*<impl>`).
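As an illustration of the build-string mechanism mentioned above: a spec like `pytorch=*=cuda*` selects a variant by glob-matching the pattern against each candidate package's build string. A minimal sketch of that matching logic in Python (the build strings below are made up for illustration, not real conda-forge builds):

```python
from fnmatch import fnmatch

# Hypothetical build strings, in the style conda-forge variants use
builds = ["cpu_mkl_py311_0", "cuda118_py311_0"]

def select(builds, pattern):
    """Return the builds whose build string matches the glob pattern,
    mimicking how a spec like pytorch=*=cuda* narrows the solver's choices."""
    return [b for b in builds if fnmatch(b, pattern)]

print(select(builds, "cuda*"))  # -> ['cuda118_py311_0']
print(select(builds, "cpu_*"))  # -> ['cpu_mkl_py311_0']
```

This also shows why the trailing `*` matters: the pattern has to cover the variant-specific suffix (Python version, build number) that follows the `cpu_`/`cuda` prefix.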
If we want to continue providing convenience wrappers for that, then IMO we should match the pytorch naming (except `cpuonly`), e.g. there's also `pytorch-mutex` that they use for that.
Matching `pytorch-cuda` would be attractive in principle, if only it didn't involve a completely different version scheme...
Long term, I'd wish for a better API to express these things, without messing with pointless outputs or build strings.
IMO, just dropping this output and wasting users' time to figure out that you now need to say `pytorch=*=*_cpu` is not worth it, especially since these outputs are not a huge maintenance burden.
While reviewing another PR, I checked the situation regarding `pytorch-{cpu,gpu}` again, and IMO it's way past time to remove them. Quoting from there:

> We might want to wait for a new minor version to do this.
PS. We've been emitting linter hints for these since https://github.com/conda-forge/conda-forge-pinning-feedstock/commit/c00c43a2f809d3836e83cbf721bd93d58366924f