conda-forge / pytorch-cpu-feedstock

A conda-smithy repository for pytorch-cpu.

Fix: pytorch-{cpu,gpu} packages are built only for a single Python version #281

Closed by jeongseok-meta 3 weeks ago

jeongseok-meta commented 3 weeks ago


That said, this seems to be a problem with the 2.4.1 package as well, and likely with many of the megabuild packages: https://conda-metadata-app.streamlit.app/?q=conda-forge%2Flinux-64%2Fpytorch-cpu-2.4.1-cpu_mkl_py39h09a6fac_103.conda

Originally posted by @hmaarrfk in https://github.com/conda-forge/pytorch-cpu-feedstock/issues/277#issuecomment-2453503103
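
For context, the pytorch-cpu 2.4.1 package linked above pins its pytorch run dependency to a single exact build (the hash below is only illustrative), so the metapackage can only ever be installed alongside the Python 3.9 build of pytorch:

    pytorch 2.4.1 cpu_mkl_py39h<hash>_103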

hmaarrfk commented 3 weeks ago

I think the following patch will work:

diff --git a/recipe/meta.yaml b/recipe/meta.yaml
index 46a7507..e56535c 100644
--- a/recipe/meta.yaml
+++ b/recipe/meta.yaml
@@ -347,7 +347,9 @@ outputs:
         - pytorch-cpu                                      # [cuda_compiler_version == "None"]
     requirements:
       run:
-        - {{ pin_subpackage("pytorch", exact=True) }}
+        - pytorch {{ version }}=cuda_{{ blas_impl }}*{{ PKG_BUILDNUM }}   # [megabuild and cuda_compiler_version != "None"]
+        - pytorch {{ version }}=cpu_{{ blas_impl }}*{{ PKG_BUILDNUM }}    # [megabuild and cuda_compiler_version == "None"]
+        - {{ pin_subpackage("pytorch", exact=True) }}                     # [not megabuild]
     test:
       imports:
         - torch
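
For the CPU/mkl variant, with the 2.4.1 / build-number-103 values from the package linked above, that new run requirement would render to roughly:

    pytorch 2.4.1=cpu_mkl*103

i.e. a build-string glob that any Python variant of the corresponding pytorch build can satisfy, instead of the exact single-build pin produced by pin_subpackage(..., exact=True).
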
hmaarrfk commented 3 weeks ago

Hmm, we might need the build string to also be "resolved":

diff --git a/recipe/meta.yaml b/recipe/meta.yaml
index 46a7507..7fbee9f 100644
--- a/recipe/meta.yaml
+++ b/recipe/meta.yaml
@@ -338,8 +338,10 @@ outputs:
   {% set pytorch_cpu_gpu = "pytorch-gpu" %}   # [cuda_compiler_version != "None"]
   - name: {{ pytorch_cpu_gpu }}
     build:
-      string: cuda{{ cuda_compiler_version | replace('.', '') }}py{{ CONDA_PY }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}  # [cuda_compiler_version != "None"]
-      string: cpu_{{ blas_impl }}_py{{ CONDA_PY }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}                                      # [cuda_compiler_version == "None"]
+      string: cuda{{ cuda_compiler_version | replace('.', '') }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}                  # [megabuild and cuda_compiler_version != "None"]
+      string: cpu_{{ blas_impl }}_h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}                                                # [megabuild and cuda_compiler_version == "None"]
+      string: cuda{{ cuda_compiler_version | replace('.', '') }}py{{ CONDA_PY }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}  # [not megabuild and cuda_compiler_version != "None"]
+      string: cpu_{{ blas_impl }}_py{{ CONDA_PY }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}                                # [not megabuild and cuda_compiler_version == "None"]
       detect_binary_files_with_prefix: false
       skip: true  # [cuda_compiler_version != "None" and linux64 and blas_impl != "mkl"]
       # weigh down cpu implementation and give cuda preference
@@ -347,7 +349,9 @@ outputs:
         - pytorch-cpu                                      # [cuda_compiler_version == "None"]
     requirements:
       run:
-        - {{ pin_subpackage("pytorch", exact=True) }}
+        - pytorch {{ version }}=cuda_{{ blas_impl }}*{{ PKG_BUILDNUM }}   # [megabuild and cuda_compiler_version != "None"]
+        - pytorch {{ version }}=cpu_{{ blas_impl }}*{{ PKG_BUILDNUM }}    # [megabuild and cuda_compiler_version == "None"]
+        - {{ pin_subpackage("pytorch", exact=True) }}                     # [not megabuild]
     test:
       imports:
         - torch
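
Once a rebuilt package is published, one way to sanity-check that the metapackages are no longer tied to a single Python is to dry-run solves against a couple of interpreter versions (the versions below are just examples):

    conda create -n check-py310 --dry-run -c conda-forge pytorch-cpu=2.4.1 python=3.10
    conda create -n check-py312 --dry-run -c conda-forge pytorch-cpu=2.4.1 python=3.12

Both solves should succeed once the run pin is a glob rather than an exact build pin.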