hmaarrfk opened this issue 2 years ago
Agree with @hmaarrfk. Please have a look at https://github.com/conda-forge/pytorch-cpu-feedstock/issues/114 too.
I already started a discussion about standardizing the archs that feedstocks target at the conda-forge.github.io repo: https://github.com/conda-forge/conda-forge.github.io/issues/1901. I'd be happy to move the discussion there. I don't think the cuda-feedstock is the place for it, because this is not an issue with the cuda package itself; it's a discussion about our channel policy, and is more similar to whether or not packages should target special instruction sets like AVX-512.
This package currently requires more than 16 builds to be built manually to ensure that they complete in time on the CIs.
Step 1: No more git clone
@rgommers identified that one time-consuming portion of the build process is cloning the repository. In my experience, cloning the 1.5 GB repo can take up to 10 min on my powerful local machine, and I suspect it can take much longer on the CIs.
To avoid cloning, we will have to either list out all the submodules manually, or make them conda-forge installable dependencies.
I mostly got this working using a recursive script, which should help us keep it maintained: https://github.com/conda-forge/pytorch-cpu-feedstock/pull/109
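For reference, the no-clone approach can be sketched as a multi-entry `source:` section in `meta.yaml`, fetching release tarballs and unpacking each submodule where the build expects it. This is only a sketch: the URLs follow GitHub's archive convention, the `pthreadpool_commit` variable is hypothetical, and the checksums are placeholders, not the feedstock's real values.

```yaml
source:
  # Main repo as a release tarball instead of a recursive git clone.
  - url: https://github.com/pytorch/pytorch/archive/refs/tags/v{{ version }}.tar.gz
    sha256: "<main tarball checksum>"
  # One entry per submodule, unpacked into the path the build expects.
  # The commit variable is hypothetical; a real recipe needs one per submodule.
  - url: https://github.com/Maratyszcza/pthreadpool/archive/{{ pthreadpool_commit }}.tar.gz
    folder: third_party/pthreadpool
    sha256: "<submodule tarball checksum>"
```

conda-build's `folder:` key on each source entry is what lets several tarballs land inside one work directory.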
Option 1: Split off dependencies

* pthreadpool (vendored twice; on OSX one copy in third_party cannot be found, the other ships with psimd)
* fp16
* psimd
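If those pieces are split off, the recipe could pull them from conda-forge rather than building the vendored copies. A sketch of the `host` requirements, assuming these exist as standalone conda-forge packages (package names are illustrative and would need checking against the channel):

```yaml
requirements:
  host:
    # Assumption: each of these is packaged on conda-forge.
    - pthreadpool
    - fp16
    - psimd
```

This would also require the PyTorch build to prefer system copies of these libraries over the vendored `third_party` ones.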
Option 2 - step 1: Build a libpytorch package or something
By setting `BUILD_PYTHON=OFF` in https://github.com/conda-forge/pytorch-cpu-feedstock/pull/112/ we then end up with the following libraries in `lib` and `include`:
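A minimal build-script sketch of that configuration step. Only `BUILD_PYTHON` comes from the PR above; the other flags are common CMake conventions and the `../pytorch` checkout path is an assumption, not the feedstock's actual layout.

```shell
# Sketch: configure a python-free libtorch build.
# Assumption: a pytorch source checkout lives in ../pytorch.
CMAKE_FLAGS="-DBUILD_PYTHON=OFF -DCMAKE_INSTALL_PREFIX=${PREFIX:-/usr/local}"
# The configure command we would run (printed here rather than executed):
echo "cmake $CMAKE_FLAGS ../pytorch"
```

In a real `build.sh` the `echo` would of course be the `cmake` invocation itself, followed by the build and install steps.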
Option 2 - step 2: Depend on new ATen/libpytorch package
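One way to express this dependency is conda-build's split `outputs`, where the python package pins the compiled-library package exactly. A sketch only: the output and script names are illustrative, not the feedstock's final layout.

```yaml
outputs:
  # C++ libraries and headers only (the BUILD_PYTHON=OFF build).
  - name: libpytorch
    script: build_libpytorch.sh   # hypothetical script name
  # Python bindings, reusing the already-compiled libraries.
  - name: pytorch
    script: build_pytorch.sh      # hypothetical script name
    requirements:
      host:
        - {{ pin_subpackage('libpytorch', exact=True) }}
      run:
        - {{ pin_subpackage('libpytorch', exact=True) }}
```

`pin_subpackage(..., exact=True)` keeps the python package locked to the exact libpytorch build it was compiled against.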
Compilation time progress
3933/4242 (309 remaining)
3897/4242 (345 remaining)
3924/4242 (318 remaining)
1656/1969 (313 remaining)
3962/4242 (280 remaining)

There are approximately: