regro-cf-autotick-bot closed this PR 3 years ago
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
@conda-forge-admin, please rerender
@isuruf
The CUDA toolchain complains about gcc being too new. Is there a way I can force a downgrade from 9 to 8 that will survive rerendering?
/usr/local/cuda/bin/nvcc -I /usr/local/cuda/targets/x86_64-linux/include/ -Xcompiler -fPIC -Xcudafe --diag_suppress=unrecognized_attribute -gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_52,code=sm_52 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_61,code=sm_61 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_75,code=compute_75 -lineinfo -ccbin /home/conda/feedstock_root/build_artifacts/faiss-split_1606947926843/_build_env/bin/x86_64-conda-linux-gnu-c++ -std=c++14 -DFAISS_USE_FLOAT16 -I. -g -O3 -c gpu/GpuDistance.cu -o gpu/GpuDistance.o
In file included from /usr/local/cuda/targets/x86_64-linux/include/cuda_runtime.h:83,
from <command-line>:
/usr/local/cuda/targets/x86_64-linux/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported!
138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported!
| ^~~~~
I'm thinking of having a local conda_build_config.yaml à la:
# keep in sync with https://github.com/conda-forge/conda-forge-pinning-feedstock/blob/master/recipe/conda_build_config.yaml
cuda_compiler_version:
  - None
  - 9.2   # [linux64]
  - 10.0  # [linux64]
  - 10.1  # [linux64]
  - 10.2  # [linux64]
  - 11.0  # [linux64]
# keep in sync with https://github.com/conda-forge/conda-forge-pinning-feedstock/blob/master/recipe/conda_build_config.yaml
# with the restriction that cuda 9.2 requires gcc<=7, and cuda 10.x requires gcc<=8
cxx_compiler_version:  # [unix]
  - 11  # [osx]
  - 9   # [linux]
  - 7   # [linux64]
  - 8   # [linux64]
  - 8   # [linux64]
  - 8   # [linux64]
  - 9   # [linux64]
zip_keys:                    # [unix]
  - - cuda_compiler_version  # [unix]
    - cxx_compiler_version   # [unix]
However, I cannot get this to work despite trying a whole lot of different things due to either
ValueError: All entries associated by a zip_key field must be the same length. In C:\Users\[xxx]\AppData\Local\Temp\tmprwynlzyn\conda_build_config.yaml, cxx_compiler_version and cuda_compiler_version are different (1 and 5)
or
ValueError: variant config in C:\Users\[xxx]\Dev\conda-forge\faiss-split-feedstock\recipe\conda_build_config.yaml is ambiguous because it does not fully implement all zipped keys, or specifies a subspace that is not fully implemented.
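The first ValueError reflects a hard constraint in conda-build: after selectors are applied, every list joined under a zip_keys group must render to the same length on a given platform. A minimal sketch of that check (this is not conda-build's actual code; the function name and data are illustrative, with the 1-vs-5 mismatch from the error above):

```python
def check_zip_group(variants, zip_group):
    """Sketch of conda-build's zip_keys constraint: every list in a
    zipped group must have the same length after rendering."""
    lengths = {key: len(variants[key]) for key in zip_group}
    if len(set(lengths.values())) > 1:
        raise ValueError(
            "All entries associated by a zip_key field must be the "
            f"same length: {lengths}"
        )
    return lengths

# Mismatched lengths (1 vs 5), as in the first error above:
variants = {
    "cxx_compiler_version": ["9"],
    "cuda_compiler_version": ["None", "9.2", "10.0", "10.1", "11.0"],
}
try:
    check_zip_group(variants, ("cxx_compiler_version", "cuda_compiler_version"))
except ValueError as exc:
    print("rejected:", exc)
```

This is why per-platform selectors are so easy to get wrong here: a selector that strips entries from one zipped list but not the other changes the rendered lengths and trips this check.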
Or would I need to implement this on the pinning-feedstock directly?
I have the impression that no matter what I set in the local conda_build_config.yaml, it will always be overwritten by the conda-forge-pinning one and end up with conflicts.
I've also tried a conda_build_config.yaml as follows:
cxx_compiler_version:
  - 7  # [cuda_compiler_version == "9.2"]
  - 8  # [cuda_compiler_version == "10.0"]
  - 8  # [cuda_compiler_version == "10.1"]
  - 8  # [cuda_compiler_version == "10.2"]
  - 9  # [cuda_compiler_version == "11.0"]
which renders fine, but doesn't pick up the changed compiler versions - I guess it doesn't work without zip_keys. Ended up opening https://github.com/conda-forge/conda-forge-pinning-feedstock/issues/1000
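For what it's worth, the selector-only attempt behaves consistently with how variant matrices expand: keys that are not zipped combine as a Cartesian product of their lists, while zip_keys pairs entries positionally (which is also why the zipped lists must have equal length). A toy illustration of the difference, not conda-build's actual implementation:

```python
from itertools import product

cuda_compiler_version = ["9.2", "10.0", "10.1", "10.2", "11.0"]
cxx_compiler_version = ["7", "8", "8", "8", "9"]

# Unzipped keys: every combination becomes a candidate variant,
# so the intended cuda <-> gcc pairing is lost.
unzipped = set(product(cuda_compiler_version, cxx_compiler_version))
print(len(unzipped))  # 15 distinct pairs (5 cuda versions x 3 distinct gcc versions)

# Zipped keys: entries are paired positionally, one variant per index.
zipped = list(zip(cuda_compiler_version, cxx_compiler_version))
print(zipped[0])  # ('9.2', '7')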
@mdouze @beauby I'm finally making progress on #17 (1.6.4 + win), but in the meantime, this PR for building for python 3.9 has been failing consistently for OSX with:
parallel_mode=2
Faiss assertion '!qres || i > qres->qno' failed in void faiss::IndexIVF::range_search_preassigned(faiss::Index::idx_t, const float *, float, const faiss::Index::idx_t *, const float *, faiss::RangeSearchResult *) const at IndexIVF.cpp:581
/Users/runner/miniforge3/conda-bld/faiss-split_1606948108233/test_tmp/run_test.sh: line 8: 14654 Abort trap: 6 python -m unittest discover tests
Tests failed for faiss-1.6.3-py37ha6b20df_4_cpu.tar.bz2 - moving package to /Users/runner/miniforge3/conda-bld/broken
I'm surprised about that, because the same test suite was previously run successfully in CI, and I really have no idea what might be causing this. Of course, I can retry the python 3.9 stuff again once #17 is in, but I wanted to ask if this is something you've seen before, and maybe know what it's about?
Update: that error also showed up for 1.6.4, so I'm now testing the suspicion that this has something to do with the llvm-openmp version.
This PR has been triggered in an effort to update python39.
Notes and instructions for merging this PR:
Please note that if you close this PR we presume that the feedstock has been rebuilt, so if you are going to perform the rebuild yourself, don't close this PR until your rebuild has been merged.
This package has the following downstream children:
And potentially more.
If this PR was opened in error or needs to be updated, please add the bot-rerun label to this PR. The bot will close this PR and schedule another one. If you do not have permissions to add this label, you can use the phrase @conda-forge-admin, please rerun bot in a PR comment to have the conda-forge-admin add it for you.
This PR was created by the regro-cf-autotick-bot. The regro-cf-autotick-bot is a service to automatically track the dependency graph, migrate packages, and propose package version updates for conda-forge. If you would like a local version of this bot, you might consider using rever. Rever is a tool for automating software releases and forms the backbone of the bot's conda-forge PRing capability. Rever is both conda (conda install -c conda-forge rever) and pip (pip install rever) installable. Finally, feel free to drop us a line if there are any issues! This PR was generated by https://github.com/regro/autotick-bot/actions/runs/357506947, please use this URL for debugging.