Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
@isuruf
There's a problem with this feedstock when using clang 11, see https://github.com/conda-forge/faiss-split-feedstock/pull/33 & https://github.com/facebookresearch/faiss/issues/1696, so I wouldn't be comfortable just relaxing the pin for osx-arm without having CI.
Would you accept PRs to https://github.com/conda-forge/clang-compiler-activation-feedstock et al, for clang 10 on osx-arm (to branches 10.x that would have to be created off master)?
> Would you accept PRs to https://github.com/conda-forge/clang-compiler-activation-feedstock et al, for clang 10 on osx-arm (to branches 10.x that would have to be created off master)?
Nope. Clang 10 doesn't support osx-arm64 fully.
OK, thanks @isuruf.
Are (some) CMake errors on osx-arm expected/known? Any idea what might be causing the following? (build-pkg.sh works for osx-64 & linux and has no osx-specific codepaths...)
CMake Error at /Users/runner/miniforge3/conda-bld/faiss-split_1616536094012/_build_env/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:218 (message):
Could NOT find Python (missing: Python_NumPy_INCLUDE_DIRS NumPy) (found
version "2.7.16")
@conda-forge/help-osx-arm64
Could someone with an M1 help me here by downloading the artefact & running the test suite - e.g. run conda build --test <path/to/unpacked/folder>/osx-arm64/faiss-1.7.0-py38h<hash>_cpu.tar and post the results here?
The osx-x86 version currently needs clang 10 due to a failing test with clang 11 (but osx-arm64 cannot build with clang 10), so I really want to see this build pass the test suite before merging.
Output of conda build --test faiss-1.7.0-py38had7eb21_6_cpu.tar.bz2 2>&1: https://gist.github.com/xhochy/7e942d1fe5c67b698d24ecb8e71796b1
TL;DR: You need to test with a different BLAS implementation on osx-arm64.
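(For context, on conda-forge the BLAS backend is selected through the libblas metapackage, so a hedged sketch of a test environment that avoids MKL on osx-arm64 could look like the following; the env name and pins are illustrative, and <path-to-unpacked-artefact> is the local channel mentioned further down.)
# Sketch: resolve BLAS to OpenBLAS instead of MKL (MKL has no osx-arm64 build).
conda create -n faiss-test -c <path-to-unpacked-artefact> -c conda-forge "libblas=*=*openblas" python=3.8 faiss scipy pytest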
OK, thanks a lot! This is just failing on the mkl-dep of the test recipe. I'll add a commit to remove that for osx-arm64.
It would also be possible to run this against the upstream test suite as follows (but happy to try again with a new artefact after the CI runs through!):
git clone https://github.com/facebookresearch/faiss
cd faiss
git checkout tags/1.7.0
cd .. # make sure we're picking up faiss from env, and not from local folder
conda create -n faiss-env -c <path-to-unpacked-artefact> python=3.8 faiss scipy pytest
conda activate faiss-env
pytest faiss/tests -v
@xhochy, could you try again, please? 🙃
Unit tests fail with:
tests/test_index_accuracy.py::TestSQFlavors::test_SQ_IP WARNING clustering 2000 points to 64 centroids: please provide at least 2496 training points
(0, '8bit'): 984,
radius 10.26676082611084
ndiff 0 / 141
parallel_mode=1
sizes [2 0 0 0 2 0 0 0 9 3 0 1 4 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 9 0 0 0 0 0 0 0 0
0 0 0 3 0 2 2 0 0 0 1 2 0 0 0 0 4 0 0 1 0 4 0 2 2 2 3 0 7 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 3 1 0 1 0 0 0 0 0 2 0 0 5 0 0 0 5 0 0 1 0 3 0 0 0 2 2 0 0 0
0 0 0 3 2 1 4 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 1 0 2 5 0 0 0 0 0 0
0 0 2 0 2 0 1 0 0 0 1 2 0 0 0 0 0 1 0 0 2 0 0 1 1 1 0 0 0 0 0 0 0 0 1 0 0
1 0 0 6 0 0 0 0 0 0 0 0 0 0 0]
parallel_mode=2
Faiss assertion '!qres || i > qres->qno' failed in void faiss::IndexIVF::range_search_preassigned(faiss::Index::idx_t, const float *, float, const faiss::Index::idx_t *, const float *, faiss::RangeSearchResult *, bool, const faiss::IVFSearchParameters *, faiss::IndexIVFStats *) const at /Users/runner/miniforge3/conda-bld/faiss-split_1618392461839/work/faiss/IndexIVF.cpp:724
Process 22906 stopped
* thread #3, stop reason = signal SIGABRT
frame #0: 0x000000019a958cec libsystem_kernel.dylib`__pthread_kill + 8
libsystem_kernel.dylib`__pthread_kill:
-> 0x19a958cec <+8>: b.lo 0x19a958d0c ; <+40>
0x19a958cf0 <+12>: pacibsp
0x19a958cf4 <+16>: stp x29, x30, [sp, #-0x10]!
0x19a958cf8 <+20>: mov x29, sp
Target 0: (python) stopped.
(lldb) bt
* thread #3, stop reason = signal SIGABRT
* frame #0: 0x000000019a958cec libsystem_kernel.dylib`__pthread_kill + 8
frame #1: 0x000000019a989c24 libsystem_pthread.dylib`pthread_kill + 292
frame #2: 0x000000019a8d1864 libsystem_c.dylib`abort + 104
frame #3: 0x000000011e9fc170 libfaiss.dylib`.omp_outlined..20 + 1668
frame #4: 0x0000000104b7b5bc libomp.dylib`__kmp_invoke_microtask + 156
frame #5: 0x0000000104b32464 libomp.dylib`__kmp_invoke_task_func + 328
frame #6: 0x0000000104b31838 libomp.dylib`__kmp_launch_thread + 400
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
(lldb)
Longer backtrace:
(lldb) thread backtrace all
thread #1, queue = 'com.apple.main-thread'
frame #0: 0x000000019a950df0 libsystem_kernel.dylib`swtch_pri + 8
frame #1: 0x000000019a986e38 libsystem_pthread.dylib`cthread_yield + 20
frame #2: 0x0000000104b4f114 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 872
frame #3: 0x0000000104b4d294 libomp.dylib`__kmp_hyper_barrier_gather(barrier_type, kmp_info*, int, int, void (*)(void*, void*), void*) + 812
frame #4: 0x0000000104b4a2b8 libomp.dylib`__kmp_barrier + 1104
frame #5: 0x0000000104b232d8 libomp.dylib`__kmpc_barrier + 316
frame #6: 0x000000011e9fbf80 libfaiss.dylib`.omp_outlined..20 + 1172
frame #7: 0x0000000104b7b5bc libomp.dylib`__kmp_invoke_microtask + 156
frame #8: 0x0000000104b32464 libomp.dylib`__kmp_invoke_task_func + 328
frame #9: 0x0000000104b2ead4 libomp.dylib`__kmp_fork_call + 6640
frame #10: 0x0000000104b22c34 libomp.dylib`__kmpc_fork_call + 220
frame #11: 0x000000011e9fb794 libfaiss.dylib`faiss::IndexIVF::range_search_preassigned(long long, float const*, float, long long const*, float const*, faiss::RangeSearchResult*, bool, faiss::IVFSearchParameters const*, faiss::IndexIVFStats*) const + 444
frame #12: 0x000000011e9fd4e0 libfaiss.dylib`faiss::IndexIVF::range_search(long long, float const*, float, faiss::RangeSearchResult*) const + 252
frame #13: 0x000000011e7ae83c _swigfaiss.so`_wrap_IndexIVF_range_search(_object*, _object*) + 508
frame #14: 0x000000010002e0a0 python`cfunction_call_varargs + 364
frame #15: 0x000000010002d6dc python`_PyObject_MakeTpCall + 640
frame #16: 0x0000000100142a1c python`call_function + 680
frame #17: 0x000000010013f4f8 python`_PyEval_EvalFrameDefault + 29472
frame #18: 0x000000010002e51c python`function_code_fastcall + 128
frame #19: 0x0000000100142984 python`call_function + 528
frame #20: 0x000000010013f4dc python`_PyEval_EvalFrameDefault + 29444
frame #21: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #22: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #23: 0x0000000100142984 python`call_function + 528
frame #24: 0x000000010013f4dc python`_PyEval_EvalFrameDefault + 29444
frame #25: 0x000000010002e51c python`function_code_fastcall + 128
frame #26: 0x0000000100142984 python`call_function + 528
frame #27: 0x000000010013f4dc python`_PyEval_EvalFrameDefault + 29444
frame #28: 0x000000010002e51c python`function_code_fastcall + 128
frame #29: 0x00000001000320a8 python`method_vectorcall + 156
frame #30: 0x0000000100142984 python`call_function + 528
frame #31: 0x000000010013f570 python`_PyEval_EvalFrameDefault + 29592
frame #32: 0x000000010002e51c python`function_code_fastcall + 128
frame #33: 0x0000000100142984 python`call_function + 528
frame #34: 0x000000010013f4dc python`_PyEval_EvalFrameDefault + 29444
frame #35: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #36: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #37: 0x0000000100032214 python`method_vectorcall + 520
frame #38: 0x000000010002dc4c python`PyVectorcall_Call + 120
frame #39: 0x000000010013f788 python`_PyEval_EvalFrameDefault + 30128
frame #40: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #41: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #42: 0x000000010002d310 python`_PyObject_FastCallDict + 132
frame #43: 0x000000010002f450 python`_PyObject_Call_Prepend + 156
frame #44: 0x00000001000a2d44 python`slot_tp_call + 296
frame #45: 0x000000010002d6dc python`_PyObject_MakeTpCall + 640
frame #46: 0x0000000100142a1c python`call_function + 680
frame #47: 0x000000010013f5e8 python`_PyEval_EvalFrameDefault + 29712
frame #48: 0x000000010002e51c python`function_code_fastcall + 128
frame #49: 0x0000000100142984 python`call_function + 528
frame #50: 0x000000010013f4dc python`_PyEval_EvalFrameDefault + 29444
frame #51: 0x000000010002e51c python`function_code_fastcall + 128
frame #52: 0x000000010002dc4c python`PyVectorcall_Call + 120
frame #53: 0x000000010013f788 python`_PyEval_EvalFrameDefault + 30128
frame #54: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #55: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #56: 0x0000000100142984 python`call_function + 528
frame #57: 0x000000010013f5e8 python`_PyEval_EvalFrameDefault + 29712
frame #58: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #59: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #60: 0x0000000100142984 python`call_function + 528
frame #61: 0x000000010013f4f8 python`_PyEval_EvalFrameDefault + 29472
frame #62: 0x000000010002e51c python`function_code_fastcall + 128
frame #63: 0x00000001000320a8 python`method_vectorcall + 156
frame #64: 0x0000000100142984 python`call_function + 528
frame #65: 0x000000010013f4f8 python`_PyEval_EvalFrameDefault + 29472
frame #66: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #67: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #68: 0x000000010002d310 python`_PyObject_FastCallDict + 132
frame #69: 0x000000010002f450 python`_PyObject_Call_Prepend + 156
frame #70: 0x00000001000a2d44 python`slot_tp_call + 296
frame #71: 0x000000010002de74 python`PyObject_Call + 312
frame #72: 0x000000010013f788 python`_PyEval_EvalFrameDefault + 30128
frame #73: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #74: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #75: 0x0000000100142984 python`call_function + 528
frame #76: 0x000000010013f570 python`_PyEval_EvalFrameDefault + 29592
frame #77: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #78: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #79: 0x00000001000320a8 python`method_vectorcall + 156
frame #80: 0x0000000100142984 python`call_function + 528
frame #81: 0x000000010013f5e8 python`_PyEval_EvalFrameDefault + 29712
frame #82: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #83: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #84: 0x000000010002dc4c python`PyVectorcall_Call + 120
frame #85: 0x000000010013f788 python`_PyEval_EvalFrameDefault + 30128
frame #86: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #87: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #88: 0x0000000100142984 python`call_function + 528
frame #89: 0x000000010013f570 python`_PyEval_EvalFrameDefault + 29592
frame #90: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #91: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #92: 0x0000000100142984 python`call_function + 528
frame #93: 0x000000010013f5e8 python`_PyEval_EvalFrameDefault + 29712
frame #94: 0x000000010002e51c python`function_code_fastcall + 128
frame #95: 0x000000010002dc4c python`PyVectorcall_Call + 120
frame #96: 0x000000010013f788 python`_PyEval_EvalFrameDefault + 30128
frame #97: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #98: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #99: 0x0000000100142984 python`call_function + 528
frame #100: 0x000000010013f5e8 python`_PyEval_EvalFrameDefault + 29712
frame #101: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #102: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #103: 0x0000000100142984 python`call_function + 528
frame #104: 0x000000010013f4f8 python`_PyEval_EvalFrameDefault + 29472
frame #105: 0x000000010002e51c python`function_code_fastcall + 128
frame #106: 0x00000001000320a8 python`method_vectorcall + 156
frame #107: 0x0000000100142984 python`call_function + 528
frame #108: 0x000000010013f4f8 python`_PyEval_EvalFrameDefault + 29472
frame #109: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #110: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #111: 0x000000010002d310 python`_PyObject_FastCallDict + 132
frame #112: 0x000000010002f450 python`_PyObject_Call_Prepend + 156
frame #113: 0x00000001000a2d44 python`slot_tp_call + 296
frame #114: 0x000000010002d6dc python`_PyObject_MakeTpCall + 640
frame #115: 0x0000000100142a1c python`call_function + 680
frame #116: 0x000000010013f5e8 python`_PyEval_EvalFrameDefault + 29712
frame #117: 0x000000010002e51c python`function_code_fastcall + 128
frame #118: 0x000000010002dc4c python`PyVectorcall_Call + 120
frame #119: 0x000000010013f788 python`_PyEval_EvalFrameDefault + 30128
frame #120: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #121: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #122: 0x0000000100142984 python`call_function + 528
frame #123: 0x000000010013f5e8 python`_PyEval_EvalFrameDefault + 29712
frame #124: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #125: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #126: 0x0000000100142984 python`call_function + 528
frame #127: 0x000000010013f4f8 python`_PyEval_EvalFrameDefault + 29472
frame #128: 0x000000010002e51c python`function_code_fastcall + 128
frame #129: 0x00000001000320a8 python`method_vectorcall + 156
frame #130: 0x0000000100142984 python`call_function + 528
frame #131: 0x000000010013f4f8 python`_PyEval_EvalFrameDefault + 29472
frame #132: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #133: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #134: 0x000000010002d310 python`_PyObject_FastCallDict + 132
frame #135: 0x000000010002f450 python`_PyObject_Call_Prepend + 156
frame #136: 0x00000001000a2d44 python`slot_tp_call + 296
frame #137: 0x000000010002d6dc python`_PyObject_MakeTpCall + 640
frame #138: 0x0000000100142a1c python`call_function + 680
frame #139: 0x000000010013f5e8 python`_PyEval_EvalFrameDefault + 29712
frame #140: 0x000000010002e51c python`function_code_fastcall + 128
frame #141: 0x0000000100142984 python`call_function + 528
frame #142: 0x000000010013f570 python`_PyEval_EvalFrameDefault + 29592
frame #143: 0x000000010002e51c python`function_code_fastcall + 128
frame #144: 0x0000000100142984 python`call_function + 528
frame #145: 0x000000010013f570 python`_PyEval_EvalFrameDefault + 29592
frame #146: 0x000000010002e51c python`function_code_fastcall + 128
frame #147: 0x000000010002dc4c python`PyVectorcall_Call + 120
frame #148: 0x000000010013f788 python`_PyEval_EvalFrameDefault + 30128
frame #149: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #150: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #151: 0x0000000100142984 python`call_function + 528
frame #152: 0x000000010013f5e8 python`_PyEval_EvalFrameDefault + 29712
frame #153: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #154: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #155: 0x0000000100142984 python`call_function + 528
frame #156: 0x000000010013f4f8 python`_PyEval_EvalFrameDefault + 29472
frame #157: 0x000000010002e51c python`function_code_fastcall + 128
frame #158: 0x00000001000320a8 python`method_vectorcall + 156
frame #159: 0x0000000100142984 python`call_function + 528
frame #160: 0x000000010013f4f8 python`_PyEval_EvalFrameDefault + 29472
frame #161: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #162: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #163: 0x000000010002d310 python`_PyObject_FastCallDict + 132
frame #164: 0x000000010002f450 python`_PyObject_Call_Prepend + 156
frame #165: 0x00000001000a2d44 python`slot_tp_call + 296
frame #166: 0x000000010002d6dc python`_PyObject_MakeTpCall + 640
frame #167: 0x0000000100142a1c python`call_function + 680
frame #168: 0x000000010013f5e8 python`_PyEval_EvalFrameDefault + 29712
frame #169: 0x000000010002e51c python`function_code_fastcall + 128
frame #170: 0x0000000100142984 python`call_function + 528
frame #171: 0x000000010013f570 python`_PyEval_EvalFrameDefault + 29592
frame #172: 0x000000010002e51c python`function_code_fastcall + 128
frame #173: 0x0000000100142984 python`call_function + 528
frame #174: 0x000000010013f4f8 python`_PyEval_EvalFrameDefault + 29472
frame #175: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #176: 0x0000000100132c8c python`builtin_exec + 1096
frame #177: 0x000000010007ff40 python`cfunction_vectorcall_FASTCALL + 284
frame #178: 0x0000000100142984 python`call_function + 528
frame #179: 0x000000010013f570 python`_PyEval_EvalFrameDefault + 29592
frame #180: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #181: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #182: 0x0000000100142984 python`call_function + 528
frame #183: 0x000000010013f570 python`_PyEval_EvalFrameDefault + 29592
frame #184: 0x0000000100137bc0 python`_PyEval_EvalCodeWithName + 3340
frame #185: 0x000000010002e69c python`_PyFunction_Vectorcall + 236
frame #186: 0x000000010002dc4c python`PyVectorcall_Call + 120
frame #187: 0x00000001001b9ca8 python`pymain_run_module + 224
frame #188: 0x00000001001b8dcc python`Py_RunMain + 1044
frame #189: 0x00000001001ba754 python`pymain_main + 1244
frame #190: 0x0000000100004980 python`main + 56
frame #191: 0x000000019a9a5f34 libdyld.dylib`start + 4
thread #2
frame #0: 0x000000011ea8c534 libfaiss.dylib`faiss::(anonymous namespace)::IVFSQScannerIP<faiss::(anonymous namespace)::DCTemplate<faiss::(anonymous namespace)::QuantizerTemplate<faiss::(anonymous namespace)::Codec8bit, false, 1>, faiss::(anonymous namespace)::SimilarityIP<1>, 1> >::scan_codes_range(unsigned long, unsigned char const*, long long const*, float, faiss::RangeQueryResult&) const + 184
frame #1: 0x000000011e9fc528 libfaiss.dylib`faiss::IndexIVF::range_search_preassigned(long long, float const*, float, long long const*, float const*, faiss::RangeSearchResult*, bool, faiss::IVFSearchParameters const*, faiss::IndexIVFStats*) const::$_4::operator()(unsigned long, unsigned long, faiss::RangeQueryResult&) const + 308
frame #2: 0x000000011e9fbe38 libfaiss.dylib`.omp_outlined..20 + 844
frame #3: 0x0000000104b7b5bc libomp.dylib`__kmp_invoke_microtask + 156
frame #4: 0x0000000104b32464 libomp.dylib`__kmp_invoke_task_func + 328
frame #5: 0x0000000104b31838 libomp.dylib`__kmp_launch_thread + 400
frame #6: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #7: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
* thread #3, stop reason = signal SIGABRT
* frame #0: 0x000000019a958cec libsystem_kernel.dylib`__pthread_kill + 8
frame #1: 0x000000019a989c24 libsystem_pthread.dylib`pthread_kill + 292
frame #2: 0x000000019a8d1864 libsystem_c.dylib`abort + 104
frame #3: 0x000000011e9fc170 libfaiss.dylib`.omp_outlined..20 + 1668
frame #4: 0x0000000104b7b5bc libomp.dylib`__kmp_invoke_microtask + 156
frame #5: 0x0000000104b32464 libomp.dylib`__kmp_invoke_task_func + 328
frame #6: 0x0000000104b31838 libomp.dylib`__kmp_launch_thread + 400
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #4
frame #0: 0x000000011ea8c4d4 libfaiss.dylib`faiss::(anonymous namespace)::IVFSQScannerIP<faiss::(anonymous namespace)::DCTemplate<faiss::(anonymous namespace)::QuantizerTemplate<faiss::(anonymous namespace)::Codec8bit, false, 1>, faiss::(anonymous namespace)::SimilarityIP<1>, 1> >::scan_codes_range(unsigned long, unsigned char const*, long long const*, float, faiss::RangeQueryResult&) const + 88
frame #1: 0x000000011e9fc528 libfaiss.dylib`faiss::IndexIVF::range_search_preassigned(long long, float const*, float, long long const*, float const*, faiss::RangeSearchResult*, bool, faiss::IVFSearchParameters const*, faiss::IndexIVFStats*) const::$_4::operator()(unsigned long, unsigned long, faiss::RangeQueryResult&) const + 308
frame #2: 0x000000011e9fbe38 libfaiss.dylib`.omp_outlined..20 + 844
frame #3: 0x0000000104b7b5bc libomp.dylib`__kmp_invoke_microtask + 156
frame #4: 0x0000000104b32464 libomp.dylib`__kmp_invoke_task_func + 328
frame #5: 0x0000000104b31838 libomp.dylib`__kmp_launch_thread + 400
frame #6: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #7: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #7
frame #0: 0x000000019a9529c4 libsystem_kernel.dylib`__workq_kernreturn + 8
thread #12
frame #0: 0x000000019a9529c4 libsystem_kernel.dylib`__workq_kernreturn + 8
thread #13
frame #0: 0x000000019a9529c4 libsystem_kernel.dylib`__workq_kernreturn + 8
thread #14
frame #0: 0x000000019a9529c4 libsystem_kernel.dylib`__workq_kernreturn + 8
thread #15
frame #0: 0x000000019a9529c4 libsystem_kernel.dylib`__workq_kernreturn + 8
thread #16
frame #0: 0x000000019a9529c4 libsystem_kernel.dylib`__workq_kernreturn + 8
thread #17
frame #0: 0x000000019a9529c4 libsystem_kernel.dylib`__workq_kernreturn + 8
thread #18
frame #0: 0x000000019a9529c4 libsystem_kernel.dylib`__workq_kernreturn + 8
thread #19
frame #0: 0x000000019a954488 libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000019a98a568 libsystem_pthread.dylib`_pthread_cond_wait + 1192
frame #2: 0x0000000104b65530 libomp.dylib`__kmp_suspend_64 + 352
frame #3: 0x0000000104b4f558 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 1964
frame #4: 0x0000000104b4b7d4 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 168
frame #5: 0x0000000104b4ea6c libomp.dylib`__kmp_fork_barrier(int, int) + 496
frame #6: 0x0000000104b317e4 libomp.dylib`__kmp_launch_thread + 316
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #20
frame #0: 0x000000019a954488 libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000019a98a568 libsystem_pthread.dylib`_pthread_cond_wait + 1192
frame #2: 0x0000000104b65530 libomp.dylib`__kmp_suspend_64 + 352
frame #3: 0x0000000104b4f558 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 1964
frame #4: 0x0000000104b4b7d4 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 168
frame #5: 0x0000000104b4ea6c libomp.dylib`__kmp_fork_barrier(int, int) + 496
frame #6: 0x0000000104b317e4 libomp.dylib`__kmp_launch_thread + 316
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #21
frame #0: 0x000000019a954488 libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000019a98a568 libsystem_pthread.dylib`_pthread_cond_wait + 1192
frame #2: 0x0000000104b65530 libomp.dylib`__kmp_suspend_64 + 352
frame #3: 0x0000000104b4f558 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 1964
frame #4: 0x0000000104b4b7d4 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 168
frame #5: 0x0000000104b4ea6c libomp.dylib`__kmp_fork_barrier(int, int) + 496
frame #6: 0x0000000104b317e4 libomp.dylib`__kmp_launch_thread + 316
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #22
frame #0: 0x000000019a954488 libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000019a98a568 libsystem_pthread.dylib`_pthread_cond_wait + 1192
frame #2: 0x0000000104b65530 libomp.dylib`__kmp_suspend_64 + 352
frame #3: 0x0000000104b4f558 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 1964
frame #4: 0x0000000104b4b7d4 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 168
frame #5: 0x0000000104b4ea6c libomp.dylib`__kmp_fork_barrier(int, int) + 496
frame #6: 0x0000000104b317e4 libomp.dylib`__kmp_launch_thread + 316
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #23
frame #0: 0x000000019a954488 libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000019a98a568 libsystem_pthread.dylib`_pthread_cond_wait + 1192
frame #2: 0x0000000104b65530 libomp.dylib`__kmp_suspend_64 + 352
frame #3: 0x0000000104b4f558 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 1964
frame #4: 0x0000000104b4b7d4 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 168
frame #5: 0x0000000104b4ea6c libomp.dylib`__kmp_fork_barrier(int, int) + 496
frame #6: 0x0000000104b317e4 libomp.dylib`__kmp_launch_thread + 316
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #24
frame #0: 0x000000019a954488 libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000019a98a568 libsystem_pthread.dylib`_pthread_cond_wait + 1192
frame #2: 0x0000000104b65530 libomp.dylib`__kmp_suspend_64 + 352
frame #3: 0x0000000104b4f558 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 1964
frame #4: 0x0000000104b4b7d4 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 168
frame #5: 0x0000000104b4ea6c libomp.dylib`__kmp_fork_barrier(int, int) + 496
frame #6: 0x0000000104b317e4 libomp.dylib`__kmp_launch_thread + 316
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #25
frame #0: 0x000000019a954488 libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000019a98a568 libsystem_pthread.dylib`_pthread_cond_wait + 1192
frame #2: 0x0000000104b65530 libomp.dylib`__kmp_suspend_64 + 352
frame #3: 0x0000000104b4f558 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 1964
frame #4: 0x0000000104b4b7d4 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 168
frame #5: 0x0000000104b4ea6c libomp.dylib`__kmp_fork_barrier(int, int) + 496
frame #6: 0x0000000104b317e4 libomp.dylib`__kmp_launch_thread + 316
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #26
frame #0: 0x000000019a954488 libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000019a98a568 libsystem_pthread.dylib`_pthread_cond_wait + 1192
frame #2: 0x0000000104b65530 libomp.dylib`__kmp_suspend_64 + 352
frame #3: 0x0000000104b4f558 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 1964
frame #4: 0x0000000104b4b7d4 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 168
frame #5: 0x0000000104b4ea6c libomp.dylib`__kmp_fork_barrier(int, int) + 496
frame #6: 0x0000000104b317e4 libomp.dylib`__kmp_launch_thread + 316
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #27
frame #0: 0x000000019a954488 libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000019a98a568 libsystem_pthread.dylib`_pthread_cond_wait + 1192
frame #2: 0x0000000104b65530 libomp.dylib`__kmp_suspend_64 + 352
frame #3: 0x0000000104b4f558 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 1964
frame #4: 0x0000000104b4b7d4 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 168
frame #5: 0x0000000104b4ea6c libomp.dylib`__kmp_fork_barrier(int, int) + 496
frame #6: 0x0000000104b317e4 libomp.dylib`__kmp_launch_thread + 316
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #28
frame #0: 0x000000019a954488 libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000019a98a568 libsystem_pthread.dylib`_pthread_cond_wait + 1192
frame #2: 0x0000000104b65530 libomp.dylib`__kmp_suspend_64 + 352
frame #3: 0x0000000104b4f558 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 1964
frame #4: 0x0000000104b4b7d4 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 168
frame #5: 0x0000000104b4ea6c libomp.dylib`__kmp_fork_barrier(int, int) + 496
frame #6: 0x0000000104b317e4 libomp.dylib`__kmp_launch_thread + 316
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #29
frame #0: 0x000000019a954488 libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000019a98a568 libsystem_pthread.dylib`_pthread_cond_wait + 1192
frame #2: 0x0000000104b65530 libomp.dylib`__kmp_suspend_64 + 352
frame #3: 0x0000000104b4f558 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 1964
frame #4: 0x0000000104b4b7d4 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 168
frame #5: 0x0000000104b4ea6c libomp.dylib`__kmp_fork_barrier(int, int) + 496
frame #6: 0x0000000104b317e4 libomp.dylib`__kmp_launch_thread + 316
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
thread #30
frame #0: 0x000000019a954488 libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000019a98a568 libsystem_pthread.dylib`_pthread_cond_wait + 1192
frame #2: 0x0000000104b65530 libomp.dylib`__kmp_suspend_64 + 352
frame #3: 0x0000000104b4f558 libomp.dylib`kmp_flag_64::wait(kmp_info*, int, void*) + 1964
frame #4: 0x0000000104b4b7d4 libomp.dylib`__kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*) + 168
frame #5: 0x0000000104b4ea6c libomp.dylib`__kmp_fork_barrier(int, int) + 496
frame #6: 0x0000000104b317e4 libomp.dylib`__kmp_launch_thread + 316
frame #7: 0x0000000104b63ec4 libomp.dylib`__kmp_launch_worker(void*) + 280
frame #8: 0x000000019a98a06c libsystem_pthread.dylib`_pthread_start + 320
OK yeah, that's the same failure as https://github.com/facebookresearch/faiss/issues/1696. Thanks a lot for your help!
Interestingly, this now passed the test suite (on osx-64). 🥳 @xhochy, would you be so kind as to run the test suite for osx-arm again? Artefacts are here.
Ping @xhochy - if you could run the test suite on osx-arm again, that would be awesome :)
@conda-forge/help-osx-arm64
I'd really like to test the artefacts before merging, but I don't have an M1. Could someone download the artefact & run conda build --test <path/to/unpacked/folder>/osx-arm64/faiss-1.7.1-py38h<hash>_cpu.tar? Otherwise, I'll just merge this in a week or so and hope for the best.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
I do have some suggestions for making it better though...
For recipe: consider the py selector, for example # [py>=36]. See lines [94, 177, 241, 251].
Artifacts were already deleted :( I restarted the build to get new ones.
This sadly has issues with package signing:
export PREFIX=/Users/uwe/mambaforge/conda-bld/faiss_1635405046185/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_p
export SRC_DIR=/Users/uwe/mambaforge/conda-bld/faiss_1635405046185/test_tmp
/Users/uwe/mambaforge/conda-bld/faiss_1635405046185/test_tmp/conda_test_runner.sh: line 3: 8558 Killed: 9 "/Users/uwe/mambaforge/conda-bld/faiss_1635405046185/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_p/bin/python" -s "/Users/uwe/mambaforge/conda-bld/faiss_1635405046185/test_tmp/run_test.py"
Leaving build/test directories:
Work:
/Users/uwe/mambaforge/conda-bld/work
Test:
/Users/uwe/mambaforge/conda-bld/test_tmp
Leaving build/test environments:
Test:
source activate /Users/uwe/mambaforge/conda-bld/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_p
Build:
source activate /Users/uwe/mambaforge/conda-bld/_build_env
TESTS FAILED: faiss-1.7.1-py39h29014a9_0_cpu.tar.bz2
From the kernel log:
2021-10-28 09:11:07.715609+0200 0x147bcb4 Default 0x0 0 0 kernel: CODE SIGNING: cs_invalid_page(0x104cfc000): p=8558[python3.9] final status 0x23000200, denying page sending SIGKILL
2021-10-28 09:11:07.715618+0200 0x147bcb4 Default 0x0 0 0 kernel: CODE SIGNING: process 8558[python3.9]: rejecting invalid page at address 0x104cfc000 from offset 0x0 in file "/Users/uwe/mambaforge/conda-bld/faiss_1635405046185/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_p/lib/python3.9/site-packages/faiss/_swigfaiss.so" (cs_mtime:1635398350.0 == mtime:1635398350.0) (signed:1 validated:1 tainted:1 nx:0 wpmapped:0 dirty:0 depth:0)
2021-10-28 09:11:07.715663+0200 0x147bcb4 Default 0x0 0 0 kernel: python3.9[8558] Corpse allowed 1 of 5
The following line from the build log could be the culprit; at the least it might trigger a code path in conda-build's post-processing that invalidates the signature.
2021-10-28T05:19:16.6060700Z WARNING (faiss,lib/python3.9/site-packages/faiss/_swigfaiss.so): /Users/runner/miniforge3/conda-bld/faiss-split_1635397576720/work/_libfaiss_generic_stage/lib/libfaiss.dylib not found in packages, sysroot(s) nor the missing_dso_whitelist.
2021-10-28T05:19:16.6062510Z .. is this binary repackaging?
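(As an aside, the signature state of the relinked extension can be inspected directly on the arm64 machine; this is only a sketch using Apple's codesign tool, with the path taken from the log above.)
# Check whether the repackaged extension still carries a valid (ad-hoc) signature;
# an invalid one is what makes the kernel SIGKILL the process on arm64.
codesign --verify --verbose "$PREFIX/lib/python3.9/site-packages/faiss/_swigfaiss.so"
# Ad-hoc re-signing usually makes a modified Mach-O loadable again:
codesign --force --sign - "$PREFIX/lib/python3.9/site-packages/faiss/_swigfaiss.so"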
Thanks for testing! :)
> This sadly has issues with package signing:
I'm quite surprised about this, because basically nothing has changed about the build script since the last time you tested it (where it did run through the test suite with one failed test). The version did get bumped in the meantime, but that's about it...
Looking a bit closer, the OSX-x64 build has exactly the same warning:
WARNING (faiss,lib/python3.9/site-packages/faiss/_swigfaiss.so): /Users/runner/miniforge3/conda-bld/faiss-split_1635397590038/work/_libfaiss_generic_stage/lib/libfaiss.dylib not found in packages, sysroot(s) nor the missing_dso_whitelist.
.. is this binary repackaging?
but apparently there we're definitely running through the test suite successfully...
osx-64 isn't checking any code signatures, only on osx-arm64 they are enforced by the OS.
> osx-64 isn't checking any code signatures, only on osx-arm64 they are enforced by the OS.
OK, thanks.
Could it also be the following?
The install/build script(s) for faiss deleted the following files (from dependencies) from the prefix:
['lib/python3.9/site-packages/setuptools/command/__pycache__/build_py.cpython-39.pyc', ...
..., 'lib/python3.9/site-packages/numpy/doc/__pycache__/dispatch.cpython-39.pyc']
This will cause the post-link checks to mis-report. Please try not to delete and files (DSOs in particular) from the prefix
Not sure where this is coming from (setuptools? numpy?), because I'm not deleting anything there.
But my suspicion is now that something is going wrong with the temporary install paths. On Linux:
INFO (faiss,lib/python3.7/site-packages/faiss/_swigfaiss.so): Needed DSO lib/libfaiss.so found in home/conda/feedstock_root/build_artifacts::libfaiss-1.7.1-hb573701_0_cpu
On OSX (note _libfaiss_generic_stage):
WARNING (faiss,lib/python3.9/site-packages/faiss/_swigfaiss.so): /Users/runner/miniforge3/conda-bld/faiss-split_1635397576720/work/_libfaiss_generic_stage/lib/libfaiss.dylib not found in packages, sysroot(s) nor the missing_dso_whitelist.
This is specified in build-pkg.sh in only one place:
# Build vanilla version (no avx2), see build-lib.sh
cmake -B _build_python_generic \
-Dfaiss_ROOT=_libfaiss_generic_stage/ \
[...]
so -Dfaiss_ROOT seems to have different effects on linux/osx.
The upstream CMakeLists (actually the whole repo) does not mention faiss_ROOT. Is this a CMake-constructed variable based on project+"_ROOT"?
@conda-forge-admin, please rerender
> The upstream CMakeLists (actually the whole repo) does not mention faiss_ROOT. Is this a CMake-constructed variable based on project+"_ROOT"?
foo_ROOT is a CMake magic variable that is used when searching for the library via find_package(foo).
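(Concretely, since CMake 3.12 / policy CMP0074, <PackageName>_ROOT is consulted as a search prefix by find_package(), both as a cache variable and as an environment variable; a minimal sketch with illustrative paths:)
# Both forms below tell find_package(faiss) to look under the staged install first.
cmake -B _build_python_generic -Dfaiss_ROOT="$PWD/_libfaiss_generic_stage"
faiss_ROOT="$PWD/_libfaiss_generic_stage" cmake -B _build_python_generic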
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
@conda-forge/core
I'm baffled why the job for osx-arm here only builds for one python version:
anaconda upload \
/Users/runner/miniforge3/conda-bld/osx-arm64/libfaiss-1.7.2-h1234567_0_cpu.tar.bz2 \
/Users/runner/miniforge3/conda-bld/osx-arm64/faiss-1.7.2-py38hc77d0a9_0_cpu.tar.bz2
Compare linux:
anaconda upload \
/home/conda/feedstock_root/build_artifacts/linux-64/libfaiss-1.7.2-cuda102h40fed6a_0_cuda.tar.bz2 \
/home/conda/feedstock_root/build_artifacts/linux-64/faiss-1.7.2-py310cuda102h8ca44b4_0_cuda.tar.bz2 \
/home/conda/feedstock_root/build_artifacts/linux-64/faiss-1.7.2-py38cuda102he6546ce_0_cuda.tar.bz2 \
/home/conda/feedstock_root/build_artifacts/linux-64/faiss-1.7.2-py37cuda102hc2a357d_0_cuda.tar.bz2 \
/home/conda/feedstock_root/build_artifacts/linux-64/faiss-1.7.2-py39cuda102hff432de_0_cuda.tar.bz2
I even tried to force this with https://github.com/conda-forge/faiss-split-feedstock/pull/39/commits/d460fb2baf8f3b94bd7cc7722c52d54aec52560d, but no cigar.
@conda-forge-admin, please rerender
@conda-forge/help-osx-arm64 This PR is finally close to finishing after languishing for +/- 1.5 years with some weird bugs, but now I have the problem that osx-arm apparently builds for only one python version rather than 3.{8,9,10}. In the previous run it chose 3.8, now it's 3.10 (see also comment above):
BUILD START: ['faiss-proc-1.0.0-cpu.tar.bz2', 'libfaiss-1.7.2-h1234567_1_cpu.tar.bz2', 'faiss-1.7.2-py310hc77d0a9_1_cpu.tar.bz2', 'faiss-cpu-1.7.2-h15cdf91_1.tar.bz2']
Any idea/tips how to fix this?
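(One way to debug this locally, sketched below with an illustrative .ci_support filename: conda-build can list the outputs it would produce for a given variant config without building anything.)
# Print the package paths conda-build plans to create for the osx-arm64 variant config.
conda build recipe/ -m .ci_support/osx_arm64_.yaml --output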
I think recipe generation is problematic because you have a top-level requirements section without a host section
> I think recipe generation is problematic because you have a top-level requirements section without a host section
Thanks for your input!
I'm wondering, wouldn't adding python there most likely split up all the builds (of currently libfaiss + faiss * python) into builds per python version?
Also, I'm confused what makes osx-arm special here - it worked for everything else so far... 🤔
@conda-forge-admin, please rerender
Hi! This is the friendly automated conda-forge-webservice.
I tried to rerender for you, but it looks like there was nothing to do.
This message was generated by GitHub actions workflow run https://github.com/conda-forge/faiss-split-feedstock/actions/runs/3071588717.
@hmaarrfk This ended up not changing anything, osx-arm is still only building 3.10...
Yeah .. I'm not sure. I don't like these top level build recipes
@conda-forge-admin, please re-render
Hi! This is the friendly automated conda-forge-linting service.
I was trying to look for recipes to lint for you, but it appears we have a merge conflict. Please try to merge or rebase with the base branch to resolve this conflict.
Please ping the 'conda-forge/core' team (using the @ notation in a comment) if you believe this is a bug.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
Took the liberty of fixing conflicts and refreshing. Some of the changes from PR https://github.com/conda-forge/faiss-split-feedstock/pull/58 work just as well here, so this should simplify things a bit.
Looks like the macOS ARM build passed. Not sure if there is anything else to check for here. Other CI jobs will probably take a while to complete (though are not directly affected by the changes here)
> Looks like the macOS ARM build passed. Not sure if there is anything else to check for here. Other CI jobs will probably take a while to complete (though are not directly affected by the changes here)
Please check the history of this PR. We need to check the artefacts (they used to segfault), and last I checked it was only building for one python version on osx-arm.
@conda-forge-admin please rerender
I downloaded the artefacts in #66 and ran the test on my osx-arm64 machine. There are no segfaults anymore, and just one failing test:
============================================================================================================================================ FAILURES ============================================================================================================================================
__________________________________________________________________________________________________________________________________ TestPreassigned.test_binary ___________________________________________________________________________________________________________________________________
self = <test_contrib.TestPreassigned testMethod=test_binary>
def test_binary(self):
ds = datasets.SyntheticDataset(128, 2000, 2000, 200)
d = ds.d
xt = ds.get_train()
xq = ds.get_queries()
xb = ds.get_database()
# define alternative quantizer on the 20 first dims of vectors (will be in float)
km = faiss.Kmeans(20, 50)
km.train(xt[:, :20].copy())
alt_quantizer = km.index
binarizer = faiss.index_factory(d, "ITQ,LSHt")
binarizer.train(xt)
xb_bin = binarizer.sa_encode(xb)
xq_bin = binarizer.sa_encode(xq)
index = faiss.index_binary_factory(d, "BIVF200")
fake_centroids = np.zeros((index.nlist, index.d // 8), dtype="uint8")
index.quantizer.add(fake_centroids)
index.is_trained = True
# add elements xb
a = alt_quantizer.search(xb[:, :20].copy(), 1)[1].ravel()
ivf_tools.add_preassigned(index, xb_bin, a)
# search elements xq, increase nprobe, check 4 first results w/ groundtruth
prev_inter_perf = 0
for nprobe in 1, 10, 20:
index.nprobe = nprobe
a = alt_quantizer.search(xq[:, :20].copy(), index.nprobe)[1]
D, I = ivf_tools.search_preassigned(index, xq_bin, 4, a)
inter_perf = (I == ds.get_groundtruth()[:, :4]).sum() / I.size
> self.assertTrue(inter_perf >= prev_inter_perf)
E AssertionError: False is not true
faiss/tests/test_contrib.py:373: AssertionError
@h-vetinari @hmaarrfk would you be happy to merge if I get the osx-arm64 build to work for all Python versions? Or are you concerned about this test failure?
> I downloaded the artefacts in #66 and ran the test on my osx-arm64 machine. There are no segfaults anymore, and just one failing test:
That's amazing, thanks a lot! ❤️
I had run out of steam on this one...
I think we can live with one test failure (skipped for now), though it would be nice to then also build faiss 1.7.4, check if the error is still there, and if so, raise an issue upstream.
It seems that a newer CMake version (?) is breaking something about our CUDA detection on windows:
CMake Warning at D:/bld/faiss-split_1684901992441/_build_env/Library/share/cmake-3.26/Modules/CMakeDetermineCUDACompiler.cmake:15 (message):
Visual Studio does not support specifying CUDAHOSTCXX or
CMAKE_CUDA_HOST_COMPILER. Using the C++ compiler provided by Visual
Studio.
Call Stack (most recent call first):
CMakeLists.txt:28 (enable_language)
CMake Error at D:/bld/faiss-split_1684901992441/_build_env/Library/share/cmake-3.26/Modules/CMakeDetermineCompilerId.cmake:501 (message):
No CUDA toolset found.
Since it's not the main goal of this PR, I guess we could also not care about the windows builds, but I'll try capping CMake at least. We should also switch the CUDA-on-aarch builds to cross-compilation now that that's available. Lots to do here... 😅
> This ended up not changing anything, osx-arm is still only building 3.10...
Judging from the CI in #66, we still have to deal with this problem though. This is currently only building for one python version (only on osx-arm)
Yes correct, we need to fix the single Python build issue; any hints would be appreciated
> Yes correct, we need to fix the single Python build issue; any hints would be appreciated
Likewise! Honestly, it looks like a conda-build bug to me, but then, this recipe does a few weird things (like looping over the output names to get the avx2 variants; I'd very much like to do this with archspec, but alas, that effort has completely stalled 😢), so it's possible that we're running into some weird edge cases.
See #66 - a few additional build dependencies on python will do the job. I’m going on leave from tomorrow evening for quite a while, please go ahead and integrate the small changes here if you have the cycles.
Edit: seems like a single additional dep is all we need.
Builds fail with AutoTune.h not found - is that due to the change to ninja? Don’t see why this would matter ..
This feedstock is being rebuilt as part of the ARM OSX migration.
Feel free to merge the PR if CI is all green, but please don't close it without reaching out to the ARM OSX team first at @conda-forge/help-osx-arm64.
Closes #33
If this PR was opened in error or needs to be updated please add the bot-rerun label to this PR. The bot will close this PR and schedule another one. If you do not have permissions to add this label, you can use the phrase @<space/>conda-forge-admin, please rerun bot in a PR comment to have the conda-forge-admin add it for you.
This PR was created by the regro-cf-autotick-bot. The regro-cf-autotick-bot is a service to automatically track the dependency graph, migrate packages, and propose package version updates for conda-forge. If you would like a local version of this bot, you might consider using rever. Rever is a tool for automating software releases and forms the backbone of the bot's conda-forge PRing capability. Rever is both conda (conda install -c conda-forge rever) and pip (pip install re-ver) installable. Finally, feel free to drop us a line if there are any issues! This PR was generated by https://github.com/regro/autotick-bot/actions/runs/680587027, please use this URL for debugging