Closed pgagarinov closed 1 year ago
The reason you got this error is that you are using the TVM Unity pre-built wheel for CPU; you should install the CUDA 11.8 version instead:
pip install --pre mlc-ai-nightly-cu118 -f https://mlc.ai/wheels
I also noticed that you are using Python 3.11, and we haven't provided pre-built CUDA wheels for Python 3.11 yet. We are fixing this in https://github.com/mlc-ai/package/pull/19, so you can expect them soon.
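The comment above boils down to a wheel/interpreter mismatch. As a minimal sketch (the supported-version set below is an assumption based on the thread, where CUDA wheels for Python 3.11 were still pending), one could check the running interpreter before attempting the install:

```python
# Sketch: guard against installing a wheel with no build for the running
# interpreter. SUPPORTED_PYTHONS is a hypothetical set inferred from the
# thread, not an official compatibility list.
import sys

SUPPORTED_PYTHONS = {(3, 8), (3, 9), (3, 10)}

def cuda_wheel_available(major: int, minor: int) -> bool:
    """Return True if a pre-built CUDA wheel is assumed to exist for this Python."""
    return (major, minor) in SUPPORTED_PYTHONS

# Check the current interpreter before running pip:
print(cuda_wheel_available(sys.version_info.major, sys.version_info.minor))
```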
I encountered a similar error (LLVM ERROR:) when running:
python3 ./build.py --hf-path databricks/dolly-v2-3b --target cuda
Output:
Using path "dist/models/dolly-v2-3b" for model "dolly-v2-3b"
Database paths: ['log_db/rwkv-raven-1b5', 'log_db/vicuna-v1-7b', 'log_db/redpajama-3b-q4f16', 'log_db/rwkv-raven-7b', 'log_db/redpajama-3b-q4f32', 'log_db/rwkv-raven-3b', 'log_db/dolly-v2-3b']
Target configured: cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32
LLVM ERROR:
Build environment: USE_GTEST: AUTO SUMMARIZE: OFF USE_IOS_RPC: OFF CUDA_VERSION: 11.2 USE_LIBBACKTRACE: AUTO DLPACK_PATH: 3rdparty/dlpack/include USE_TENSORRT_CODEGEN: OFF USE_THRUST: OFF USE_TARGET_ONNX: OFF USE_AOT_EXECUTOR: ON BUILD_DUMMY_LIBTVM: OFF USE_CUDNN: ON USE_TENSORRT_RUNTIME: OFF USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF USE_CCACHE: AUTO USE_ARM_COMPUTE_LIB: OFF USE_CPP_RTVM: OFF USE_OPENCL_GTEST: /path/to/opencl/gtest USE_MKL: OFF USE_PT_TVMDSOOP: OFF USE_CLML: OFF USE_STACKVM_RUNTIME: OFF USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF ROCM_PATH: /opt/rocm USE_DNNL: OFF USE_VITIS_AI: OFF USE_LLVM: ON USE_VERILATOR: OFF USE_TF_TVMDSOOP: OFF USE_THREADS: ON USE_MSVC_MT: OFF BACKTRACE_ON_SEGFAULT: OFF USE_GRAPH_EXECUTOR: ON USE_ROCBLAS: OFF GIT_COMMIT_HASH: 6fd55bcfecc7abcc707339d7a8ba493f0048b613 USE_VULKAN: OFF USE_RUST_EXT: OFF USE_CUTLASS: OFF USE_CPP_RPC: OFF USE_HEXAGON: OFF USE_CUSTOM_LOGGING: OFF USE_UMA: OFF USE_FALLBACK_STL_MAP: OFF USE_SORT: ON USE_RTTI: ON GIT_COMMIT_TIME: 2023-06-05 12:18:09 -0700 USE_HEXAGON_SDK: /path/to/sdk USE_BLAS: none USE_ETHOSN: OFF USE_LIBTORCH: OFF USE_RANDOM: ON USE_CUDA: ON USE_COREML: OFF USE_AMX: OFF BUILD_STATIC_RUNTIME: OFF USE_CMSISNN: OFF USE_KHRONOS_SPIRV: OFF USE_CLML_GRAPH_EXECUTOR: OFF USE_TFLITE: OFF USE_HEXAGON_GTEST: /path/to/hexagon/gtest PICOJSON_PATH: 3rdparty/picojson USE_OPENCL_ENABLE_HOST_PTR: OFF INSTALL_DEV: OFF USE_PROFILER: ON USE_NNPACK: OFF LLVM_VERSION: 6.0.0 USE_OPENCL: OFF COMPILER_RT_PATH: 3rdparty/compiler-rt RANG_PATH: 3rdparty/rang/include USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF USE_OPENMP: none USE_BNNS: OFF USE_CUBLAS: ON USE_METAL: OFF USE_MICRO_STANDALONE_RUNTIME: OFF USE_HEXAGON_EXTERNAL_LIBS: OFF USE_ALTERNATIVE_LINKER: AUTO USE_BYODT_POSIT: OFF USE_HEXAGON_RPC: OFF USE_MICRO: OFF DMLC_PATH: 3rdparty/dmlc-core/include INDEX_DEFAULT_I64: ON USE_RELAY_DEBUG: OFF USE_RPC: ON USE_TENSORFLOW_PATH: none TVM_CLML_VERSION: USE_MIOPEN: OFF USE_ROCM: OFF USE_PAPI: OFF USE_CURAND: OFF TVM_CXX_COMPILER_PATH: /usr/bin/c++ HIDE_PRIVATE_SYMBOLS: OFF
Hi @pgagarinov, we have updated the wheels, and mlc-ai-nightly-cu118 for Python 3.11 is now available. Please try uninstalling mlc-ai-nightly and installing mlc-ai-nightly-cu118 instead. That should solve the issue:
pip uninstall mlc-ai-nightly
pip install --pre mlc-ai-nightly-cu118 -f https://mlc.ai/wheels
Hi @yongbing would you mind creating another issue and elaborate on the LLVM error there? These two errors do not look similar.
OK, I have already created a new issue, https://github.com/mlc-ai/mlc-llm/issues/356, and referenced you. Thanks.
I believe this issue will be gone following Zihao's suggestion: https://github.com/mlc-ai/mlc-llm/issues/339#issuecomment-1579587863. Please feel free to create a new one if it persists.
🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Expected behavior
I expect the compilation to succeed.
Environment
python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))"
(applicable if you compile models)

Additional context
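The libinfo() command above prints dozens of build flags. As a minimal sketch of how one might narrow that dump down to the flags relevant to a CUDA build problem (a sample dict stands in for `tvm.support.libinfo()` so the snippet runs without TVM installed; the keys mirror the dump earlier in the thread):

```python
# Sketch: filter a libinfo()-style dump down to GPU/LLVM-related flags.
# sample_libinfo is illustrative data copied from the thread, not live output.
sample_libinfo = {
    "USE_CUDA": "ON",
    "USE_CUBLAS": "ON",
    "USE_LLVM": "ON",
    "LLVM_VERSION": "6.0.0",
    "CUDA_VERSION": "11.2",
    "USE_OPENCL": "OFF",
}

GPU_KEYS = {"USE_CUDA", "USE_CUBLAS", "CUDA_VERSION", "USE_LLVM", "LLVM_VERSION"}

def gpu_flags(libinfo: dict) -> dict:
    """Keep only the build flags relevant to GPU/LLVM diagnostics."""
    return {k: v for k, v in libinfo.items() if k in GPU_KEYS}

for key, value in sorted(gpu_flags(sample_libinfo).items()):
    print(f"{key}: {value}")
```

With a real TVM install, `sample_libinfo` would be replaced by `tvm.support.libinfo()`.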