mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0
19.2k stars 1.58k forks

[Bug] compilation for CUDA fails on Linux with CUDA 11.8 #339

Closed pgagarinov closed 1 year ago

pgagarinov commented 1 year ago

πŸ› Bug

❯ python build.py --hf=eachadea/vicuna-7b-1.1 --target cuda
Weights exist at dist/models/vicuna-7b-1.1, skipping download.
Using path "dist/models/vicuna-7b-1.1" for model "vicuna-7b-1.1"
Database paths: ['log_db/vicuna-v1-7b', 'log_db/rwkv-raven-1b5', 'log_db/redpajama-3b-q4f16', 'log_db/dolly-v2-3b', 'log_db/redpajama-3b-q4f32', 'log_db/rwkv-raven-7b', 'log_db/rwkv-raven-3b']
[23:33:00] /workspace/tvm/src/target/target_kind.cc:163: Warning: Unable to detect CUDA version, default to "-arch=sm_50" instead
Target configured: cuda -keys=cuda,gpu -arch=sm_50 -max_num_threads=1024 -thread_warp_size=32
Failed to detect local GPU, falling back to CPU as a target
Automatically using target for weight quantization: llvm -keys=cpu
Start computing and quantizing weights... This may take a while.
Finish computing and quantizing weights.
Total param size: 2.8401594161987305 GB
Start storing to cache dist/vicuna-7b-1.1-q3f16_0/params
[0519/0519] saving param_518
All finished, 130 total shards committed, record saved to dist/vicuna-7b-1.1-q3f16_0/params/ndarray-cache.json
Save a cached module to dist/vicuna-7b-1.1-q3f16_0/mod_cache_before_build_cuda.pkl.
Dump static shape TIR to dist/vicuna-7b-1.1-q3f16_0/debug/mod_tir_static.py
Dump dynamic shape TIR to dist/vicuna-7b-1.1-q3f16_0/debug/mod_tir_dynamic.py
- Dispatch to pre-scheduled op: fused_NT_matmul2_divide1_maximum1_minimum1_cast3
- Dispatch to pre-scheduled op: fused_NT_matmul3_multiply1
- Dispatch to pre-scheduled op: fused_softmax_cast1
- Dispatch to pre-scheduled op: decode2
- Dispatch to pre-scheduled op: fused_NT_matmul4_add1
- Dispatch to pre-scheduled op: fused_decode5_fused_matmul_add
- Dispatch to pre-scheduled op: rms_norm
- Dispatch to pre-scheduled op: fused_decode6_fused_matmul2_silu
- Dispatch to pre-scheduled op: fused_decode7_fused_matmul3_add
- Dispatch to pre-scheduled op: fused_decode5_matmul
- Dispatch to pre-scheduled op: decode1
- Dispatch to pre-scheduled op: fused_min_max_triu_te_broadcast_to
- Dispatch to pre-scheduled op: NT_matmul1
- Dispatch to pre-scheduled op: matmul1
- Dispatch to pre-scheduled op: fused_decode6_fused_matmul2_multiply
- Dispatch to pre-scheduled op: fused_NT_matmul1_add1
- Dispatch to pre-scheduled op: fused_decode4_fused_matmul4_cast2
- Dispatch to pre-scheduled op: matmul6
- Dispatch to pre-scheduled op: fused_softmax1_cast4
- Dispatch to pre-scheduled op: fused_NT_matmul_divide_maximum_minimum_cast
- Dispatch to pre-scheduled op: decode3
- Dispatch to pre-scheduled op: fused_NT_matmul3_silu1
Traceback (most recent call last):
  File "/home/peter/_Git/mlc-llm/mlc-llm/build.py", line 417, in <module>
    main()
  File "/home/peter/_Git/mlc-llm/mlc-llm/build.py", line 409, in main
    build(mod, ARGS)
  File "/home/peter/_Git/mlc-llm/mlc-llm/build.py", line 342, in build
    ex = relax.build(mod_deploy, args.target, system_lib=args.system_lib)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/peter/micromamba/envs/mlc-llm-env/lib/python3.11/site-packages/tvm/relax/vm_build.py", line 338, in build
    return _vmlink(builder, target, tir_mod, ext_libs, params, system_lib=system_lib)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/peter/micromamba/envs/mlc-llm-env/lib/python3.11/site-packages/tvm/relax/vm_build.py", line 242, in _vmlink
    lib = tvm.build(
          ^^^^^^^^^^
  File "/home/peter/micromamba/envs/mlc-llm-env/lib/python3.11/site-packages/tvm/driver/build_module.py", line 281, in build
    rt_mod_host = _driver_ffi.tir_to_runtime(annotated_mods, target_host)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/peter/micromamba/envs/mlc-llm-env/lib/python3.11/site-packages/tvm/_ffi/_ctypes/packed_func.py", line 238, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  3: TVMFuncCall
  2: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::runtime::Module (tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target)>::AssignTypedLambda<tvm::{lambda(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target)#6}>(tvm::{lambda(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target)#6}, std::string)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  1: tvm::TIRToRuntime(tvm::runtime::Map<tvm::Target, tvm::IRModule, void, void> const&, tvm::Target const&)
  0: tvm::codegen::Build(tvm::IRModule, tvm::Target)
  File "/workspace/tvm/src/target/codegen.cc", line 57
TVMError:
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
  Check failed: (bf != nullptr) is false: target.build.cuda is not enabled

To Reproduce

Steps to reproduce the behavior:

  1. Follow the steps from https://github.com/mlc-ai/mlc-llm#hugging-face-url for https://huggingface.co/eachadea/vicuna-7b-1.1

Expected behavior

I expect the compilation to succeed.

Environment

Additional context

❯ neofetch
peter@mtorch162
---------------
OS: Manjaro Linux x86_64
Host: KVM/QEMU (Standard PC (Q35 + ICH9, 2009) pc-q35-7.1)
Kernel: 6.1.31-2-MANJARO
Uptime: 5 mins
Packages: 1531 (pacman)
Shell: zsh 5.9
Resolution: 1024x768
Terminal: /dev/pts/0
CPU: Intel Xeon E5-2689 0 (32) @ 2.600GHz
GPU: NVIDIA GeForce RTX 2070
Memory: 667MiB / 32105MiB
❯ micromamba list
List of packages in environment: "/home/peter/micromamba/envs/mlc-llm-env"

  Name                Version       Build                         Channel
───────────────────────────────────────────────────────────────────────────────
  _libgcc_mutex       0.1           conda_forge                   conda-forge
  _openmp_mutex       4.5           2_kmp_llvm                    conda-forge
  blas                2.116         mkl                           conda-forge
  blas-devel          3.9.0         16_linux64_mkl                conda-forge
  brotli              1.0.9         h166bdaf_8                    conda-forge
  brotli-bin          1.0.9         h166bdaf_8                    conda-forge
  bzip2               1.0.8         h7b6447c_0
  c-ares              1.19.1        hd590300_0                    conda-forge
  ca-certificates     2023.01.10    h06a4308_0
  certifi             2023.5.7      pyhd8ed1ab_0                  conda-forge
  charset-normalizer  3.1.0         pyhd8ed1ab_0                  conda-forge
  cuda-cudart         11.8.89       0                             nvidia
  cuda-cupti          11.8.87       0                             nvidia
  cuda-libraries      11.8.0        0                             nvidia
  cuda-nvrtc          11.8.89       0                             nvidia
  cuda-nvtx           11.8.86       0                             nvidia
  cuda-runtime        11.8.0        0                             nvidia
  cudatoolkit         11.7.0        hd8887f6_10                   nvidia
  curl                8.1.2         h409715c_0                    conda-forge
  ffmpeg              4.3           hf484d3e_0                    pytorch
  filelock            3.12.0        pyhd8ed1ab_0                  conda-forge
  freetype            2.12.1        hca18f0e_1                    conda-forge
  gettext             0.21.1        h27087fc_0                    conda-forge
  git                 2.41.0        pl5321h86e50cf_0              conda-forge
  git-lfs             3.3.0         ha770c72_0                    conda-forge
  gmp                 6.2.1         h58526e2_0                    conda-forge
  gnutls              3.6.13        h85f3911_1                    conda-forge
  icu                 72.1          hcb278e6_0                    conda-forge
  idna                3.4           pyhd8ed1ab_0                  conda-forge
  jinja2              3.1.2         pyhd8ed1ab_1                  conda-forge
  jpeg                9e            h0b41bf4_3                    conda-forge
  keyutils            1.6.1         h166bdaf_0                    conda-forge
  krb5                1.20.1        h81ceb04_0                    conda-forge
  lame                3.100         h166bdaf_1003                 conda-forge
  lcms2               2.15          hfd0df8a_0                    conda-forge
  ld_impl_linux-64    2.38          h1181459_1
  lerc                4.0.0         h27087fc_0                    conda-forge
  libblas             3.9.0         16_linux64_mkl                conda-forge
  libbrotlicommon     1.0.9         h166bdaf_8                    conda-forge
  libbrotlidec        1.0.9         h166bdaf_8                    conda-forge
  libbrotlienc        1.0.9         h166bdaf_8                    conda-forge
  libcblas            3.9.0         16_linux64_mkl                conda-forge
  libcublas           11.11.3.6     0                             nvidia
  libcufft            10.9.0.58     0                             nvidia
  libcufile           1.6.1.9       0                             nvidia
  libcurand           10.3.2.106    0                             nvidia
  libcurl             8.1.2         h409715c_0                    conda-forge
  libcusolver         11.4.1.48     0                             nvidia
  libcusparse         11.7.5.86     0                             nvidia
  libdeflate          1.17          h0b41bf4_0                    conda-forge
  libedit             3.1.20191231  he28a2e2_2                    conda-forge
  libev               4.33          h516909a_1                    conda-forge
  libexpat            2.5.0         hcb278e6_1                    conda-forge
  libffi              3.4.4         h6a678d5_0
  libgcc-ng           13.1.0        he5830b7_0                    conda-forge
  libgfortran-ng      13.1.0        h69a702a_0                    conda-forge
  libgfortran5        13.1.0        h15d22d2_0                    conda-forge
  libgomp             13.1.0        he5830b7_0                    conda-forge
  libhwloc            2.9.1         cuda112_haf10fcf_5            conda-forge
  libiconv            1.17          h166bdaf_0                    conda-forge
  liblapack           3.9.0         16_linux64_mkl                conda-forge
  liblapacke          3.9.0         16_linux64_mkl                conda-forge
  libnghttp2          1.52.0        h61bc06f_0                    conda-forge
  libnpp              11.8.0.86     0                             nvidia
  libnsl              2.0.0         h7f98852_0                    conda-forge
  libnvjpeg           11.9.0.86     0                             nvidia
  libpng              1.6.39        h753d276_0                    conda-forge
  libsqlite           3.42.0        h2797004_0                    conda-forge
  libssh2             1.11.0        h0841786_0                    conda-forge
  libstdcxx-ng        13.1.0        hfd8a6a1_0                    conda-forge
  libtiff             4.5.0         h6adf6a1_2                    conda-forge
  libuuid             2.38.1        h0b41bf4_0                    conda-forge
  libvulkan-loader    1.3.239.0     h1fe2b44_1                    conda-forge
  libwebp-base        1.3.0         h0b41bf4_0                    conda-forge
  libxcb              1.13          h7f98852_1004                 conda-forge
  libxml2             2.11.4        h0d562d8_0                    conda-forge
  libzlib             1.2.13        h166bdaf_4                    conda-forge
  llvm-openmp         16.0.5        h4dfa4b3_0                    conda-forge
  markupsafe          2.1.3         py311h459d7ec_0               conda-forge
  mkl                 2022.1.0      h84fe81f_915                  conda-forge
  mkl-devel           2022.1.0      ha770c72_916                  conda-forge
  mkl-include         2022.1.0      h84fe81f_915                  conda-forge
  mlc-chat-nightly    0.1.dev142    142_ga985533_h1234567_0       mlc-ai
  mpmath              1.3.0         pyhd8ed1ab_0                  conda-forge
  ncurses             6.4           h6a678d5_0
  nettle              3.6           he412f7d_0                    conda-forge
  networkx            3.1           pyhd8ed1ab_0                  conda-forge
  numpy               1.24.3        py311h64a7726_0               conda-forge
  openh264            2.1.1         h780b84a_0                    conda-forge
  openjpeg            2.5.0         hfec8fc6_2                    conda-forge
  openssl             3.1.1         hd590300_1                    conda-forge
  pcre2               10.40         hc3806b6_0                    conda-forge
  perl                5.32.1        2_h7f98852_perl5              conda-forge
  pillow              9.4.0         py311h50def17_1               conda-forge
  pip                 23.0.1        py311h06a4308_0
  pthread-stubs       0.4           h36c2ea0_1001                 conda-forge
  pysocks             1.7.1         pyha2e5f31_6                  conda-forge
  python              3.11.3        h2755cc3_0_cpython            conda-forge
  python_abi          3.11          2_cp311                       conda-forge
  pytorch             2.0.1         py3.11_cuda11.8_cudnn8.7.0_0  pytorch
  pytorch-cuda        11.8          h7e8668a_5                    pytorch
  pytorch-mutex       1.0           cuda                          pytorch
  readline            8.2           h5eee18b_0
  requests            2.31.0        pyhd8ed1ab_0                  conda-forge
  setuptools          67.8.0        py311h06a4308_0
  sqlite              3.41.2        h5eee18b_0
  sympy               1.12          pyh04b8f61_3                  conda-forge
  tbb                 2021.9.0      hf52228f_0                    conda-forge
  tk                  8.6.12        h1ccaba5_0
  torchaudio          2.0.2         py311_cu118                   pytorch
  torchtriton         2.0.0         py311                         pytorch
  torchvision         0.15.2        py311_cu118                   pytorch
  typing_extensions   4.6.3         pyha770c72_0                  conda-forge
  tzdata              2023c         h04d1e81_0
  urllib3             2.0.2         pyhd8ed1ab_0                  conda-forge
  wheel               0.38.4        py311h06a4308_0
  xorg-kbproto        1.0.7         h7f98852_1002                 conda-forge
  xorg-libx11         1.8.4         h0b41bf4_0                    conda-forge
  xorg-libxau         1.0.11        hd590300_0                    conda-forge
  xorg-libxdmcp       1.1.3         h7f98852_0                    conda-forge
  xorg-xextproto      7.3.0         h0b41bf4_1003                 conda-forge
  xorg-xproto         7.0.31        h7f98852_1007                 conda-forge
  xz                  5.4.2         h5eee18b_0
  zlib                1.2.13        h166bdaf_4                    conda-forge
  zstd                1.5.2         h3eb15da_6                    conda-forge
yzh119 commented 1 year ago

The reason you got this error is that you are using the TVM Unity pre-built wheel for CPU; you should install the CUDA 11.8 version instead:

pip install --pre mlc-ai-nightly-cu118 -f https://mlc.ai/wheels

I also noticed that you are using Python 3.11, and we haven't provided pre-built CUDA wheels for Python 3.11 yet. We are fixing this in https://github.com/mlc-ai/package/pull/19, so you can expect them soon.
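As an editorial aside, the diagnosis above can be automated. The sketch below is a hypothetical helper (not part of mlc-llm or TVM): given the names of installed distributions, it reports whether the CPU-only `mlc-ai-nightly` wheel is installed where a CUDA variant such as `mlc-ai-nightly-cu118` is needed.

```python
# Hypothetical helper: diagnose which mlc-ai wheel variant is installed.
# Pure function, so it can be exercised without any packages present.
def pick_wheel_problem(installed, want_cuda="11.8"):
    """Return a human-readable problem description, or None if the
    matching CUDA wheel is already installed."""
    cuda_tag = "cu" + want_cuda.replace(".", "")  # "11.8" -> "cu118"
    names = {name.lower() for name in installed}
    if f"mlc-ai-nightly-{cuda_tag}" in names:
        return None  # correct CUDA wheel is present
    if "mlc-ai-nightly" in names:
        return (f"CPU-only wheel found; install mlc-ai-nightly-{cuda_tag} "
                f"from https://mlc.ai/wheels instead")
    return "no mlc-ai nightly wheel found"
```

In practice the distribution names could come from `importlib.metadata.distributions()` or `pip list`; a CPU-only result here corresponds to the `target.build.cuda is not enabled` check failure in the original report.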

yongbing commented 1 year ago

I encounter a similar error (LLVM ERROR:) when running:

python3 ./build.py --hf-path databricks/dolly-v2-3b --target cuda
Using path "dist/models/dolly-v2-3b" for model "dolly-v2-3b"
Database paths: ['log_db/rwkv-raven-1b5', 'log_db/vicuna-v1-7b', 'log_db/redpajama-3b-q4f16', 'log_db/rwkv-raven-7b', 'log_db/redpajama-3b-q4f32', 'log_db/rwkv-raven-3b', 'log_db/dolly-v2-3b']
Target configured: cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32
LLVM ERROR:

[screenshot of the LLVM error]

with this TVM build configuration: USE_GTEST: AUTO SUMMARIZE: OFF USE_IOS_RPC: OFF CUDA_VERSION: 11.2 USE_LIBBACKTRACE: AUTO DLPACK_PATH: 3rdparty/dlpack/include USE_TENSORRT_CODEGEN: OFF USE_THRUST: OFF USE_TARGET_ONNX: OFF USE_AOT_EXECUTOR: ON BUILD_DUMMY_LIBTVM: OFF USE_CUDNN: ON USE_TENSORRT_RUNTIME: OFF USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF USE_CCACHE: AUTO USE_ARM_COMPUTE_LIB: OFF USE_CPP_RTVM: OFF USE_OPENCL_GTEST: /path/to/opencl/gtest USE_MKL: OFF USE_PT_TVMDSOOP: OFF USE_CLML: OFF USE_STACKVM_RUNTIME: OFF USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF ROCM_PATH: /opt/rocm USE_DNNL: OFF USE_VITIS_AI: OFF USE_LLVM: ON USE_VERILATOR: OFF USE_TF_TVMDSOOP: OFF USE_THREADS: ON USE_MSVC_MT: OFF BACKTRACE_ON_SEGFAULT: OFF USE_GRAPH_EXECUTOR: ON USE_ROCBLAS: OFF GIT_COMMIT_HASH: 6fd55bcfecc7abcc707339d7a8ba493f0048b613 USE_VULKAN: OFF USE_RUST_EXT: OFF USE_CUTLASS: OFF USE_CPP_RPC: OFF USE_HEXAGON: OFF USE_CUSTOM_LOGGING: OFF USE_UMA: OFF USE_FALLBACK_STL_MAP: OFF USE_SORT: ON USE_RTTI: ON GIT_COMMIT_TIME: 2023-06-05 12:18:09 -0700 USE_HEXAGON_SDK: /path/to/sdk USE_BLAS: none USE_ETHOSN: OFF USE_LIBTORCH: OFF USE_RANDOM: ON USE_CUDA: ON USE_COREML: OFF USE_AMX: OFF BUILD_STATIC_RUNTIME: OFF USE_CMSISNN: OFF USE_KHRONOS_SPIRV: OFF USE_CLML_GRAPH_EXECUTOR: OFF USE_TFLITE: OFF USE_HEXAGON_GTEST: /path/to/hexagon/gtest PICOJSON_PATH: 3rdparty/picojson USE_OPENCL_ENABLE_HOST_PTR: OFF INSTALL_DEV: OFF USE_PROFILER: ON USE_NNPACK: OFF LLVM_VERSION: 6.0.0 USE_OPENCL: OFF COMPILER_RT_PATH: 3rdparty/compiler-rt RANG_PATH: 3rdparty/rang/include USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF USE_OPENMP: none USE_BNNS: OFF USE_CUBLAS: ON USE_METAL: OFF USE_MICRO_STANDALONE_RUNTIME: OFF USE_HEXAGON_EXTERNAL_LIBS: OFF USE_ALTERNATIVE_LINKER: AUTO USE_BYODT_POSIT: OFF USE_HEXAGON_RPC: OFF USE_MICRO: OFF DMLC_PATH: 3rdparty/dmlc-core/include INDEX_DEFAULT_I64: ON USE_RELAY_DEBUG: OFF USE_RPC: ON USE_TENSORFLOW_PATH: none TVM_CLML_VERSION: USE_MIOPEN: OFF USE_ROCM: OFF USE_PAPI: OFF USE_CURAND: OFF TVM_CXX_COMPILER_PATH: /usr/bin/c++ HIDE_PRIVATE_SYMBOLS: OFF

yzh119 commented 1 year ago

Hi @pgagarinov, we have updated the wheels, and mlc-ai-nightly-cu118 for Python 3.11 is now available. Please try uninstalling mlc-ai-nightly and installing mlc-ai-nightly-cu118 instead; that should solve the issue here:

pip uninstall mlc-ai-nightly
pip install --pre mlc-ai-nightly-cu118 -f https://mlc.ai/wheels
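As an editorial aside, the specific check that failed ("Check failed: (bf != nullptr) is false: target.build.cuda is not enabled") corresponds to TVM looking up a global packed function that only CUDA-enabled builds register, so the fix can be verified directly after reinstalling. A minimal sketch, assuming only that TVM's public `tvm.get_global_func` API is available:

```python
def cuda_codegen_enabled():
    """Return True/False if TVM is importable, else None.

    The failing check in codegen.cc looks up the global packed function
    "target.build.cuda", which is only registered in CUDA-enabled builds,
    so probing for it distinguishes the CPU wheel from the cu118 wheel.
    """
    try:
        import tvm
    except ImportError:
        return None  # TVM not installed in this environment
    return tvm.get_global_func("target.build.cuda", allow_missing=True) is not None
```

If this returns False even after installing mlc-ai-nightly-cu118, the CPU-only wheel is likely still installed and shadowing it on sys.path.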
yzh119 commented 1 year ago

Hi @yongbing, would you mind creating another issue and elaborating on the LLVM error there? These two errors do not look similar.

yongbing commented 1 year ago

Hi @yongbing, would you mind creating another issue and elaborating on the LLVM error there? These two errors do not look similar.

OK, I have already created a new issue (https://github.com/mlc-ai/mlc-llm/issues/356) and referenced you. Thanks.

junrushao commented 1 year ago

I believe this issue will go away after following Zihao's suggestion: https://github.com/mlc-ai/mlc-llm/issues/339#issuecomment-1579587863. Please feel free to create a new one if it persists.