mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

[Bug] Compiling Llama-2 with q3f16 fails with KeyError: IterSplit(IterMark(v1, extent=T.int64(4096)), lower_factor=T.int64(1), extent=T.int64(4096), scale=T.int64(1)) #1023

Closed. JimmyLi-Network closed this issue 1 year ago.

JimmyLi-Network commented 1 year ago

🐛 Bug

Compiling Llama-2 with q3f16 fails with KeyError: IterSplit(IterMark(v1, extent=T.int64(4096)), lower_factor=T.int64(1), extent=T.int64(4096), scale=T.int64(1))

To Reproduce

Steps to reproduce the behavior:

  1. Compile the model following the documentation.
  2. git clone https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
  3. python3 -m mlc_llm.build --model Llama-2-7b-chat-hf --target metal --quantization q3f16_1

Error messages:

Save a cached module to dist/Llama-2-7b-chat-hf-q3f16_1/mod_cache_before_build.pkl.
Traceback (most recent call last):
  File "/Users/jimmy/anaconda3/envs/tvm/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/jimmy/anaconda3/envs/tvm/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/jimmy/Code/mlc-llm/mlc_llm/build.py", line 13, in <module>
    main()
  File "/Users/jimmy/Code/mlc-llm/mlc_llm/build.py", line 10, in main
    core.build_model_from_args(parsed_args)
  File "/Users/jimmy/Code/mlc-llm/mlc_llm/core.py", line 655, in build_model_from_args
    build(mod, args)
  File "/Users/jimmy/Code/mlc-llm/mlc_llm/core.py", line 514, in build
    mod_deploy = dl.ApplyDefaultSchedule(  # pylint: disable=not-callable
  File "/Users/jimmy/.local/lib/python3.8/site-packages/tvm-0.12.dev1610+gceaf7b015-py3.8-macosx-11.0-arm64.egg/tvm/ir/transform.py", line 238, in __call__
    return _ffi_transform_api.RunPass(self, mod)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 332, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 263, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 252, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 182, in tvm._ffi._cy3.core.CHECK_CALL
  File "/Users/jimmy/.local/lib/python3.8/site-packages/tvm-0.12.dev1610+gceaf7b015-py3.8-macosx-11.0-arm64.egg/tvm/_ffi/base.py", line 476, in raise_last_ffi_error
    raise py_err
  File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
  File "/Users/jimmy/.local/lib/python3.8/site-packages/tvm-0.12.dev1610+gceaf7b015-py3.8-macosx-11.0-arm64.egg/tvm/ir/transform.py", line 307, in _pass_func
    return inst.transform_module(mod, ctx)
  File "/Users/jimmy/.local/lib/python3.8/site-packages/tvm-0.12.dev1610+gceaf7b015-py3.8-macosx-11.0-arm64.egg/tvm/dlight/base/transform.py", line 64, in transform_module
    sch = _apply_rules(func, target, self.rules, tunable=False)
  File "/Users/jimmy/.local/lib/python3.8/site-packages/tvm-0.12.dev1610+gceaf7b015-py3.8-macosx-11.0-arm64.egg/tvm/dlight/base/transform.py", line 80, in _apply_rules
    space = rule.apply(func, target, tunable)
  File "/Users/jimmy/.local/lib/python3.8/site-packages/tvm-0.12.dev1610+gceaf7b015-py3.8-macosx-11.0-arm64.egg/tvm/dlight/gpu/gemv.py", line 191, in apply
    is_inner_reduction = normalize(sch, block_info)
  File "/Users/jimmy/.local/lib/python3.8/site-packages/tvm-0.12.dev1610+gceaf7b015-py3.8-macosx-11.0-arm64.egg/tvm/dlight/gpu/gemv.py", line 122, in normalize
    is_inner_reduction = iter_to_info[inner_axis].kind == "R"
KeyError: IterSplit(IterMark(v1, extent=T.int64(4096)), lower_factor=T.int64(1), extent=T.int64(4096), scale=T.int64(1))
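The failing line in gemv.py looks up the normalized inner axis in iter_to_info, a dictionary keyed by the iterator objects the scheduling rule collected earlier; the KeyError means normalization produced an IterSplit that was never registered. A minimal sketch of that failure mode, using hypothetical stand-in classes rather than TVM's actual IterSplit/IterInfo types:

```python
# Minimal illustration (hypothetical classes, not TVM's actual API):
# a rule builds a dict keyed by the iterator objects it has analyzed, and a
# later lookup with an iterator it never registered raises KeyError, which is
# the failure mode seen in dlight/gpu/gemv.py's normalize().

class IterInfo:
    """Stands in for dlight's per-iterator metadata ('S' spatial / 'R' reduction)."""
    def __init__(self, kind):
        self.kind = kind

class IterSplit:
    """Stands in for TVM's IterSplit; hashed by identity, like TVM objects."""
    def __init__(self, extent):
        self.extent = extent

registered = IterSplit(4096)
iter_to_info = {registered: IterInfo("R")}  # only the iterators the rule analyzed

# A structurally similar but *different* object is not the same dict key:
inner_axis = IterSplit(4096)
try:
    is_inner_reduction = iter_to_info[inner_axis].kind == "R"
except KeyError:
    is_inner_reduction = None  # the unhandled case that crashed the pass

print(is_inner_reduction)  # -> None: the lookup failed, mirroring the reported KeyError
```

The fix referenced below teaches the rule to handle (or skip) iteration patterns it did not map, rather than crashing on the lookup.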

Expected behavior

Compilation is expected to succeed.

Environment

Additional context

USE_NVTX: OFF USE_GTEST: AUTO SUMMARIZE: OFF USE_IOS_RPC: OFF USE_MSC: OFF USE_ETHOSU: OFF CUDA_VERSION: NOT-FOUND USE_LIBBACKTRACE: AUTO DLPACK_PATH: 3rdparty/dlpack/include USE_TENSORRT_CODEGEN: OFF USE_THRUST: OFF USE_TARGET_ONNX: OFF USE_AOT_EXECUTOR: ON BUILD_DUMMY_LIBTVM: OFF USE_CUDNN: OFF USE_TENSORRT_RUNTIME: OFF USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF USE_CCACHE: AUTO USE_ARM_COMPUTE_LIB: OFF USE_CPP_RTVM: ON USE_OPENCL_GTEST: /path/to/opencl/gtest USE_MKL: OFF USE_PT_TVMDSOOP: OFF MLIR_VERSION: NOT-FOUND USE_CLML: OFF USE_STACKVM_RUNTIME: OFF USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF ROCM_PATH: /opt/rocm USE_DNNL: OFF USE_VITIS_AI: OFF USE_MLIR: OFF USE_RCCL: OFF USE_LLVM: ON USE_VERILATOR: OFF USE_TF_TVMDSOOP: OFF USE_THREADS: ON USE_MSVC_MT: OFF BACKTRACE_ON_SEGFAULT: OFF USE_GRAPH_EXECUTOR: ON USE_NCCL: OFF USE_ROCBLAS: OFF GIT_COMMIT_HASH: ceaf7b0156524d30537a3de5fa30764eaff4edb8 USE_VULKAN: OFF USE_RUST_EXT: OFF USE_CUTLASS: OFF USE_CPP_RPC: OFF USE_HEXAGON: OFF USE_CUSTOM_LOGGING: OFF USE_UMA: OFF USE_FALLBACK_STL_MAP: OFF USE_SORT: ON USE_RTTI: ON GIT_COMMIT_TIME: 2023-09-18 20:10:22 -0400 USE_HEXAGON_SDK: /path/to/sdk USE_BLAS: openblas USE_ETHOSN: OFF USE_LIBTORCH: OFF USE_RANDOM: ON USE_CUDA: OFF USE_COREML: OFF USE_AMX: OFF BUILD_STATIC_RUNTIME: OFF USE_CMSISNN: OFF USE_KHRONOS_SPIRV: OFF USE_CLML_GRAPH_EXECUTOR: OFF USE_TFLITE: OFF USE_HEXAGON_GTEST: /path/to/hexagon/gtest PICOJSON_PATH: 3rdparty/picojson USE_OPENCL_ENABLE_HOST_PTR: OFF INSTALL_DEV: OFF USE_PROFILER: ON USE_NNPACK: OFF LLVM_VERSION: 17.0.1 USE_OPENCL: OFF COMPILER_RT_PATH: 3rdparty/compiler-rt RANG_PATH: 3rdparty/rang/include USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF USE_OPENMP: none USE_BNNS: OFF USE_CUBLAS: OFF USE_METAL: ON USE_MICRO_STANDALONE_RUNTIME: ON USE_HEXAGON_EXTERNAL_LIBS: OFF USE_ALTERNATIVE_LINKER: AUTO USE_BYODT_POSIT: OFF USE_HEXAGON_RPC: OFF USE_MICRO: OFF DMLC_PATH: 3rdparty/dmlc-core/include INDEX_DEFAULT_I64: ON USE_RELAY_DEBUG: OFF USE_RPC: ON USE_TENSORFLOW_PATH: 
none TVM_CLML_VERSION: USE_MIOPEN: OFF USE_ROCM: OFF USE_PAPI: OFF USE_CURAND: OFF TVM_CXX_COMPILER_PATH: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ HIDE_PRIVATE_SYMBOLS: OFF

vinx13 commented 1 year ago

I sent a fix in https://github.com/apache/tvm/pull/15881

isaac621 commented 1 year ago

@JimmyLi-Network Were you able to compile the model successfully after the fix was merged?

JimmyLi-Network commented 1 year ago

> @JimmyLi-Network Were you able to compile the model successfully after the fix was merged?

Hi,

I merged the fix and updated my TVM repo, but the problem remains:

KeyError: IterSplit(IterMark(v1, extent=T.int64(11008)), lower_factor=T.int64(1), extent=T.int64(11008), scale=T.int64(1))

Do I need to update mlc-llm as well?

junrushao commented 1 year ago

The issue is reported as "fixed" in the other thread: https://github.com/mlc-ai/mlc-llm/issues/1005. To double-check that you have installed the latest version of TVM Unity, @JimmyLi-Network, could you please follow "Step 3" here and check the git commit version: https://llm.mlc.ai/docs/install/tvm.html#validate-installation?
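The validation step linked above amounts to printing the build metadata that TVM records (the same `GIT_COMMIT_HASH` / `GIT_COMMIT_TIME` entries pasted earlier in this thread) and confirming the build postdates the fix. A self-contained sketch of that check, parsing the text format shown in this thread instead of calling `tvm.support.libinfo()` directly; the fix date below is an assumption for illustration, not the actual merge date of apache/tvm#15881:

```python
# Sketch: check whether a TVM build's GIT_COMMIT_TIME predates a given fix.
# In a real environment the metadata would come from tvm.support.libinfo();
# here we parse the text pasted earlier in this thread so the snippet stays
# self-contained. The fix date is an assumed placeholder.

from datetime import datetime, timezone

libinfo_text = (
    "GIT_COMMIT_HASH: ceaf7b0156524d30537a3de5fa30764eaff4edb8 "
    "GIT_COMMIT_TIME: 2023-09-18 20:10:22 -0400"
)

def parse_commit_time(text):
    # Extract the fixed-width timestamp that follows "GIT_COMMIT_TIME: ".
    marker = "GIT_COMMIT_TIME: "
    start = text.index(marker) + len(marker)
    stamp = text[start:start + len("2023-09-18 20:10:22 -0400")]
    return datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S %z")

commit_time = parse_commit_time(libinfo_text)
fix_time = datetime(2023, 10, 1, tzinfo=timezone.utc)  # assumed fix date

print(commit_time < fix_time)  # -> True: this build predates the assumed fix date
```

As the rest of the thread shows, the September 18 build here was indeed too old, and updating to a newer TVM Unity build resolved the error.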

JimmyLi-Network commented 1 year ago

> The issue is reported as "fixed" in the other thread: #1005. To double-check that you have installed the latest version of TVM Unity, @JimmyLi-Network, could you please follow "Step 3" here and check the git commit version: https://llm.mlc.ai/docs/install/tvm.html#validate-installation?

Thanks! I fetched the latest TVM repo and reinstalled it.

My build reports GIT_COMMIT_TIME: 2023-09-18 20:10:22 -0400.

Is that too old?

Thanks,

junrushao commented 1 year ago

Yes. Please follow the steps here to reinstall the latest packages: https://llm.mlc.ai/docs/install/tvm.html#option-1-prebuilt-package

JimmyLi-Network commented 1 year ago

> The issue is reported as "fixed" in the other thread: #1005. To double-check that you have installed the latest version of TVM Unity, @JimmyLi-Network, could you please follow "Step 3" here and check the git commit version: https://llm.mlc.ai/docs/install/tvm.html#validate-installation?
>
> Thanks! I fetched the latest TVM repo and reinstalled it.
>
> My build reports GIT_COMMIT_TIME: 2023-09-18 20:10:22 -0400.
>
> Is that too old?

Hi,

I built relax/tvm from source, and it now works with no errors.

Thanks.

junrushao commented 1 year ago

Thanks! It seems this issue has been resolved.