mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

[Bug] TVM ERROR when convert_weight #2537

Closed: spiritfog closed this issue 5 months ago

spiritfog commented 5 months ago

🐛 Bug: TVM ERROR when convert_weight

convert_weight fails for the llava model, specifically when TVM quantizes the weights.

To Reproduce

Steps to reproduce the behavior:

(mlc) # mlc_llm convert_weight /workspace/mlc_llm/download/llava-1.5-7b-hf/ --quantization q4f16_1 -o ./llava-1.5-7b-hf-MLC-q4f16_0
[2024-06-07 03:35:58] INFO auto_config.py:116: Found model configuration: /workspace/mlc_llm/download/llava-1.5-7b-hf/config.json
[2024-06-07 03:35:59] INFO auto_device.py:79: Found device: cuda:0
[2024-06-07 03:35:59] INFO auto_device.py:79: Found device: cuda:1
[2024-06-07 03:35:59] INFO auto_device.py:79: Found device: cuda:2
[2024-06-07 03:35:59] INFO auto_device.py:79: Found device: cuda:3
[2024-06-07 03:35:59] INFO auto_device.py:79: Found device: cuda:4
[2024-06-07 03:35:59] INFO auto_device.py:79: Found device: cuda:5
[2024-06-07 03:35:59] INFO auto_device.py:79: Found device: cuda:6
[2024-06-07 03:35:59] INFO auto_device.py:79: Found device: cuda:7
[2024-06-07 03:36:00] INFO auto_device.py:88: Not found device: rocm:0
[2024-06-07 03:36:01] INFO auto_device.py:88: Not found device: metal:0
[2024-06-07 03:36:02] INFO auto_device.py:88: Not found device: vulkan:0
[2024-06-07 03:36:03] INFO auto_device.py:88: Not found device: opencl:0
[2024-06-07 03:36:03] INFO auto_device.py:35: Using device: cuda:0
[2024-06-07 03:36:03] INFO auto_weight.py:71: Finding weights in: /workspace/mlc_llm/download/llava-1.5-7b-hf
[2024-06-07 03:36:03] INFO auto_weight.py:137: Not found Huggingface PyTorch
[2024-06-07 03:36:03] INFO auto_weight.py:144: Found source weight format: huggingface-safetensor. Source configuration: /workspace/mlc_llm/download/llava-1.5-7b-hf/model.safetensors.index.json
[2024-06-07 03:36:03] INFO auto_weight.py:107: Using source weight configuration: /workspace/mlc_llm/download/llava-1.5-7b-hf/model.safetensors.index.json. Use `--source` to override.
[2024-06-07 03:36:03] INFO auto_weight.py:111: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-06-07 03:36:03] INFO auto_config.py:154: Found model type: llava. Use `--model-type` to override.
Weight conversion with arguments:
  --config          /workspace/mlc_llm/download/llava-1.5-7b-hf/config.json
  --quantization    GroupQuantize(name='q4f16_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float16', linear_weight_layout='NK', quantize_embedding=True, quantize_final_fc=True, num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7)
  --model-type      llava
  --device          cuda:0
  --source          /workspace/mlc_llm/download/llava-1.5-7b-hf/model.safetensors.index.json
  --source-format   huggingface-safetensor
  --output          llava-1.5-7b-hf-MLC-q4f16_0
Start storing to cache llava-1.5-7b-hf-MLC-q4f16_0
[2024-06-07 03:36:11] INFO huggingface_loader.py:185: Loading HF parameters from: /workspace/mlc_llm/download/llava-1.5-7b-hf/model-00003-of-00003.safetensors                                                                      
[2024-06-07 03:36:12] INFO group_quantization.py:217: Compiling quantize function for key: ((32064, 4096), float16, cuda, axis=1, output_transpose=False)                                                                           
  0%|          | 0/590 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/mlc_llm/interface/convert_weight.py", line 129, in _param_generator
    for name, param in loader.load(device=args.device, preshard_funcs=preshard_funcs):
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/mlc_llm/loader/huggingface_loader.py", line 121, in load
    for name, loader_param in self._load_or_quantize(mlc_name, param, device):
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/mlc_llm/loader/huggingface_loader.py", line 164, in _load_or_quantize
    q_params = self.quantize_param_map.map_func[mlc_name](param)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/mlc_llm/quantization/group_quantization.py", line 218, in quantize_weight
    quantize_func = compile_quantize_func(_create_quantize_func(), device=device)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/mlc_llm/quantization/utils.py", line 80, in compile_quantize_func
    ex = relax.build(mod, target=target)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/tvm/relax/vm_build.py", line 341, in build
    return _vmlink(
           ^^^^^^^^
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/tvm/relax/vm_build.py", line 247, in _vmlink
    lib = tvm.build(
          ^^^^^^^^^^
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/tvm/driver/build_module.py", line 297, in build
    rt_mod_host = _driver_ffi.tir_to_runtime(annotated_mods, target_host)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "tvm/_ffi/_cython/./packed_func.pxi", line 332, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 263, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 252, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 182, in tvm._ffi._cy3.core.CHECK_CALL
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
    raise py_err
  File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/tvm/contrib/nvcc.py", line 204, in tvm_callback_cuda_compile
    ptx = compile_cuda(code, target_format="fatbin")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/tvm/contrib/nvcc.py", line 120, in compile_cuda
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/mlc/lib/python3.11/subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/root/miniconda3/envs/mlc/lib/python3.11/subprocess.py", line 1955, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'nvcc'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/miniconda3/envs/mlc/bin/mlc_llm", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/mlc_llm/__main__.py", line 37, in main
    cli.main(sys.argv[2:])
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/mlc_llm/cli/convert_weight.py", line 88, in main
    convert_weight(
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/mlc_llm/interface/convert_weight.py", line 181, in convert_weight
    _convert_args(args)
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/mlc_llm/interface/convert_weight.py", line 145, in _convert_args
    tvmjs.dump_ndarray_cache(
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/tvm/contrib/tvmjs.py", line 272, in dump_ndarray_cache
    for k, origin_v in param_generator:
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/mlc_llm/interface/convert_weight.py", line 121, in _param_generator
    with Target.from_device(args.device), tqdm.redirect():
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/tvm/target/target.py", line 145, in __exit__
    _ffi_api.TargetExitScope(self)
  File "tvm/_ffi/_cython/./packed_func.pxi", line 332, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 263, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 252, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 182, in tvm._ffi._cy3.core.CHECK_CALL
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
    raise py_err
tvm.error.InternalError: Traceback (most recent call last):
  2: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<void (tvm::Target)>::AssignTypedLambda<void (*)(tvm::Target)>(void (*)(tvm::Target), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
  1: tvm::Target::ExitWithScope()
  0: _ZN3tvm7runtime6deta
  File "/workspace/tvm/src/target/target.cc", line 747
InternalError: Check failed: (entry->context_stack.top().same_as(*this)) is false:

Expected behavior

Successfully quantize the llava-1.5-7b model and convert it to MLC format.

Environment

Additional context

1. I thought the error might be raised by TVM, so I verified the TVM installation. All verification passed except for Vulkan device detection: when I ran python -c "import tvm; print(tvm.vulkan().exist)", I got an error rather than True or False (see the note after this list). The error message:

(mlc) # python -c "import tvm; print(tvm.vulkan().exist)"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/tvm/_ffi/runtime_ctypes.py", line 343, in exist
    return self._GetDeviceAttr(self.device_type, self.device_id, 0) != 0
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/tvm/_ffi/runtime_ctypes.py", line 327, in _GetDeviceAttr
    return tvm.runtime._ffi_api.GetDeviceAttr(device_type, device_id, attr_id)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "tvm/_ffi/_cython/./packed_func.pxi", line 332, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 263, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 252, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 182, in tvm._ffi._cy3.core.CHECK_CALL
  File "/root/miniconda3/envs/mlc/lib/python3.11/site-packages/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
    raise py_err
tvm.error.InternalError: Traceback (most recent call last):
  7: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<__mk_TVM1::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, __mk_TVM1, tvm::runtime::TVMRetValue)
  6: tvm::runtime::DeviceAPIManager::GetAPI(int, bool)
  5: tvm::runtime::DeviceAPIManager::GetAPI(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool) [clone .isra.0]
  4: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::vulkan::__mk_TVM0::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::vulkan::__mk_TVM0, tvm::runtime::TVMRetValue)
  3: tvm::runtime::vulkan::VulkanDeviceAPI::Global()
  2: tvm::runtime::vulkan::VulkanDeviceAPI::VulkanDeviceAPI()
  1: tvm::runtime::vulkan::VulkanInstance::VulkanInstance()
  0: _ZN3tvm7runtime6deta
  File "/workspace/tvm/src/runtime/vulkan/vulkan_instance.cc", line 111
InternalError: Check failed: (__e == VK_SUCCESS) is false: Vulkan Error, code=-9: VK_ERROR_INCOMPATIBLE_DRIVER

2. Since I thought the error came from TVM, I also tried converting the model weights without quantization. With mlc_llm convert_weight /workspace/mlc_llm/download/llava-1.5-7b-hf/ --quantization q0f16 -o ./llava-1.5-7b-hf-MLC-q0f16, there is no error.
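Side note on point 1: VK_ERROR_INCOMPATIBLE_DRIVER typically just means no compatible Vulkan driver is installed, which is common on headless CUDA servers and separate from the nvcc failure above. A minimal sketch of how one might check for a driver, assuming a Linux machine with the conventional ICD manifest path and the optional vulkan-tools package:

# Vulkan ICD manifests live here on most Linux distros; an empty or missing
# directory usually means no Vulkan driver is installed.
ls /usr/share/vulkan/icd.d/ 2>/dev/null || echo "no Vulkan ICD manifests found"

# If the vulkan-tools package is installed, vulkaninfo reports the same
# VK_ERROR_INCOMPATIBLE_DRIVER when no driver is present.
vulkaninfo --summary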

tqchen commented 5 months ago

It seems the error indicates that nvcc cannot be found in your environment. Make sure CUDA is installed and that nvcc is on your PATH.
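For reference, a minimal sketch of the check and fix, assuming CUDA is installed under the conventional /usr/local/cuda prefix (adjust the paths to your actual installation):

# Check whether nvcc is visible from the shell that runs mlc_llm
which nvcc || echo "nvcc not on PATH"

# If CUDA is installed but not on PATH, export it for this shell session
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# Verify, then re-run the convert_weight command
nvcc --version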

spiritfog commented 5 months ago

> It seems the error indicates that nvcc cannot be found in your environment. Make sure CUDA is installed and that nvcc is on your PATH.

Thanks, I'll give it another try now!