dusty-nv / jetson-containers

Machine Learning Containers for NVIDIA Jetson and JetPack-L4T
MIT License

Mixtral not able to run on Nvidia Jetson #385

Closed raj-khare closed 5 months ago

raj-khare commented 5 months ago

I'm trying to run the Mixtral 8x7B model on a Jetson AGX Orin (aarch64, sm_87), but I'm getting the following error:

root@tegra-ubuntu:/# python3 /opt/mlc-llm/benchmark.py --model /data/models/mlc/dist/mixtral-4bit/ --prompt /data/prompts/completion_16.json --max-new-tokens 128
Namespace(model='/data/models/mlc/dist/mixtral-4bit/', prompt=['/data/prompts/completion_16.json'], chat=False, streaming=False, max_new_tokens=128, max_num_prompts=None, save='')
-- loading /data/models/mlc/dist/mixtral-4bit/
[2024-02-14 10:49:26] INFO auto_device.py:76: Found device: cuda:0
[2024-02-14 10:49:27] INFO auto_device.py:85: Not found device: rocm:0
[2024-02-14 10:49:28] INFO auto_device.py:85: Not found device: metal:0
[2024-02-14 10:49:29] INFO auto_device.py:85: Not found device: vulkan:0
[2024-02-14 10:49:30] INFO auto_device.py:85: Not found device: opencl:0
[2024-02-14 10:49:30] INFO auto_device.py:33: Using device: cuda:0
[2024-02-14 10:49:30] INFO chat_module.py:370: Using model folder: /data/models/mlc/dist/mixtral-4bit
[2024-02-14 10:49:30] INFO chat_module.py:371: Using mlc chat config: /data/models/mlc/dist/mixtral-4bit/mlc-chat-config.json
[2024-02-14 10:49:30] INFO chat_module.py:513: Using library model: /data/models/mlc/dist/mixtral-4bit/None.so
[2024-02-14 10:49:31] INFO model_metadata.py:95: Total memory usage: 26206.65 MB (Parameters: 25053.70 MB. KVCache: 0.00 MB. Temporary buffer: 1152.95 MB)
[2024-02-14 10:49:31] INFO model_metadata.py:104: To reduce memory usage, tweak `prefill_chunk_size`, `context_window_size` and `sliding_window_size`

PROMPT:  Once upon a time, there was a little girl who loved to read.

Traceback (most recent call last):
  File "/opt/mlc-llm/benchmark.py", line 127, in <module>
    print(cm.benchmark_generate(prompt=prompt, generate_length=args.max_new_tokens).strip())
  File "/usr/local/lib/python3.10/dist-packages/mlc_chat/chat_module.py", line 977, in benchmark_generate
    self._prefill(prompt)
  File "/usr/local/lib/python3.10/dist-packages/mlc_chat/chat_module.py", line 1078, in _prefill
    self._prefill_func(
  File "tvm/_ffi/_cython/./packed_func.pxi", line 332, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 277, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./base.pxi", line 182, in tvm._ffi._cy3.core.CHECK_CALL
  File "/usr/local/lib/python3.10/dist-packages/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
    raise py_err
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /usr/local/lib/python3.10/dist-packages/tvm/libtvm.so(tvm::runtime::relax_vm::VirtualMachineImpl::InvokeBytecode(long, std::vector<tvm::runtime::TVMRetValue, std::allocator<tvm::runtime::TVMRetValue> > const&)+0x1f0) [0xffff6bb0c050]
  [bt] (7) /usr/local/lib/python3.10/dist-packages/tvm/libtvm.so(tvm::runtime::relax_vm::VirtualMachineImpl::RunLoop()+0x208) [0xffff6bb0bc68]
  [bt] (6) /usr/local/lib/python3.10/dist-packages/tvm/libtvm.so(tvm::runtime::relax_vm::VirtualMachineImpl::RunInstrCall(tvm::runtime::relax_vm::VMFrame*, tvm::runtime::relax_vm::Instruction)+0x68c) [0xffff6bb0d2bc]
  [bt] (5) /usr/local/lib/python3.10/dist-packages/tvm/libtvm.so(tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)+0x84) [0xffff6bb09fb4]
  [bt] (4) /usr/local/lib/python3.10/dist-packages/tvm/libtvm.so(+0x30b8ca4) [0xffff6bac8ca4]
  [bt] (3) /usr/local/lib/python3.10/dist-packages/tvm/libtvm.so(+0x307ac6c) [0xffff6ba8ac6c]
  [bt] (2) /usr/local/lib/python3.10/dist-packages/tvm/libtvm.so(+0x307aa68) [0xffff6ba8aa68]
  [bt] (1) /usr/local/lib/python3.10/dist-packages/tvm/libtvm.so(tvm::runtime::detail::LogFatal::Entry::Finalize()+0x68) [0xffff69c336a8]
  [bt] (0) /usr/local/lib/python3.10/dist-packages/tvm/libtvm.so(tvm::runtime::Backtrace[abi:cxx11]()+0x30) [0xffff6ba8d050]
  File "/opt/mlc-llm/3rdparty/tvm/src/runtime/library_module.cc", line 78
TVMError: Assert fail: T.tvm_struct_get(indptr_handle, 0, 5, "uint8") == T.uint8(0) and T.tvm_struct_get(indptr_handle, 0, 6, "uint8") == T.uint8(32) and T.tvm_struct_get(indptr_handle, 0, 7, "uint16") == T.uint16(1), dequantize_group_gemm.indptr_handle.dtype is expected to be int32
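
For reference, the three tvm_struct_get fields checked in the assert (5, 6, 7) are the DLTensor dtype's type code, bits, and lanes, so the compiled dequantize_group_gemm kernel is requiring an int32 indptr buffer with a single lane. A minimal host-side sketch of the same dtype requirement (illustrative only; the int64 array here is just an assumed example of a dtype that would trip the check):

# Illustrative sketch only: the kernel asserts its indptr DLTensor is int32
# (dtype code 0, 32 bits, 1 lane); any other dtype fails the assert.
import numpy as np
import tvm

indptr = tvm.nd.array(np.array([0, 16], dtype=np.int64), device=tvm.cuda(0))
print(indptr.dtype)   # int64 -- not what the kernel expects

indptr_i32 = tvm.nd.array(indptr.numpy().astype(np.int32), device=tvm.cuda(0))
print(indptr_i32.dtype)  # int32 -- matches the assert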

My chat config:

from mlc_chat import ChatConfig, ChatModule

cfg = ChatConfig(max_gen_len=args.max_new_tokens, context_window_size=4096,
                 prefill_chunk_size=4096, sliding_window_size=1024)

if not args.chat:
    cfg.conv_template = 'LM'  # plain completion template (no chat roles)

cm = ChatModule(model="/data/models/mlc/dist/mixtral-4bit",
                model_lib_path="/data/models/mlc/dist/mixtral-4bit/None.so",
                chat_config=cfg)
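
The traceback above comes from the generate call benchmark.py makes right after this setup, roughly as follows (prompt text and token count taken from the invocation at the top of this issue):

# cm is the ChatModule constructed above; values mirror the benchmark.py run shown earlier.
prompt = "Once upon a time, there was a little girl who loved to read."
output = cm.benchmark_generate(prompt=prompt, generate_length=128)  # fails in _prefill
print(output.strip())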

To Reproduce

Steps to reproduce the behavior:

I compiled MLC-LLM with the following flags:

cmake -G Ninja \
    -DCMAKE_CXX_STANDARD=17 \
    -DCMAKE_CUDA_STANDARD=17 \
    -DCMAKE_CUDA_ARCHITECTURES=${CUDAARCHS} \
    -DUSE_CUDA=ON \
    -DFLASHINFER_CUDA_ARCHITECTURES=87 \
    -DUSE_FLASHINFER=ON \
    -DUSE_CUDNN=ON \
    -DUSE_CUBLAS=ON \
    -DUSE_CURAND=ON \
    -DUSE_CUTLASS=ON \
    -DUSE_THRUST=ON \
    -DUSE_GRAPH_EXECUTOR_CUDA_GRAPH=ON \
    -DUSE_STACKVM_RUNTIME=ON \
    -DUSE_LLVM="/usr/bin/llvm-config --link-static" \
    -DHIDE_PRIVATE_SYMBOLS=ON \
    -DSUMMARIZE=ON
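
As a quick sanity check after the build (a sketch, assuming it is run inside the same container), the resulting TVM can be queried to confirm it sees the Orin GPU and reports the flags above:

# Sketch: verify the freshly built TVM runtime from inside the container.
import tvm

dev = tvm.cuda(0)
print(dev.exist)            # True if the CUDA device is visible to TVM
print(dev.compute_version)  # expected "8.7" on AGX Orin (sm_87)

info = tvm.support.libinfo()
print(info["USE_CUDA"], info["USE_THRUST"])  # both "ON" for this build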

Expected behavior

The model should run without any issues.

Environment

Platform (e.g. WebGPU/Vulkan/IOS/Android/CUDA): CUDA (aarch64, sm_87)
Operating system (e.g. Ubuntu/Windows/MacOS/...): Ubuntu
Device (e.g. iPhone 12 Pro, PC+RTX 3090, ...): Nvidia Jetson AGX Orin 64GB
How you installed MLC-LLM (conda, source): Docker
How you installed TVM-Unity (pip, source): Docker
Python version (e.g. 3.10): 3.10.12
GPU driver version (if applicable): none
CUDA/cuDNN version (if applicable): cuda_12.2.r12.2/compiler.33191640_0
TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))", applicable if you compile models):

USE_NVTX: OFF
USE_GTEST: OFF
SUMMARIZE: ON
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU: 
CUDA_VERSION: 12.2
USE_LIBBACKTRACE: OFF
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: ON
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: OFF
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: ON
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM: 
USE_OPENCL_GTEST: /path/to/opencl/gtest
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: ON
USE_GRAPH_EXECUTOR_CUDA_GRAPH: ON
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_VITIS_AI: OFF
USE_MLIR: OFF
USE_RCCL: OFF
USE_LLVM: /usr/bin/llvm-config --link-static
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_NCCL: OFF
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: 292137088115ac81779607ca223bbbd9ad40cb55
USE_VULKAN: OFF
USE_RUST_EXT: OFF
USE_CUTLASS: ON
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2024-02-03 18:46:43 -0800
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: ON
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: OFF
USE_NNPACK: OFF
LLVM_VERSION: 17.0.6
USE_OPENCL: OFF
COMPILER_RT_PATH: 3rdparty/compiler-rt
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_CUBLAS: ON
USE_METAL: OFF
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: OFF
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION: 
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: ON
TVM_CXX_COMPILER_PATH: /usr/bin/c++
HIDE_PRIVATE_SYMBOLS: ON

Any help is highly appreciated!

dusty-nv commented 5 months ago

@raj-khare I've not tested Mixtral via MLC yet - you might want to file this issue against the upstream mlc-llm GitHub, as that is probably where it would end up going anyway 👍

dusty-nv commented 4 months ago

@raj-khare looks like the issue with getting Mixtral to load was fixed: https://github.com/mlc-ai/mlc-llm/issues/1752#issuecomment-1950809882

It should be in dustynv/mlc:c30348a-r36.2.0, which is built from a commit newer than https://github.com/mlc-ai/mlc-llm/commit/bf05dfc4b428c0d8c86726b5136498ebea2882e9

Mind you, I am currently rebuilding/retesting again to pick up https://github.com/mlc-ai/mlc-llm/commit/a2d9eea1b7025b8174ebb7913dcf878bd8d13f13

raj-khare commented 4 months ago

Yep! It works, thanks :)