🐛 Bug
I have a custom fine-tuned Mistral 7B (with many additional special tokens) that is aimed at generating answers up to a sequence length of 2048.
At first, I went through the normal MLC flow as described in the documentation; note that I explicitly did not apply quantization. When testing out the newly compiled model with:
from mlc_chat import ChatConfig, ChatModule
from mlc_chat.callback import StreamToStdout

config = ChatConfig(max_batch_size=1, max_gen_len=500, temperature=0.0, top_p=0.0)
cm = ChatModule(
    model=...,
    model_lib_path=...,
    chat_config=config,
)
# Generate a response for a given prompt
output = cm.generate(
    prompt=TEST_PROMPT,
    progress_callback=StreamToStdout(callback_interval=2),
)
The model just repeated the same token over and over again, indefinitely.
After reading https://github.com/mlc-ai/mlc-llm/issues/978 and https://github.com/mlc-ai/mlc-llm/issues/802, I wanted to use the build.py script to compile the model with CUTLASS disabled, to see whether that would resolve the infinite token repetition. But I then ran into a TVM error while building. Any help would be appreciated, thanks!
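As a side note, the config above uses greedy decoding (temperature=0.0, top_p=0.0), which makes degenerate loops more likely in general, so one decoding-side mitigation I can still experiment with is sampling plus a repetition penalty. A minimal sketch, assuming this ChatConfig build exposes a repetition_penalty field (the exact values are illustrative):

config = ChatConfig(
    max_gen_len=500,
    temperature=0.7,         # non-greedy sampling
    top_p=0.95,
    repetition_penalty=1.1,  # assumed field name; values > 1.0 discourage repeats
)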
To Reproduce
python build.py --model /data/ML_Workdir/models/mistral7b_1e-5_warmup_100/checkpoint-13129/ --quantization q0f16 --artifact-path /data/dist/mistral7b_1e-5_warmup_100-q0f16-MLC --max-seq-len 2048 --target cuda --no-cutlass-norm --no-cutlass-attn --use-safetensors --build-model-only
Using path "/data/ML_Workdir/models/mistral7b_1e-5_warmup_100/checkpoint-13129" for model "checkpoint-13129"
Target configured: cuda -keys=cuda,gpu -arch=sm_80 -max_num_threads=1024 -max_shared_memory_per_block=49152 -max_threads_per_block=1024 -registers_per_block=65536 -thread_warp_size=32
Traceback (most recent call last):
File "/data/mlc-llm/mlc_llm/build.py", line 47, in <module>
main()
File "/data/mlc-llm/mlc_llm/build.py", line 43, in main
core.build_model_from_args(parsed_args)
File "/data/mlc-llm/mlc_llm/core.py", line 859, in build_model_from_args
mod, param_manager, params, model_config = model_generators[args.model_category].get_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mlc-llm/mlc_llm/relax_model/mistral.py", line 1016, in get_model
create_encoding_func(bb, param_manager, config, args.quantization, sep_embed)
File "/data/mlc-llm/mlc_llm/relax_model/mistral.py", line 866, in create_encoding_func
logits, key_value_cache = model(
^^^^^^
File "/data/conda/envs/mlc-chat-venv/lib/python3.11/site-packages/tvm/relax/testing/nn.py", line 263, in __call__
return self.forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/conda/envs/mlc-chat-venv/lib/python3.11/site-packages/tvm/relax/frontend/nn/subroutine.py", line 87, in new_forward
return old_forward(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mlc-llm/mlc_llm/relax_model/mistral.py", line 766, in forward
hidden_states, key_value_cache = self.model(
^^^^^^^^^^^
File "/data/conda/envs/mlc-chat-venv/lib/python3.11/site-packages/tvm/relax/testing/nn.py", line 263, in __call__
return self.forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/conda/envs/mlc-chat-venv/lib/python3.11/site-packages/tvm/relax/frontend/nn/subroutine.py", line 87, in new_forward
return old_forward(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mlc-llm/mlc_llm/relax_model/mistral.py", line 720, in forward
hidden_states, key_value_cache = decoder_layer(
^^^^^^^^^^^^^^
File "/data/conda/envs/mlc-chat-venv/lib/python3.11/site-packages/tvm/relax/testing/nn.py", line 263, in __call__
return self.forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/conda/envs/mlc-chat-venv/lib/python3.11/site-packages/tvm/relax/frontend/nn/subroutine.py", line 87, in new_forward
return old_forward(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mlc-llm/mlc_llm/relax_model/mistral.py", line 581, in forward
hidden_states, present_key_value = self.self_attn(
^^^^^^^^^^^^^^^
File "/data/conda/envs/mlc-chat-venv/lib/python3.11/site-packages/tvm/relax/testing/nn.py", line 263, in __call__
return self.forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/conda/envs/mlc-chat-venv/lib/python3.11/site-packages/tvm/relax/frontend/nn/subroutine.py", line 87, in new_forward
return old_forward(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mlc-llm/mlc_llm/relax_model/mistral.py", line 489, in forward
key, value, updated_key_value = self.interleave_kv(
^^^^^^^^^^^^^^^^^^^
File "/data/mlc-llm/mlc_llm/relax_model/mistral.py", line 346, in interleave_kv
relax.call_pure_packed(
File "/data/conda/envs/mlc-chat-venv/lib/python3.11/site-packages/tvm/relax/utils.py", line 173, in wrapper
bound = sig.bind(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/conda/envs/mlc-chat-venv/lib/python3.11/inspect.py", line 3212, in bind
return self._bind(args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/conda/envs/mlc-chat-venv/lib/python3.11/inspect.py", line 3201, in _bind
raise TypeError(
TypeError: got an unexpected keyword argument 'args'
[00:12:53] /workspace/tvm/src/relax/ir/block_builder.cc:65: Warning: BlockBuilder destroyed with remaining blocks!
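For what it's worth, the TypeError looks like a signature mismatch: in recent TVM Unity, relax.call_pure_packed takes the packed function's operands positionally (followed by sinfo_args), so a call site that passes them through an args= keyword fails during signature binding, exactly as in the traceback above. A minimal sketch of the mismatch; the packed function name and operands below are hypothetical stand-ins, not the verbatim code from mistral.py:

from tvm import relax

cache = relax.Var("cache", relax.ObjectStructInfo())    # placeholder operands
new_kv = relax.Var("new_kv", relax.ObjectStructInfo())

# Old keyword style -- raises "got an unexpected keyword argument 'args'":
# relax.call_pure_packed(
#     "vm.builtin.some_kv_cache_op",
#     args=[cache, new_kv],
#     sinfo_args=[relax.ObjectStructInfo()],
# )

# Newer positional style binds cleanly:
call = relax.call_pure_packed(
    "vm.builtin.some_kv_cache_op",
    cache, new_kv,
    sinfo_args=[relax.ObjectStructInfo()],
)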
Expected behavior
build.py compiles the model successfully with CUTLASS disabled, and the compiled model generates a coherent response instead of repeating a single token.
Environment
Platform (e.g. WebGPU/Vulkan/IOS/Android/CUDA): CUDA
Operating system (e.g. Ubuntu/Windows/MacOS/...): Ubuntu 22.04
Device (e.g. iPhone 12 Pro, PC+RTX 3090, ...): A100
How you installed MLC-LLM (conda, source): CUDA 12.1 nightly wheel
How you installed TVM-Unity (pip, source): pip (nightly wheel)
Python version (e.g. 3.10): 3.11
GPU driver version (if applicable):
CUDA/cuDNN version (if applicable): 12.1
TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))", applicable if you compile models):
USE_NVTX: OFF
USE_GTEST: AUTO
SUMMARIZE: OFF
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU:
CUDA_VERSION: 12.1
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: OFF
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM:
USE_OPENCL_GTEST: /path/to/opencl/gtest
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_VITIS_AI: OFF
USE_MLIR: OFF
USE_RCCL: OFF
USE_LLVM: llvm-config --ignore-libllvm --link-static
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_NCCL: ON
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: 292137088115ac81779607ca223bbbd9ad40cb55
USE_VULKAN: ON
USE_RUST_EXT: OFF
USE_CUTLASS: ON
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2024-02-03 18:46:43 -0800
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: ON
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: 15.0.7
USE_OPENCL: OFF
COMPILER_RT_PATH: 3rdparty/compiler-rt
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_CUBLAS: ON
USE_METAL: OFF
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION:
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /opt/rh/gcc-toolset-11/root/usr/bin/c++
HIDE_PRIVATE_SYMBOLS: ON