mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

[Bug] Speculative decoding doesn't work on Vulkan (AMD iGPU) #3011

Open SkyHeroesS opened 2 weeks ago

SkyHeroesS commented 2 weeks ago

🐛 Bug

I tried to use Qwen1.5-0.5B-Chat as the draft model for Qwen1.5-7B-Chat, but with `speculative_mode="small_draft"` the engine gave no response. I also tried EAGLE-Qwen2-7B-Instruct with `speculative_mode="eagle"`, and it still produced no output. Switching to llama-2-7b-chat-hf made no difference: still no output. Inspecting the process, it appears to be stuck in `tvm\_ffi\_ctypes\packed_func.py`. After I added `max_num_sequence=spec_draft_length + 2` to `engine_config`, the hang turned into an error: `TVMError: Check failed: draft_token_indices->size() == num_sequence (2 vs. 1) :`.
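For reference, the "small_draft" setup described above looks roughly like the sketch below. It reuses the same `SyncMLCEngine`/`EngineConfig` API as the full EAGLE repro script further down; the Qwen 7B model and library paths are assumptions following the same `dist/` layout (only the 0.5B paths appear verbatim in this report):

```python
from mlc_llm.serve.sync_engine import EngineConfig, SyncMLCEngine

# Rough sketch of the "small_draft" configuration: Qwen1.5-0.5B-Chat drafting for
# Qwen1.5-7B-Chat. The 7B paths below are assumptions; only the 0.5B paths appear
# in the logs of this report.
model = "dist/Qwen1.5-7B-Chat-q4f16_1-MLC"
model_lib = "dist/libs/Qwen1.5-7B-Chat-q4f16_1-vulkan.dll"
draft_model = "dist/Qwen1.5-0.5B-Chat-q4f16_1-MLC"
draft_model_lib = "dist/libs/Qwen1.5-0.5B-Chat-q4f16_1-vulkan.dll"

engine = SyncMLCEngine(
    model=model,
    model_lib=model_lib,
    mode="local",
    engine_config=EngineConfig(
        additional_models=[(draft_model, draft_model_lib)],
        spec_draft_length=5,
        speculative_mode="small_draft",
    ),
)
```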

To Reproduce

Steps to reproduce the behavior:

1. Download the Qwen or Llama models.
2. Quantize the weights, run gen_config, and compile the model library.
3. Run the following sample to reproduce:

```python
from mlc_llm.serve.sync_engine import EngineConfig, SyncMLCEngine
from mlc_llm.protocol.generation_config import GenerationConfig

prompts = ["what is the meaning of life?"]

# Create engine
model = "dist/Llama-2-7b-chat-q4f16_1-MLC"
model_lib = "dist/libs/Llama-2-7b-chat-q4f16_1-vulkan.dll"
small_model = "dist/Eagle-Llama-2-7b-chat-q4f16_1-MLC"  # "dist/Qwen1.5-0.5B-Chat-q4f16_1-MLC"
small_model_lib = (
    "dist/libs/Eagle-Llama-2-7b-chat-q4f16_1-vulkan.dll"  # "dist/libs/Qwen1.5-0.5B-Chat-q4f16_1-vulkan.dll"
)
engine = SyncMLCEngine(
    model=model,
    model_lib=model_lib,
    mode="local",
    engine_config=EngineConfig(
        additional_models=[(small_model, small_model_lib)],
        spec_draft_length=5,
        max_num_sequence=7,
        speculative_mode="eagle",
    ),
)

num_requests = 1

# Generate output.
output_texts, _ = engine.generate(
    prompts[:num_requests],
    GenerationConfig(
        temperature=0.0, top_p=0, seed=42, max_tokens=256, stop_token_ids=[2], n=1
    ),
)
for req_id, outputs in enumerate(output_texts):
    print(f"Prompt {req_id}: {prompts[req_id]}")
    if len(outputs) == 1:
        print(f"Output {req_id}:{outputs[0]}\n")
    else:
        for i, output in enumerate(outputs):
            print(f"Output {req_id}({i}):{output}\n")
```

The error message is:

```
(mlc-chat-env) C:\Users\Administrator\Desktop>python mlc_spec.py
[2024-11-04 03:04:39] INFO auto_device.py:88: Not found device: cuda:0
[2024-11-04 03:04:40] INFO auto_device.py:88: Not found device: rocm:0
[2024-11-04 03:04:41] INFO auto_device.py:88: Not found device: metal:0
[2024-11-04 03:04:43] INFO auto_device.py:79: Found device: vulkan:0
[2024-11-04 03:04:44] INFO auto_device.py:88: Not found device: opencl:0
[2024-11-04 03:04:44] INFO auto_device.py:35: Using device: vulkan:0
[2024-11-04 03:04:44] INFO engine_base.py:143: Using library model: dist/libs/Llama-2-7b-chat-q4f16_1-vulkan.dll
[2024-11-04 03:04:44] INFO engine_base.py:143: Using library model: dist/libs/Eagle-Llama-2-7b-chat-q4f16_1-vulkan.dll
[2024-11-04 03:04:44] INFO engine_base.py:180: The selected engine mode is local. We choose small max batch size and KV cache capacity to use less GPU memory.
[2024-11-04 03:04:44] INFO engine_base.py:205: If you don't have concurrent requests and only use the engine interactively, please select mode "interactive".
[2024-11-04 03:04:44] INFO engine_base.py:210: If you have high concurrent requests and want to maximize the GPU memory utilization, please select mode "server".
[03:04:44] D:\a\package\package\mlc-llm\cpp\serve\config.cc:688: Under mode "local", max batch size 7 is specified by user, max KV cache token capacity will be set to 768, prefill chunk size will be set to 768.
[03:04:44] D:\a\package\package\mlc-llm\cpp\serve\config.cc:688: Under mode "interactive", max batch size 7 is specified by user, max KV cache token capacity will be set to 768, prefill chunk size will be set to 768.
[03:04:44] D:\a\package\package\mlc-llm\cpp\serve\config.cc:688: Under mode "server", max batch size 7 is specified by user, max KV cache token capacity will be set to 4989, prefill chunk size will be set to 768.
[03:04:44] D:\a\package\package\mlc-llm\cpp\serve\config.cc:769: The actual engine mode is "local". So max batch size is 7, max KV cache token capacity is 768, prefill chunk size is 768.
[03:04:44] D:\a\package\package\mlc-llm\cpp\serve\config.cc:774: Estimated total single GPU memory usage: 4568.668 MB (Parameters: 3812.023 MB. KVCache: 540.390 MB. Temporary buffer: 216.255 MB). The actual usage might be slightly larger than the estimated number.
[03:04:44] D:\a\package\package\mlc-llm\cpp\serve\engine.cc:365: Warning: Hybrid prefill mode fallbacks to chunked prefill, due to speculative mode is enabled and not implemented with hybrid prefill yet.
Traceback (most recent call last):
  File "C:\Users\Administrator\Desktop\mlc_spec.py", line 27, in <module>
    output_texts, _ = engine.generate(
    ^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\miniconda3\envs\mlc-chat-env\Lib\site-packages\mlc_llm\serve\sync_engine.py", line 283, in generate
    self.step()
  File "C:\Users\Administrator\miniconda3\envs\mlc-chat-env\Lib\site-packages\mlc_llm\serve\sync_engine.py", line 351, in step
    self._ffi["step"]()
  File "C:\Users\Administrator\miniconda3\envs\mlc-chat-env\Lib\site-packages\tvm\_ffi\_ctypes\packed_func.py", line 245, in __call__
    raise_last_ffi_error()
  File "C:\Users\Administrator\miniconda3\envs\mlc-chat-env\Lib\site-packages\tvm\_ffi\base.py", line 481, in raise_last_ffi_error
    raise py_err
tvm._ffi.base.TVMError: Traceback (most recent call last):
  File "D:\a\package\package\mlc-llm\cpp\serve\logit_processor.cc", line 126
TVMError: Check failed: draft_token_indices->size() == num_sequence (2 vs. 1) :
```

Expected behavior

The model generates a response to the provided prompt and the script prints it.
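Given the print loop in the repro script, a successful run would print something like the following (the completion text itself is illustrative):

```
Prompt 0: what is the meaning of life?
Output 0: <model-generated completion>
```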

Environment

Additional context

When I switch to "small_draft", the error message changes to the following:

```
(mlc-chat-env) C:\Users\Administrator\Desktop>python mlc_qs.py
[2024-11-04 03:06:21] INFO auto_device.py:88: Not found device: cuda:0
[2024-11-04 03:06:23] INFO auto_device.py:88: Not found device: rocm:0
[2024-11-04 03:06:24] INFO auto_device.py:88: Not found device: metal:0
[2024-11-04 03:06:25] INFO auto_device.py:79: Found device: vulkan:0
[2024-11-04 03:06:27] INFO auto_device.py:88: Not found device: opencl:0
[2024-11-04 03:06:27] INFO auto_device.py:35: Using device: vulkan:0
[2024-11-04 03:06:27] INFO engine_base.py:143: Using library model: dist/libs/Qwen1.5-0.5B-Chat-q4f16_1-vulkan.dll
[2024-11-04 03:06:27] INFO engine_base.py:143: Using library model: dist\libs\Qwen1.5-0.5B-Chat-q4f16_1-vulkan.dll
[2024-11-04 03:06:27] INFO engine_base.py:192: The selected engine mode is server. We use as much GPU memory as possible (within the limit of gpu_memory_utilization).
[2024-11-04 03:06:27] INFO engine_base.py:200: If you have low concurrent requests and want to use less GPU memory, please select mode "local".
[2024-11-04 03:06:27] INFO engine_base.py:205: If you don't have concurrent requests and only use the engine interactively, please select mode "interactive".
[03:06:27] D:\a\package\package\mlc-llm\cpp\serve\config.cc:688: Under mode "local", max batch size 7 is specified by user, max KV cache token capacity will be set to 1024, prefill chunk size will be set to 1024.
[03:06:27] D:\a\package\package\mlc-llm\cpp\serve\config.cc:688: Under mode "interactive", max batch size 7 is specified by user, max KV cache token capacity will be set to 1024, prefill chunk size will be set to 1024.
[03:06:27] D:\a\package\package\mlc-llm\cpp\serve\config.cc:688: Under mode "server", max batch size 7 is specified by user, max KV cache token capacity will be set to 7168, prefill chunk size will be set to 1024.
[03:06:27] D:\a\package\package\mlc-llm\cpp\serve\config.cc:769: The actual engine mode is "server". So max batch size is 7, max KV cache token capacity is 7168, prefill chunk size is 1024.
[03:06:27] D:\a\package\package\mlc-llm\cpp\serve\config.cc:774: Estimated total single GPU memory usage: 3241.175 MB (Parameters: 498.145 MB. KVCache: 1456.284 MB. Temporary buffer: 1286.746 MB). The actual usage might be slightly larger than the estimated number.
[03:06:27] D:\a\package\package\mlc-llm\cpp\serve\engine.cc:365: Warning: Hybrid prefill mode fallbacks to chunked prefill, due to speculative mode is enabled and not implemented with hybrid prefill yet.
Traceback (most recent call last):
  File "C:\Users\Administrator\Desktop\mlc_qs.py", line 667, in <module>
    test_engine_basic("dist/Qwen1.5-0.5B-Chat-q4f16_1-MLC", "dist/libs/Qwen1.5-0.5B-Chat-q4f16_1-vulkan.dll", ['dist\Qwen1.5-0.5B-Chat-q4f16_1-MLC', 'dist\libs\Qwen1.5-0.5B-Chat-q4f16_1-vulkan.dll'])
  File "C:\Users\Administrator\Desktop\mlc_qs.py", line 121, in test_engine_basic
    engine.step()
  File "C:\Users\Administrator\miniconda3\envs\mlc-chat-env\Lib\site-packages\mlc_llm\serve\sync_engine.py", line 351, in step
    self._ffi["step"]()
  File "C:\Users\Administrator\miniconda3\envs\mlc-chat-env\Lib\site-packages\tvm\_ffi\_ctypes\packed_func.py", line 245, in __call__
    raise_last_ffi_error()
  File "C:\Users\Administrator\miniconda3\envs\mlc-chat-env\Lib\site-packages\tvm\_ffi\base.py", line 481, in raise_last_ffi_error
    raise py_err
tvm._ffi.base.TVMError: Traceback (most recent call last):
  File "D:\a\package\package\mlc-llm\cpp\serve\engine_actions\batch_draft.cc", line 151
InternalError: Check failed: (!mstates[i]->draft_output_tokens.empty()) is false:
```