Hello team,
Thanks for creating such an amazing engine. I ran Llama-3-8B-Instruct-q4f16_1-MLC in server mode with different batch sizes (2-128), but my requests still appear to be processed sequentially. In interactive chat mode the model runs at ~80 t/s on a single MI60, which is great, but with batch inference I expect aggregate throughput to be well above 80 t/s.
I started with `mlc_llm serve HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC --mode server`, but it was too slow. I then experimented with different options until I found the following one, which ran at around 65 t/s when batching:
mlc_llm serve HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC --overrides "prefill_chunk_size=2048;max_num_sequence=128;context_window_size=4096" --mode server
I downloaded a benchmarking repo (MMLU-Pro) that works against any generic OpenAI-compatible API: git clone https://github.com/chigkim/Ollama-MMLU-Pro.git
Then I ran the batch benchmark: python3 run_openai.py --url http://127.0.0.1:8000/v1 --category 'computer science' --verbosity 0 --parallel 64 --model HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC
Each question is under 4096 tokens and took about 2 s on average to complete. It seems like mlc-llm is not doing batch inference, even though it uses over 85% of the 32 GB of VRAM. For comparison, I ran the same commands (except for one that was CUDA-specific) on an RTX 3090 and got over 700 t/s from mlc-llm on the same benchmark, so there is no issue with the benchmark or with server mode on the CUDA backend. Only the AMD/ROCm backend appears not to batch.
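To rule out the benchmark harness itself, a minimal parallel-request probe along these lines should reproduce the issue (a sketch only, assuming the `openai` Python package and the server started with the command above; the prompt, request count, and max_tokens are arbitrary):

```python
# Minimal concurrency probe: fire N identical chat requests at the OpenAI-compatible
# endpoint and report wall-clock time plus aggregate completion tokens/s.
# Sketch only -- the model string should match whatever the server expects.
import time
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="none")
MODEL = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"
N = 64  # same parallelism as the MMLU-Pro run

def one_request(i: int) -> int:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": "Explain the birthday paradox briefly."}],
        max_tokens=256,
    )
    return resp.usage.completion_tokens

start = time.time()
with ThreadPoolExecutor(max_workers=N) as pool:
    total_tokens = sum(pool.map(one_request, range(N)))
elapsed = time.time() - start
print(f"{N} requests, {total_tokens} completion tokens in {elapsed:.1f}s "
      f"-> {total_tokens / elapsed:.1f} tok/s aggregate")
```

If the engine were batching, the aggregate tok/s here should be several times the single-stream number; on the MI60 it stays around the single-stream rate.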
Here is the server output I get on the AMD MI60 when running batch inference at ~65 t/s:
```text
mlc_llm serve HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC --overrides "prefill_chunk_size=2048;max_num_sequence=64;context_window_size=4096" --mode server
[2024-10-21 20:23:41] INFO auto_device.py:88: Not found device: cuda:0
[2024-10-21 20:23:42] INFO auto_device.py:79: Found device: rocm:0
[2024-10-21 20:23:42] INFO auto_device.py:79: Found device: rocm:1
[2024-10-21 20:23:43] INFO auto_device.py:88: Not found device: metal:0
[2024-10-21 20:23:44] INFO auto_device.py:79: Found device: vulkan:0
[2024-10-21 20:23:44] INFO auto_device.py:79: Found device: vulkan:1
[2024-10-21 20:23:44] INFO auto_device.py:79: Found device: vulkan:2
[2024-10-21 20:23:44] INFO auto_device.py:79: Found device: vulkan:3
[2024-10-21 20:23:45] INFO auto_device.py:88: Not found device: opencl:0
[2024-10-21 20:23:45] INFO auto_device.py:35: Using device: rocm:0
[2024-10-21 20:23:45] INFO download_cache.py:227: Downloading model from HuggingFace: HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC
[2024-10-21 20:23:45] INFO download_cache.py:29: MLC_DOWNLOAD_CACHE_POLICY = ON. Can be one of: ON, OFF, REDO, READONLY
[2024-10-21 20:23:45] INFO download_cache.py:166: Weights already downloaded: /home/saidp/.cache/mlc_llm/model_weights/hf/mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC
[2024-10-21 20:23:45] INFO jit.py:43: MLC_JIT_POLICY = ON. Can be one of: ON, OFF, REDO, READONLY
[2024-10-21 20:23:45] INFO jit.py:158: Using cached model lib: /home/saidp/.cache/mlc_llm/model_lib/c53c4a7c987b8d7ea642bf287fbe03f6.so
[2024-10-21 20:23:45] INFO engine_base.py:192: The selected engine mode is server. We use as much GPU memory as possible (within the limit of gpu_memory_utilization).
[2024-10-21 20:23:45] INFO engine_base.py:200: If you have low concurrent requests and want to use less GPU memory, please select mode "local".
[2024-10-21 20:23:45] INFO engine_base.py:205: If you don't have concurrent requests and only use the engine interactively, please select mode "interactive".
[20:23:45] /workspace/mlc-llm/cpp/serve/config.cc:688: Under mode "local", max batch size 64 is specified by user, max KV cache token capacity will be set to 4096, prefill chunk size 2048 is specified by user.
[20:23:45] /workspace/mlc-llm/cpp/serve/config.cc:688: Under mode "interactive", max batch size 64 is specified by user, max KV cache token capacity will be set to 4096, prefill chunk size 2048 is specified by user.
[20:23:45] /workspace/mlc-llm/cpp/serve/config.cc:688: Under mode "server", max batch size 64 is specified by user, max KV cache token capacity will be set to 179536, prefill chunk size 2048 is specified by user.
[20:23:45] /workspace/mlc-llm/cpp/serve/config.cc:769: The actual engine mode is "server". So max batch size is 64, max KV cache token capacity is 179536, prefill chunk size is 2048.
[20:23:45] /workspace/mlc-llm/cpp/serve/config.cc:774: Estimated total single GPU memory usage: 27839.176 MB (Parameters: 4308.133 MB. KVCache: 22652.282 MB. Temporary buffer: 878.761 MB). The actual usage might be slightly larger than the estimated number.
INFO: Started server process [40316]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:57132 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:57012 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:57066 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:57038 - "POST /v1/chat/completions HTTP/1.1" 200 OK
...
```
Expected behavior
I expect batch inference to run at least twice as fast as interactive inference.
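To put that expectation in numbers (just a back-of-envelope using the figures quoted above, not a measurement):

```python
# Rough comparison based on the throughput numbers reported in this issue.
mi60_interactive_tps = 80    # single-stream, interactive mode on one MI60
mi60_batch_tps = 65          # server mode, --parallel 64 (this issue)
rtx3090_batch_tps = 700      # same benchmark on the CUDA backend

print(f"MI60 batch / MI60 interactive: {mi60_batch_tps / mi60_interactive_tps:.2f}x")      # ~0.81x, i.e. batching is a slowdown
print(f"3090 batch / MI60 interactive: {rtx3090_batch_tps / mi60_interactive_tps:.2f}x")   # ~8.75x
```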
Device (e.g. iPhone 12 Pro, PC+RTX 3090, ...): 2xAMD MI60, 1xRTX 3090 (I used only 1xMI60 for batching)
How you installed MLC-LLM (conda, source): python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-rocm62 mlc-ai-nightly-rocm62
How you installed TVM-Unity (pip, source): python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-rocm62 mlc-ai-nightly-rocm62
Python version (e.g. 3.10): Python 3.10.12
GPU driver version (if applicable): NA
CUDA/cuDNN version (if applicable): CUDA Version: 12.4 (only for comparison with RTX 3090)
TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))", applicable if you compile models):
```text
python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))"
USE_NVTX: OFF
USE_GTEST: AUTO
SUMMARIZE: OFF
TVM_DEBUG_WITH_ABI_CHANGE: OFF
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU:
CUDA_VERSION: NOT-FOUND
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: OFF
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM:
USE_OPENCL_GTEST: /path/to/opencl/gtest
TVM_LOG_BEFORE_THROW: OFF
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_MSCCL: OFF
USE_NNAPI_RUNTIME: OFF
USE_VITIS_AI: OFF
USE_MLIR: OFF
USE_RCCL: /opt/rocm/
USE_LLVM: /opt/rocm/llvm/bin/llvm-config --ignore-libllvm --link-static
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_NCCL: OFF
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: dc87019cb805d0a1f0075f6415cc979ef337ec2a
USE_VULKAN: ON
USE_RUST_EXT: OFF
USE_CUTLASS: OFF
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2024-09-28 00:31:12 -0400
USE_HIPBLAS: ON
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: OFF
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: 18.0.0git
USE_MRVL: OFF
USE_OPENCL: OFF
COMPILER_RT_PATH: 3rdparty/compiler-rt
USE_NNAPI_CODEGEN: OFF
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_FLASHINFER:
USE_CUBLAS: OFF
USE_METAL: OFF
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_NVSHMEM: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION:
USE_MIOPEN: OFF
USE_ROCM: ON
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /opt/rh/gcc-toolset-11/root/usr/bin/c++
HIDE_PRIVATE_SYMBOLS: ON
```
Any other relevant information: AMD 5950X CPU, 96 GB 3200 MHz RAM.