mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0
19.1k stars 1.57k forks

[Bug] Error while attempting to build PHI-3(128k) for use in MLC-LLM on the Orange Pi 5 Plus (RK3588) #2307

Closed mjsf12 closed 5 months ago

mjsf12 commented 5 months ago

🐛 Bug

I tried to build this to run the 128k Phi-3, so I could compare OpenCL (GPU) performance against pure CPU usage. Then I encountered the error below.

To Reproduce

Steps to reproduce the behavior:

  1. Build mlc-llm and tvm_unity using this and this
  2. Download Phi-3-mini-128k-instruct-q4f16_1-MLC
  3. Try to compile Phi-3: mlc_llm compile /home/mjsf12/mlc-llm/dist/prebuilt/Phi-3-mini-128k-instruct-q4f16_1-MLC/mlc-chat-config.json --device opencl -o /home/mjsf12/mlc-llm/dist/prebuilt/lib/phi3/Phi-3-mini-128k-instruct-q4f16_1-mali.so

Error:

mjsf12@orangepi5-plus:~$ ./mlc_llm.sh compile /home/mjsf12/mlc-llm/dist/prebuilt/Phi-3-mini-128k-instruct-q4f16_1-MLC/mlc-chat-config.json --device opencl -o /home/mjsf12/mlc-llm/dist/prebuilt/lib/phi3/Phi-3-mini-128k-instruct-q4f16_1-mali.so
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/mjsf12/mlc-llm/python/mlc_llm/main.py", line 56, in <module>
    main()
  File "/home/mjsf12/mlc-llm/python/mlc_llm/main.py", line 23, in main
    from mlc_llm.cli import compile as cli
  File "/home/mjsf12/mlc-llm/python/mlc_llm/cli/compile.py", line 10, in <module>
    from mlc_llm.interface.compile import (  # pylint: disable=redefined-builtin
  File "/home/mjsf12/mlc-llm/python/mlc_llm/interface/compile.py", line 10, in <module>
    from tvm import IRModule, relax, tir
  File "/home/mjsf12/tvm_unity/python/tvm/relax/__init__.py", line 66, in <module>
    from .op.base import (
  File "/home/mjsf12/tvm_unity/python/tvm/relax/op/__init__.py", line 21, in <module>
    from . import _op_gradient, builtin, ccl, distributed, grad, image, memory, nn, op_attrs
  File "/home/mjsf12/tvm_unity/python/tvm/relax/op/_op_gradient.py", line 130, in <module>
    def add_grad(
  File "/home/mjsf12/tvm_unity/python/tvm/ir/op.py", line 241, in _register
    _ffi_api.RegisterOpAttr(op_name, attr_key, v, level)
AttributeError: module 'tvm.ir._ffi_api' has no attribute 'RegisterOpAttr'

Running again with --model-type phi3 produced the identical traceback:

mjsf12@orangepi5-plus:~$ ./mlc_llm.sh compile /home/mjsf12/mlc-llm/dist/prebuilt/Phi-3-mini-128k-instruct-q4f16_1-MLC/mlc-chat-config.json --device opencl --model-type phi3 -o /home/mjsf12/mlc-llm/dist/prebuilt/lib/phi3/Phi-3-mini-128k-instruct-q4f16_1-mali.so
(same traceback as above, ending in the same AttributeError)
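An AttributeError like this on `tvm.ir._ffi_api` usually means the imported libtvm was built without the compiler-side symbols (e.g., a runtime-only build). A small diagnostic sketch, assuming only a standard TVM source-tree layout; `tvm.support.libinfo()` returns the build flags baked into the shared library:

```python
import importlib.util


def tvm_build_report() -> str:
    """Report which TVM is on the path and whether it was built with LLVM."""
    if importlib.util.find_spec("tvm") is None:
        return "tvm is not importable; check TVM_HOME/PYTHONPATH"
    import tvm  # deferred so the function still runs when tvm is absent

    info = tvm.support.libinfo()  # dict of build-time flags compiled into libtvm
    return f"tvm at {tvm.__file__}; USE_LLVM={info.get('USE_LLVM', 'unknown')}"


if __name__ == "__main__":
    print(tvm_build_report())
```

If `USE_LLVM` reports `OFF` (as in the environment dump below), the compiler half of TVM is missing even though the OpenCL runtime is present.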

Expected behavior

I expected to generate a library to run in the MLCEngine to perform some benchmarks.

Environment

USE_NVTX: OFF
USE_GTEST: AUTO
SUMMARIZE: OFF
TVM_DEBUG_WITH_ABI_CHANGE: OFF
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU: OFF
CUDA_VERSION: NOT-FOUND
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: OFF
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM: OFF
USE_OPENCL_GTEST: /path/to/opencl/gtest
TVM_LOG_BEFORE_THROW: OFF
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_MSCCL: OFF
USE_VITIS_AI: OFF
USE_MLIR: OFF
USE_RCCL: OFF
USE_LLVM: OFF
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_NCCL: OFF
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: ced07e88781c0d6416e276d9cd084bb46aaf3da5
USE_VULKAN: OFF
USE_RUST_EXT: OFF
USE_CUTLASS: OFF
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2024-04-25 21:07:15 -0400
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: OFF
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: NOT-FOUND
USE_MRVL: OFF
USE_OPENCL: ON
COMPILER_RT_PATH: 3rdparty/compiler-rt
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: none
USE_BNNS: OFF
USE_FLASHINFER: OFF
USE_CUBLAS: OFF
USE_METAL: OFF
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION:
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /usr/bin/c++
HIDE_PRIVATE_SYMBOLS: OFF

Additional context

mlc_llm.sh:

#!/bin/bash
export TVM_HOME=/home/mjsf12/tvm_unity
export MLC_LLM_HOME=/home/mjsf12/mlc-llm
export PYTHONPATH=$TVM_HOME/python:$MLC_LLM_HOME/python:${PYTHONPATH}
python3 -m mlc_llm "$@"

I apologize for any confusion with my English; it's still a work in progress, and I had help from LLMs.

tqchen commented 5 months ago

Please make sure you compiled TVM completely by following the instructions, instead of building just the runtime part.
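For a full build, TVM's from-source instructions have you enable the relevant backends in `config.cmake` before running CMake. A sketch of the lines that matter for this setup (an excerpt, not a full config; the llvm-config path is illustrative):

```cmake
# build/config.cmake — excerpt (illustrative)
set(USE_OPENCL ON)   # Mali GPU on the RK3588 via OpenCL
set(USE_LLVM ON)     # needed for the compiler, not just the runtime
# or point USE_LLVM at a specific llvm-config binary, e.g.:
# set(USE_LLVM /usr/bin/llvm-config)
```

With `USE_LLVM OFF`, the build produces only the runtime pieces, which matches the `RegisterOpAttr` AttributeError reported above.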

mjsf12 commented 5 months ago

It was missing LLVM, I installed the dependency and rebuilt, and now it seems to work.

ollmer commented 5 months ago

@mjsf12, have you collected performance metrics for both CPU and GPU modes? It would be very interesting to see.